1.
Robert Alicki 《Open Systems & Information Dynamics》2007,14(2):223-228
We compare two physical systems: polarization degrees of freedom of a macroscopic light beam and the Josephson junction (JJ)
in the “charge qubit regime”. The first system obviously cannot carry genuine quantum information and we show that the maximal
entanglement which could be encoded into polarization of two light beams scales like 1/(photon number). Two theories of JJ,
one leading to the picture of “JJ-qubit” and the other based on the mean-field approach are discussed. The latter, which seems
to be more appropriate, implies that the JJ system is, essentially, mathematically equivalent to the polarization of a light
beam with the number of photons replaced by the number of Cooper pairs. The existing experiments consistent with the “JJ-qubit”
picture and the theoretical arguments supporting, on the contrary, the classical model are briefly discussed. The Franck-Hertz-type
experiment is suggested as an ultimate test of the JJ nature.
2.
Robert Alicki 《Open Systems & Information Dynamics》2006,13(2):113-117
Using a few very general axioms which should be satisfied by any reasonable theory consistent with the Second Law of Thermodynamics
we argue that: a) “no-cloning theorem” is meaningful for a very general theoretical scheme including both quantum and classical
models, b) in order to describe self-replication, Wigner’s “cloning” process should be replaced by a more general “broadcasting”,
c) “separation of species” is possible only in a non-homogeneous environment, d) “parent” and “offspring” must be strongly
correlated. Motivated by the existing results on broadcasting which show that only classical information can self-replicate
perfectly we discuss briefly a classical toy model with “quantum features” — overlapping pure states and “entangled states”
for composite systems.
3.
Paul C. Attie 《Formal Methods in System Design》2011,39(1):1-46
We present a new approach, based on simulation relations, for reasoning about liveness properties of distributed systems.
Our contribution consists of (1) a formalism for defining liveness properties, (2) a proof method for liveness properties
based on that formalism, and (3) two expressive completeness results: our formalism can express any liveness property which
satisfies a natural “robustness” condition; and also any liveness property at all, provided that history variables can be
used. To define liveness, we generalize complemented-pairs (Streett) automata to an infinite state-space, and an infinite
number of complemented-pairs. Our proof method provides two techniques: one for refining liveness properties across levels
of abstraction, and another for refining liveness properties within a level of abstraction. The first is based on extending
simulation relations so that they relate the liveness properties of an abstract automaton to those of a concrete automaton.
The second is based on a deductive method for inferring new liveness properties of an automaton from already established liveness
properties of the same automaton. This deductive method is diagrammatic, and is based on constructing “lattices” of liveness
properties.
4.
Christian Licoppe 《Computer Supported Cooperative Work (CSCW)》2006,15(2-3):123-148
Our case study explores the extent to which a “Distributed Cognition”-like ethnographic approach can be used to analyze situations which are not at first sight compatible with the precepts of computational cognition. In the first part of the paper, we analyze the collective listening of phone calls in a helpline. We show why collective listening can be considered a “distributed collective practice”, with a mode of coordination based on repeated verbal re-enactments of difficult phone calls, rather than upon the discrete computational steps normally assumed in the standard model. In the second part of the paper, we analyze the organizational and interactional learning which takes place when collective listening is re-mediated by using e-mail exchanges rather than telephone conversations to communicate distress. Our conclusion critically discusses the viability of the distribution model in a context of collective listening.
5.
Jesper M. Johansson 《Information Technology and Management》2000,1(3):183-194
Research in distributed database systems to date has assumed a “variable cost” model of network response time. However, network
response time has two components: transmission time (variable with message size) and latency (fixed). This research improves
on existing models by incorporating a “fixed plus variable cost” model of the network response time. In this research, we:
(1) develop a distributed database design approach that incorporates a “fixed plus variable cost” network response time function;
(2) run a set of experiments to create designs using this model, and
(3) evaluate the impact the new model had on the design in various types of networks.
This revised version was published online in July 2006 with corrections to the Cover Date.
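The two cost models contrasted above can be sketched in a few lines; the bandwidth and latency figures in the example are illustrative, not values from the paper:

```python
def variable_cost_time(message_bytes, bandwidth_bps):
    """Classic model: response time varies with message size only."""
    return message_bytes * 8 / bandwidth_bps

def fixed_plus_variable_time(message_bytes, bandwidth_bps, latency_s):
    """Improved model: a fixed latency term plus the size-dependent term."""
    return latency_s + message_bytes * 8 / bandwidth_bps

# For small messages the fixed term dominates, which is why the two models
# can favor very different distributed database designs.
small_variable = variable_cost_time(100, 1_000_000)               # size term only
small_fixed = fixed_plus_variable_time(100, 1_000_000, 0.05)      # latency dominates
```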
6.
Ulrich Reffle Annette Gotscharek Christoph Ringlstetter Klaus U. Schulz 《International Journal on Document Analysis and Recognition》2009,12(3):165-174
The detection and correction of false friends—also called real-word errors—is a notoriously difficult problem. On realistic
data, the break-even point for automatic correction so far could not be reached: the number of additional infelicitous corrections
outnumbered the useful corrections. We present a new approach where we first compute a profile of the error channel for the
given text. During the correction process, the profile (1) helps to restrict attention to a small set of “suspicious” lexical
tokens of the input text where it is “plausible” to assume that the token represents a false friend. In this way, recognition
of false friends is improved. Furthermore, the profile (2) helps to isolate the “most promising” correction suggestion for
“suspicious” tokens. Using conventional word trigram statistics for disambiguation we obtain a correction method that can
be successfully applied to unrestricted text. In experiments for OCR documents, we show significant accuracy gains by fully
automatic correction of false friends.
7.
Prediction of compressive and tensile strength of Gaziantep basalts via neural networks and gene expression programming
In this paper, two soft computing approaches, artificial neural networks and Gene Expression Programming (GEP), are used in strength prediction of basalts collected from the Gaziantep region in Turkey. The collected basalt samples are tested in the geotechnical engineering laboratory of the University of Gaziantep. The parameters, “ultrasound
pulse velocity”, “water absorption”, “dry density”, “saturated density”, and “bulk density” which are experimentally determined
based on the procedures given in ISRM (Rock characterisation testing and monitoring. Pergamon Press, Oxford, 1981) are used
to predict “uniaxial compressive strength” and “tensile strength” of Gaziantep basalts. It is found that neural networks
are quite effective in comparison to GEP and classical regression analyses in predicting the strength of the basalts. The
results obtained are also useful in characterizing the Gaziantep basalts for practical applications.
8.
W. M. P. van der Aalst V. Rubin H. M. W. Verbeek B. F. van Dongen E. Kindler C. W. Günther 《Software and Systems Modeling》2010,9(1):87-111
Process mining includes the automated discovery of processes from event logs. Based on observed events (e.g., activities being
executed or messages being exchanged) a process model is constructed. One of the essential problems in process mining is that
one cannot assume to have seen all possible behavior. At best, one has seen a representative subset. Therefore, classical synthesis techniques are not suitable as they aim at
finding a model that is able to exactly reproduce the log. Existing process mining techniques try to avoid such “overfitting” by generalizing the model to allow for more behavior.
This generalization is often driven by the representation language and very crude assumptions about completeness. As a result,
parts of the model are “overfitting” (allow only for what has actually been observed) while other parts may be “underfitting”
(allow for much more behavior without strong support for it). None of the existing techniques enables the user to control
the balance between “overfitting” and “underfitting”. To address this, we propose a two-step approach. First, using a configurable
approach, a transition system is constructed. Then, using the “theory of regions”, the model is synthesized. The approach
has been implemented in the context of ProM and overcomes many of the limitations of traditional approaches.
9.
Tomoko Itao Satoshi Tanaka Tatsuya Suda Atsushi Yamamoto 《International Journal on Digital Libraries》2006,6(3):270-279
We describe a mechanism called SpaceGlue for adaptively locating services based on the preferences and locations of users
in a distributed and dynamic network environment. In SpaceGlue, services are bound to physical locations, and a mobile user
accesses local services depending on the current space he/she is visiting. SpaceGlue dynamically identifies the relationships
between different spaces and links or “glues” spaces together depending on how previous users moved among them and used those
services. Once spaces have been glued, users receive a recommendation of remote services (i.e., services provided in a remote
space) reflecting the preferences of the crowd of users visiting the area. The strengths of bonds are implicitly evaluated
by users and adjusted by the system on the basis of their evaluation. SpaceGlue is an alternative to existing schemes such
as data mining and recommendation systems and it is suitable for distributed and dynamic environments. The bonding algorithm
for SpaceGlue incrementally computes the relationships or “bonds” between different spaces in a distributed way. We implemented
SpaceGlue using a distributed network application platform Ja-Net and evaluated it by simulation to show that it adaptively
locates services reflecting trends in user preferences. Using “Mutual Information (MI)” and “F-measure” as measures of the level of such trends and of the accuracy of service recommendation, the simulation results showed that (1) in SpaceGlue, the F-measure increases with the level of MI (i.e., the more significant the trends, the greater the F-measure values), (2) SpaceGlue achieves better precision and F-measure than the “Flooding case” (i.e., all service information is broadcast to everyone) and the “No glue case” by narrowing down the appropriate partners to send recommendations to based on bonds, and (3) SpaceGlue achieves a better F-measure with a large number of spaces and users than the other cases (i.e., “flooding” and “no glue”).
Tomoko Itao is an alumna of NTT Network Innovation Laboratories.
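For reference, the F-measure used in the evaluation above is the standard weighted harmonic mean of precision and recall; a minimal sketch (not code from the paper):

```python
def f_measure(precision, recall, beta=1.0):
    """Weighted harmonic mean of precision and recall (F1 when beta = 1)."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# A recommender that floods everyone gains recall but loses precision, and
# the harmonic mean punishes that imbalance -- the comparison the
# simulations exploit.
```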
10.
Rens Bod 《Minds and Machines》2007,17(1):47-66
This paper deals with the problem of derivational redundancy in scientific explanation, i.e. the problem that there can be
extremely many different explanatory derivations for a natural phenomenon while students and experts mostly come up with one
and the same derivation for a phenomenon (modulo the order of applying laws). Given this agreement among humans, we need to
have a story of how to select from the space of possible derivations of a phenomenon the derivation that humans come up with.
In this paper we argue that the problem of derivational redundancy can be solved by a new notion of “shortest derivation”,
by which we mean the derivation that can be constructed by the fewest (and therefore largest) partial derivations of previously
derived phenomena that function as “exemplars”. We show how the exemplar-based framework known as “Data-Oriented Parsing”
or “DOP” can be employed to select the shortest derivation in scientific explanation. DOP’s shortest derivation of a phenomenon
maximizes what is called the “derivational similarity” between a phenomenon and a corpus of exemplars. A preliminary investigation
with exemplars from classical and fluid mechanics shows that the shortest derivation closely corresponds to the derivations
that humans construct. Our approach also proposes a concrete solution to Kuhn’s problem of how we know on which exemplar a
phenomenon can be modeled. We argue that humans model a phenomenon on the exemplar that is derivationally most similar to
the phenomenon, i.e. the exemplar from which the largest subtree(s) can be used to derive the phenomenon.
11.
About the Collatz conjecture
This paper refers to the Collatz conjecture. The origin and the formalization of the Collatz problem are presented in the
first section, named “Introduction”. In the second section, entitled “Properties of the Collatz function”, we treat mainly
the bijectivity of the Collatz function. Using the obtained results, we construct a (set of) binary tree(s) which “simulate(s)” – in a way that will be specified – the computations of the values of the Collatz function. In the third section, we give an “efficient” algorithm for computing the number of iterations (recursive calls) of the Collatz function. A comparison between our algorithm and the standard one is also presented, the first being at least 2.25 times “faster” (3.00 on average). Finally, we
describe a class of natural numbers for which the conjecture is true.
Received 28 April 1997 / 10 June 1997
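For comparison, the standard (unoptimized) iteration count that the paper's “efficient” algorithm improves on can be written directly from the definition:

```python
def collatz_steps(n: int) -> int:
    """Number of iterations of the Collatz map -- n/2 if n is even,
    3n + 1 if n is odd -- until n reaches 1 (the standard baseline count)."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# e.g. 6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1 takes 8 steps
```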
12.
Ryuzo Azuma Ryo Umetsu Shingo Ohki Fumikazu Konishi Sumi Yoshikawa Akihiko Konagaya Kazumi Matsumura 《New Generation Computing》2007,25(4):425-441
This paper proposes a novel approach to the analysis and validation of mathematical models using two-dimensional geometrical
patterns representing parameter-parameter dependencies (PPD) in dynamic systems. A geometrical pattern is obtained by calculating
moment values, such as the area under the curve (AUC), area under the moment curve (AUMC), and mean residence time (MRT),
for a series of simulations with a wide range of parameter values. In a mathematical model of the metabolic pathways of the
cancer drug irinotecan (CPT11), geometrical patterns can be classified into three major categories:
“independent,” “hyperbolic,” and “complex.” These categories characterize substructures arising in differential equations,
and are helpful for understanding the behavior of large-scale mathematical models. The Open Bioinformatics Grid (OBIGrid)
provides a cyber-infrastructure for users to share these data as well as computational resources.
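The moment values named above are standard pharmacokinetic quantities. Under the usual definitions (AUC = ∫C dt, AUMC = ∫t·C dt, MRT = AUMC/AUC) they can be computed from one simulated time course as follows; this is an illustrative sketch, not the paper's code:

```python
def trapz(ys, ts):
    """Trapezoidal integral of samples ys taken at times ts."""
    return sum((ts[i + 1] - ts[i]) * (ys[i] + ys[i + 1]) / 2
               for i in range(len(ys) - 1))

def moments(concentration, times):
    """Return (AUC, AUMC, MRT) for a concentration-time course."""
    auc = trapz(concentration, times)
    aumc = trapz([t * c for t, c in zip(times, concentration)], times)
    return auc, aumc, aumc / auc
```

Sweeping two parameters over a grid and plotting one of these moments per grid point is what produces the two-dimensional PPD patterns the abstract classifies.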
13.
Raz Tamir Yehuda Singer 《The VLDB Journal The International Journal on Very Large Data Bases》2006,15(1):40-52
This article presents a new interestingness measure for association rules called confidence gain (CG). Focus is given to extraction
of human associations rather than associations between market products. There are two main differences between the two (human
and market associations). The first difference is the strong asymmetry of human associations (e.g., the association “shampoo”
→ “hair” is much stronger than “hair” → “shampoo”), where in market products asymmetry is less intuitive and less evident.
The second is the background knowledge humans employ when presented with a stimulus (input phrase).
CG calculates the local confidence of a given term compared to its average confidence throughout a given database. CG is found
to outperform several association measures since it captures both the asymmetric notion of an association (as in the confidence
measure) while adding the comparison to an expected confidence (as in the lift measure). The use of average confidence introduces
the “background knowledge” notion into the CG measure.
Various experiments have shown that CG and local confidence gain (a low-complexity version of CG) successfully generate association
rules when compared to human free associations. The experiments include a large-scale “free association Turing test” where
human free associations were compared to associations generated by the CG and other association measures. Rules discovered
by CG were found to be significantly better than those discovered by other measures.
CG can be used for many purposes, such as personalization, sense disambiguation, query expansion, and improving classification
performance of small item sets within large databases.
Although CG was found to be useful for Internet data retrieval, results can be easily used over any type of database.
Edited by J. Srivastava
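The intuition behind CG can be sketched as follows; this is an illustrative reading of “local confidence compared to average confidence”, not the paper's exact formulation:

```python
def confidence(transactions, a, b):
    """conf(a -> b): fraction of transactions containing a that also contain b."""
    with_a = [t for t in transactions if a in t]
    return sum(1 for t in with_a if b in t) / len(with_a) if with_a else 0.0

def confidence_gain(transactions, a, b):
    """Sketch: local confidence of a -> b relative to the average confidence
    with which b follows other items in the database (illustrative only)."""
    others = {x for t in transactions for x in t if x not in (a, b)}
    if not others:
        return confidence(transactions, a, b)
    avg = sum(confidence(transactions, x, b) for x in others) / len(others)
    return confidence(transactions, a, b) / avg if avg else 0.0

# The asymmetry the article stresses: "shampoo" -> "hair" is much stronger
# than "hair" -> "shampoo" in this toy database.
data = [{"shampoo", "hair"}, {"hair"}, {"hair", "brush"}]
```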
14.
Tatsuyuki Kawamura Tomohiro Fukuhara Hideaki Takeda Yasuyuki Kono Masatsugu Kidode 《Personal and Ubiquitous Computing》2007,11(4):287-298
In this paper we propose an object-triggered human memory augmentation system named “Ubiquitous Memories” that enables a user
to directly associate his/her experience data with physical objects by using a “touching” operation. A user conceptually encloses
his/her experiences gathered through sense organs into physical objects by simply touching an object. The user can also disclose
and re-experience for himself/herself the experiences accumulated in an object by the same operation. We implemented a prototype
system composed basically of a radio frequency identification (RFID) device. Physical objects are also attached to RFID tags.
We conducted two experiments. The first confirms that the “encoding specificity principle,” well known in psychology, carries over to the Ubiquitous Memories system. The second aims to clarify the system’s characteristics by comparing it with other memory externalization strategies. The results show that the
Ubiquitous Memories system is effective for supporting memorization and recollection of contextual events.
15.
Two new modeling and simulation approaches for Simultaneous Switching Noise (SSN) are described and compared to “brute force”
simulation by SPICE. Both simulation accuracy and simulation run-time are considered. The two new approaches are: 1) the “effective
inductance” method, in which an approximate, very efficient method of extracting an SSN L_eff is utilized; and 2) the “macromodel” method, in which the complex inductance network responsible for SSN is represented by only a few dominant poles in the frequency domain and the time domain response is obtained by an efficient convolution algorithm.
Both approaches are shown to be accurate and fast, but only the effective inductance algorithm is robust in numerical convergence.
Received: 19 March 1997 / Accepted: 25 March 1997
16.
We address the problem of detecting irregularities in visual data, e.g., detecting suspicious behaviors in video sequences,
or identifying salient patterns in images. The term “irregular” depends on the context in which the “regular” or “valid” are
defined. Yet, it is not realistic to expect explicit definition of all possible valid configurations for a given context.
We pose the problem of determining the validity of visual data as a process of constructing a puzzle: We try to compose a
new observed image region or a new video segment (“the query”) using chunks of data (“pieces of puzzle”) extracted from previous
visual examples (“the database”). Regions in the observed data which can be composed using large contiguous chunks of data
from the database are considered very likely, whereas regions in the observed data which cannot be composed from the database
(or can be composed, but only using small fragmented pieces) are regarded as unlikely/suspicious. The problem is posed as
an inference process in a probabilistic graphical model. We show applications of this approach to identifying saliency in
images and video, for detecting suspicious behaviors and for automatic visual inspection for quality assurance.
Patent Pending
17.
In recent years, on-demand transport systems (such as demand-bus systems) have attracted attention as a new transport service in Japan. An on-demand vehicle visits pick-up and delivery points door-to-door according to the occurrence of requests. This service can be regarded as a cooperative (or competitive) profit problem among transport vehicles. Thus, decision-making for the problem is an important factor for the profits of vehicles (i.e., drivers). However, it is difficult to find an optimal solution
of the problem, because there are some uncertain risks, e.g., the occurrence probability of requests and the selfishness of
other rival vehicles. Therefore, this paper proposes a transport policy for on-demand vehicles to control the uncertain risks.
First, we classify the profit of vehicles as “assured profit” and “potential profit”. Second, we propose a “profit policy”
and “selection policy” based on the classification of the profits. Moreover, the selection policy can be classified into “greed”,
“mixed”, “competitive”, and “cooperative”. These selection policies are represented by selection probabilities of the next
visit points to cooperate or compete with other vehicles. Finally, we report simulation results and analyze the effectiveness of our proposed policies.
18.
Classic distributed computing abstractions do not match well the reality of digital logic gates, which are the elementary
building blocks of Systems-on-Chip (SoCs) and other Very Large Scale Integrated (VLSI) circuits: Massively concurrent, continuous
computations undermine the concept of sequential processes executing sequences of atomic zero-time computing steps, and very
limited computational resources at gate-level make even simple operations prohibitively costly. In this paper, we introduce
a modeling and analysis framework based on continuous computations and zero-bit message channels, and employ this framework
for the correctness & performance analysis of a distributed fault-tolerant clocking approach for Systems-on-Chip (SoCs). Starting
out from a “classic” distributed Byzantine fault-tolerant tick generation algorithm, we show how to adapt it for direct implementation
in clockless digital logic, and rigorously prove its correctness and derive analytic expressions for worst case performance
metrics like synchronization precision and clock frequency. Rather than on absolute delay values, both the algorithm’s correctness
and the achievable synchronization precision depend solely on the ratio of certain path delays. Since these ratios can be
mapped directly to placement & routing constraints, there is typically no need for changing the algorithm when migrating to
a faster implementation technology and/or when using a slightly different layout in an SoC.
19.
Deadlock detection in distributed database systems: a new algorithm and a comparative performance analysis
Natalija Krivokapić Alfons Kemper Ehud Gudes 《The VLDB Journal The International Journal on Very Large Data Bases》1999,8(2):79-100
This paper attempts a comprehensive study of deadlock detection in distributed database systems. First, the two predominant
deadlock models in these systems and the four different distributed deadlock detection approaches are discussed. Afterwards,
a new deadlock detection algorithm is presented. The algorithm is based on dynamically creating deadlock detection agents (DDAs), each being responsible for detecting deadlocks in one connected component of the global wait-for-graph (WFG). The
DDA scheme is a “self-tuning” system: after an initial warm-up phase, dedicated DDAs will be formed for “centers of locality”,
i.e., parts of the system where many conflicts occur. A dynamic shift in locality of the distributed system will be responded
to by automatically creating new DDAs while the obsolete ones terminate. In this paper, we also compare the most competitive
representative of each class of algorithms suitable for distributed database systems based on a simulation model, and point
out their relative strengths and weaknesses. The extensive experiments we carried out indicate that our newly proposed deadlock
detection algorithm outperforms the other algorithms in the vast majority of configurations and workloads and, in contrast
to all other algorithms, is very robust with respect to differing load and access profiles.
Received December 4, 1997 / Accepted February 2, 1999
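The wait-for-graph cycle test that all such detectors build on can be sketched as a plain depth-first search; this is the basic centralized check, not the paper's distributed DDA scheme:

```python
def find_deadlock(wait_for):
    """Detect a cycle in a wait-for graph given as {txn: set of txns it
    waits for}. A cycle in the WFG means a deadlock exists."""
    GRAY, BLACK = 1, 2   # GRAY = on current DFS path, BLACK = fully explored
    color = {}

    def dfs(u):
        color[u] = GRAY
        for v in wait_for.get(u, ()):
            if color.get(v) == GRAY:
                return True            # back edge: cycle found
            if color.get(v) != BLACK and dfs(v):
                return True
        color[u] = BLACK
        return False

    return any(color.get(u) is None and dfs(u) for u in wait_for)
```

The DDA approach partitions this work: one agent runs the check per connected component of the global WFG, so most detection stays local to a “center of locality”.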
20.
Nonlocal Image and Movie Denoising
Antoni Buades Bartomeu Coll Jean-Michel Morel 《International Journal of Computer Vision》2008,76(2):123-139
Neighborhood filters are nonlocal image and movie filters which reduce the noise by averaging similar pixels. The first object
of the paper is to present a unified theory of these filters and reliable criteria to compare them to other filter classes.
A CCD noise model will be presented justifying the involvement of neighborhood filters. A classification of neighborhood filters
will be proposed, including classical image and movie denoising methods and discussing further a recently introduced neighborhood
filter, NL-means. In order to compare denoising methods three principles will be discussed. The first principle, “method noise”,
specifies that only noise must be removed from an image. A second principle will be introduced, “noise to noise”, according
to which a denoising method must transform a white noise into a white noise. Contrary to “method noise”, this principle,
which characterizes artifact-free methods, eliminates any subjectivity and can be checked by mathematical arguments and Fourier
analysis. “Noise to noise” will be proven to rule out most denoising methods, with the exception of neighborhood filters.
This is why a third and new comparison principle, the “statistical optimality”, is needed and will be introduced to compare
the performance of all neighborhood filters.
The three principles will be applied to compare ten different image and movie denoising methods. It will be first shown that
only wavelet thresholding methods and NL-means give an acceptable method noise. Second, that neighborhood filters are the
only ones to satisfy the “noise to noise” principle. Third, that among them NL-means is closest to statistical optimality.
Particular attention will be paid to the application of the statistical optimality criterion to movie denoising methods. It will be pointed out that current movie denoising methods are motion-compensated neighborhood filters. This amounts to saying that they are neighborhood filters and that the ideal neighborhood of a pixel is its trajectory. Unfortunately, the aperture problem makes it impossible to estimate ground-truth trajectories. It will be demonstrated that computing trajectories and restricting the neighborhood to them is harmful for denoising purposes and that space-time NL-means preserves more movie details.