20 similar documents found; search took 31 ms.
1.
W. M. P. van der Aalst V. Rubin H. M. W. Verbeek B. F. van Dongen E. Kindler C. W. Günther 《Software and Systems Modeling》2010,9(1):87-111
Process mining includes the automated discovery of processes from event logs. Based on observed events (e.g., activities being
executed or messages being exchanged) a process model is constructed. One of the essential problems in process mining is that
one cannot assume to have seen all possible behavior. At best, one has seen a representative subset. Therefore, classical synthesis techniques are not suitable as they aim at
finding a model that is able to exactly reproduce the log. Existing process mining techniques try to avoid such “overfitting” by generalizing the model to allow for more behavior.
This generalization is often driven by the representation language and very crude assumptions about completeness. As a result,
parts of the model are “overfitting” (allow only for what has actually been observed) while other parts may be “underfitting”
(allow for much more behavior without strong support for it). None of the existing techniques enables the user to control
the balance between “overfitting” and “underfitting”. To address this, we propose a two-step approach. First, using a configurable
approach, a transition system is constructed. Then, using the “theory of regions”, the model is synthesized. The approach
has been implemented in the context of ProM and overcomes many of the limitations of traditional approaches.
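The first step of the two-step approach, deriving a transition system from an event log, can be sketched as follows (a minimal illustration; the prefix abstraction used here, the set of activities seen so far, is only one configurable choice, and all names are ours rather than ProM's):

```python
from collections import defaultdict

def build_transition_system(log, abstraction=frozenset):
    """Construct a transition system from an event log.

    Each trace is replayed; the state reached after a prefix is a
    configurable abstraction of that prefix (here: the set of
    activities seen so far).
    """
    transitions = defaultdict(set)  # state -> {(activity, next_state)}
    for trace in log:
        prefix = []
        for activity in trace:
            state = abstraction(prefix)
            prefix.append(activity)
            transitions[state].add((activity, abstraction(prefix)))
    return dict(transitions)

log = [["register", "check", "pay"], ["register", "pay", "check"]]
ts = build_transition_system(log)
# From the empty state, only "register" has been observed.
```

A coarser abstraction (e.g., only the last activity) yields a smaller, more generalizing transition system, which is exactly the overfitting/underfitting knob the abstract describes.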
2.
Torben Amtoft 《Higher-Order and Symbolic Computation》2008,21(4):411-442
The Ambient Calculus was developed by Cardelli and Gordon as a formal framework to study issues of mobility and migrant code.
Numerous analyses have been developed for numerous variants of that calculus. We take up the challenge of developing, in a
type-based setting, a relatively precise “topology” analysis for the original version of the calculus. To compensate for the lack of “co-capabilities” (an otherwise increasingly popular extension), the
analysis is flow-sensitive, with the actions of processes being summarized by “behaviors”.
A subject reduction property guarantees that for a well-typed process, the location of any ambient is included in what is
predicted by its type; additionally it ensures that communicating subprocesses agree on their “topic of conversation”. Based
on techniques borrowed from finite automata theory, type checking of type-annotated processes is decidable (though potentially
exponential).
3.
Tomoko Itao Satoshi Tanaka Tatsuya Suda Atsushi Yamamoto 《International Journal on Digital Libraries》2006,6(3):270-279
We describe a mechanism called SpaceGlue for adaptively locating services based on the preferences and locations of users
in a distributed and dynamic network environment. In SpaceGlue, services are bound to physical locations, and a mobile user
accesses local services depending on the current space he/she is visiting. SpaceGlue dynamically identifies the relationships
between different spaces and links or “glues” spaces together depending on how previous users moved among them and used those
services. Once spaces have been glued, users receive a recommendation of remote services (i.e., services provided in a remote
space) reflecting the preferences of the crowd of users visiting the area. The strengths of bonds are implicitly evaluated
by users and adjusted by the system on the basis of their evaluation. SpaceGlue is an alternative to existing schemes such
as data mining and recommendation systems and it is suitable for distributed and dynamic environments. The bonding algorithm
for SpaceGlue incrementally computes the relationships or “bonds” between different spaces in a distributed way. We implemented
SpaceGlue using a distributed network application platform Ja-Net and evaluated it by simulation to show that it adaptively
locates services reflecting trends in user preferences. By using “Mutual Information (MI)” and “F-measure” as measures to indicate the level of such trends and the accuracy of service recommendation, the simulation results
showed that (1) in SpaceGlue, the F-measure increases with the level of MI (i.e., the more significant the trends, the greater the F-measure values), (2) SpaceGlue achieves better precision and F-measure than the “flooding case” (i.e., all service information is broadcast to everyone) and the “no glue case” by narrowing down the appropriate partners to which recommendations are sent on the basis of bonds, and (3) SpaceGlue achieves a better F-measure with large numbers of spaces and users than the other cases (i.e., “flooding” and “no glue”).
Tomoko Itao is an alumna of NTT Network Innovation Laboratories.
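As a rough illustration of the accuracy metric used in this evaluation, precision, recall, and F-measure of a recommendation can be computed as follows (a minimal sketch; the paper's exact counting of relevant services may differ):

```python
def f_measure(recommended, relevant):
    """Precision, recall, and F-measure of a service recommendation.

    F is the harmonic mean of precision and recall, so it rewards
    recommendations that are both accurate and complete.
    """
    recommended, relevant = set(recommended), set(relevant)
    hits = len(recommended & relevant)
    precision = hits / len(recommended) if recommended else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    if precision + recall == 0:
        return precision, recall, 0.0
    f = 2 * precision * recall / (precision + recall)
    return precision, recall, f

p, r, f = f_measure(["cafe", "cinema"], ["cafe", "museum"])
# precision = recall = 0.5, so F = 0.5
```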
4.
Shared counters are among the most basic coordination structures in distributed computing. Known implementations of shared
counters are either blocking, non-linearizable, or have a sequential bottleneck. We present the first counter algorithm that is linearizable and non-blocking and that provably achieves high throughput in k-synchronous executions—executions in which process speeds vary by at most a constant factor k. The algorithm is based on a novel variation of the software combining paradigm that we call bounded-wait combining (BWC). It can thus be used to obtain implementations, possessing the same properties, of any object that supports combinable
operations, such as a stack or a queue. Unlike previous combining algorithms where processes may have to wait for each other
indefinitely, in the BWC algorithm, a process only waits for other processes for a bounded period of time and then “takes
destiny in its own hands”. In order to reason rigorously about the parallelism attainable by our algorithm, we define a novel
metric for measuring the throughput of shared objects, which we believe is interesting in its own right. We use this metric
to prove that our algorithm achieves throughput of Ω(N/log N) in k-synchronous executions, where N is the number of processes that can participate in the algorithm. Our algorithm uses two tools that we believe may prove useful for obtaining highly parallel non-blocking implementations of additional objects. The first are “synchronous locks”, locks that are respected by processes only in k-synchronous executions and are disregarded otherwise; the second are “pseudo-transactions”—a weakening of regular transactions
that allows higher parallelism.
A preliminary version of this paper appeared in [11].
D. Hendler is supported in part by a grant from the Israel Science Foundation. S. Kutten is supported in part by a grant from
the Israel Science Foundation.
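The combining idea at the core of BWC, merging many pending increments into a single update on the shared counter, can be illustrated with a deliberately sequential toy (this sketches only the batching arithmetic; the real algorithm is concurrent, non-blocking, and bounds how long any process waits for the combiner):

```python
class CombiningCounter:
    """Sequential sketch of software combining.

    Pending increment requests are merged and applied to the shared
    counter with a single update; each requester then learns its own
    (distinct) pre-increment value from the combined result.
    """
    def __init__(self):
        self.value = 0

    def combine(self, requests):
        # One shared-memory update serves the whole batch.
        base = self.value
        self.value += len(requests)
        # Each process receives a distinct fetch-and-increment result.
        return [base + i for i in range(len(requests))]

c = CombiningCounter()
results = c.combine(["p1", "p2", "p3"])
# results == [0, 1, 2]; c.value == 3
```

The point of combining is that contention on the counter is paid once per batch rather than once per process, which is where the high throughput comes from.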
5.
Israel A. Wagner Michael Lindenbaum Alfred M. Bruckstein 《Annals of Mathematics and Artificial Intelligence》1998,24(1-4):211-223
Efficient graph search is a central issue in many aspects of AI. In most existing work there is a distinction between the
active “searcher”, which both executes the algorithm and holds the memory, and the passive “searched graph”, over which the
searcher has no control at all. Large dynamic networks like the Internet, where the nodes are powerful computers and the links
have narrow bandwidth and are heavily-loaded, call for a different paradigm, in which most of the burden of computing and
memorizing is moved from the searching agent to the nodes of the network. In this paper we suggest a method for searching
an undirected, connected graph using the Vertex-Ant-Walk method, where an a(ge)nt walks along the edges of a graph G, occasionally leaving “pheromone” traces at nodes, and using those traces to guide its exploration. We show that the ant
can cover the graph within time O(nd), where n is the number of vertices and d the diameter of G. The use of traces achieves a trade-off between random and self-avoiding walks, as it dictates a lower priority for already-visited
neighbors. Further properties of the suggested method are: (a) modularity: a group of searching agents, each applying the
same protocol, can cooperate on a mission of covering a graph with minimal explicit communication between them; (b) possible
convergence to a limit cycle: a Hamiltonian path in G (if one exists) is a possible limit cycle of the process.
This revised version was published online in June 2006 with corrections to the Cover Date.
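The trace-guided walk can be sketched as follows (an assumed reading of the method: the ant stamps each visited node with the current time and always moves to the least recently marked neighbor; tie-breaking and other details here are ours, not the paper's):

```python
def vertex_ant_walk(graph, start, steps):
    """Pheromone-guided walk in the spirit of the Vertex-Ant-Walk method.

    Marks (timestamps) left at nodes give already-visited neighbors
    lower priority, trading off between random and self-avoiding walks.
    """
    mark = {v: 0 for v in graph}   # 0 means never visited
    visited = set()
    node = start
    for t in range(1, steps + 1):
        visited.add(node)
        mark[node] = t
        # Move to the least recently marked neighbor.
        node = min(graph[node], key=lambda u: mark[u])
    return visited

# A 4-cycle is covered quickly because fresh marks repel the ant.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
covered = vertex_ant_walk(cycle, 0, 4)
# covered == {0, 1, 2, 3}
```

Note how the memory lives in the nodes (the marks), not in the agent, which is exactly the shift of burden the abstract argues for.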
6.
In this paper we introduce a general framework for casting fully dynamic transitive closure into the problem of reevaluating
polynomials over matrices. With this technique, we improve the best known bounds for fully dynamic transitive closure. In
particular, we devise a deterministic algorithm for general directed graphs that achieves O(n²) amortized time for updates, while preserving unit worst-case cost for queries. In the case of deletions only, our algorithm performs updates faster, in O(n) amortized time. We observe that fully dynamic transitive closure algorithms with O(1) query time maintain explicitly the transitive closure of the input graph, in order to answer each query with exactly one lookup (on its adjacency matrix). Since an update may change as many as Ω(n²) entries of this matrix, no better bounds are possible for this class of algorithms.
This work has been partially supported by the Sixth Framework Programme of the EU under contract number 507613 (Network of
Excellence “EuroNGI: Designing and Engineering of the Next Generation Internet”), and number 001907 (“DELIS: Dynamically Evolving,
Large Scale Information Systems”), and by the Italian Ministry of University and Research (Project “ALGO-NEXT: Algorithms
for the Next Generation Internet and Web: Methodologies, Design and Experiments”). Portions of this paper have been presented
at the 41st Annual Symp. on Foundations of Computer Science, 2000.
7.
In highly dynamic and heterogeneous wireless mesh networks (WMN), link quality will seriously affect network performance.
Two challenges hinder us from achieving a highly efficient WMN. One is the channel dynamics. As in real network deployment,
channel qualities change over time, which seriously affects network bandwidth and reliability. Existing works are limited by the assumption that link quality values are fixed, and optimal scheduling algorithms work on those fixed values, so they inevitably suffer from link quality dynamics. Another challenge is channel diversity. In single-channel
wireless networks, channel assignment and scheduling are NP-hard, and in multichannel wireless networks it could be even harder to achieve high throughput and efficient scheduling. In this
study, we first characterize the stochastic behavior of wireless communications as a Markov process based on statistical methodology. Second, exploiting the stochastic behavior of wireless channels, we propose a stochastic programming model for maximizing network utilization. Considering the NP-hardness, we propose a heuristic solution. The key idea of the proposed algorithm is a two-stage matching process named “Rematch.” Our solution to stochastic network scheduling is a cross-layer approach, and we have proved that it is 2-approximate to the optimal result. Moreover, extensive simulations show the efficiency of “Rematch” in highly dynamic and distributed wireless mesh networks.
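The kind of stochastic link behavior being modeled can be sketched with a two-state Markov chain (the states and transition probabilities here are illustrative; the paper fits its Markov process to measured statistics):

```python
import random

def simulate_channel(p_gb, p_bg, steps, seed=1):
    """Two-state ("good"/"bad") Markov sketch of a dynamic wireless link.

    The channel stays in a quality state for a while and flips with
    transition probabilities p_gb (good->bad) and p_bg (bad->good).
    Returns the observed fraction of good slots; the stationary value
    is p_bg / (p_gb + p_bg).
    """
    rng = random.Random(seed)
    state, good = "good", 0
    for _ in range(steps):
        good += state == "good"
        if state == "good" and rng.random() < p_gb:
            state = "bad"
        elif state == "bad" and rng.random() < p_bg:
            state = "good"
    return good / steps

frac = simulate_channel(0.1, 0.3, 50_000)
# frac is close to the stationary value 0.3 / 0.4 = 0.75
```

A scheduler that assumes a fixed link quality would plan for a single number, whereas the chain shows quality arriving in bursts, which is what a stochastic programming model can exploit.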
8.
Jeansoulin R. Wurbel E. 《Soft Computing - A Fusion of Foundations, Methodologies and Applications》2003,7(6):386-393
Environmental data are in general imprecise and uncertain, but they are located in space and therefore obey spatial constraints. “Spatial analysis” is a (natural) reasoning process through which geographers take advantage of these constraints to reduce this uncertainty and to improve their beliefs. Automating this process is a genuinely hard problem. We propose here the design of a revision operator able to perform a spatial analysis in the context of one particular “application profile”: it identifies objects bearing a same variable bound through local constraints. The formal background on which this operator is built is a decision algorithm from Reiter [9]; the heuristics that make this algorithm tractable on a full-scale application are special patterns for clauses and the “spatial confinement” of conflicts. The operator is “anytime”: because it uses “samples”, works on small (tractable) blocks, and reaggregates the partial revision results on larger blocks, we call it a “hierarchical block revision” operator. Finally, we illustrate a particular application: flood propagation. This is, of course, one possible “soft computing” approach for geographic applications.
On leave at: Centre de Recherche en Géomatique Pavillon Casault, Université Laval Québec, Qc, Canada – G1K 7P4
Université de Toulon et du Var, Avenue de l'Université, BP 132, 83957 La Garde Cedex, France
This work is currently supported by the European Community under the IST-1999-14189 project.
9.
A highly practical parallel signcryption scheme named PLSC, built from trapdoor permutations (TDPs for short), was designed to process long messages directly. The new scheme follows the idea “scramble all, and encrypt small”: it applies a scrambling operation to the message m along with the users’ identities, and then passes, in parallel, small parts of the scrambling result through the corresponding TDPs. This design lets the scheme flexibly process long messages of arbitrary length while avoiding repeated invocations of TDP operations (as in the CBC mode) and the verbose black-box composition of symmetric encryption with signcryption, resulting in noticeable practical savings in both message bandwidth and efficiency. Concretely, the signcryption scheme requires exactly one computation of the “receiver’s TDP” (for “encryption”) and one inverse computation of the “sender’s TDP” (for “authentication”), which is of great practical significance when processing long messages directly, since the major bottleneck for many public-key encryption schemes is the excessive computational overhead of TDP operations. By cutting out the verbosely repeated padding, the newly proposed scheme is more efficient than a black-box hybrid scheme. Most importantly, the proposed scheme has been proven to be tightly semantically secure under adaptive chosen-ciphertext attacks (IND-CCA2) and to provide integrity of ciphertext (INT-CTXT) as well as non-repudiation in the random oracle model. All of these security guarantees hold in the full multi-user, insider-security setting. Moreover, though the scheme is designed for long messages, it may also be appropriate for settings where it is impractical to process large blocks of messages (e.g., extremely low-memory environments such as smart cards).
10.
Miomir Vukobratović Veljko Potkonjak Kalman Babković Branislav Borovac 《Multibody System Dynamics》2007,17(1):71-96
In the last decade we have witnessed rapid growth in humanoid robotics, which has already become an autonomous research field. Humanoid robots (or simply humanoids) are expected in all situations of humans’ everyday life, “living” and cooperating with us. They will work in services, in
homes, and hospitals, and they are even expected to get involved in sports. Hence, they will have to be capable of doing diverse
kinds of tasks. This forces the researchers to develop an appropriate mathematical model to support simulation, design, and
control of these systems. Another important fact is that today’s, and especially tomorrow’s, humanoid robots will be more
and more humanlike in their shape and behavior. A dynamic model developed for an advanced humanoid robot may become a very
useful tool for the dynamic analysis of human motion in different tasks (walking, running and jumping, manipulation, various
sports, etc.). So, we derive a general model and talk about a human-and-humanoid simulation system. The basic idea is to start
from a human/humanoid considered as a free spatial system (“flier”). Particular problems (walking, jumping, etc.) are then
considered as different contact tasks – interaction between the flier and various objects (being either single bodies or separate
dynamic systems).
11.
M. Bedenbecker R. Bandorf G. Bräuer H. Lüthje H. H. Gatzen 《Microsystem Technologies》2008,14(12):1949-1954
An important aspect of the development of electromagnetic microactuators is the search for suitable materials as well as the
development of the respective deposition and patterning processes. Within the Collaborative Research Center 516 “Design and
Fabrication of Active Microsystems”, it is the task of the subproject B1 “fabrication of magnetic thin films for electromagnetic
microactuators” to perform these investigations. The materials of interest can be divided into two groups: hard magnetic materials
and soft magnetic materials. Materials with optimized properties and fabrication processes have been developed within both
groups. An example is Samarium–Cobalt (SmCo), which can be deposited either as Sm2Co17 using magnetron sputtering, with a very high energy product, or in the SmCo5 phase using gas flow sputtering, with very high deposition rates. In the area of soft magnetic materials, investigations on Nickel–Iron (NiFe), especially NiFe81/19, were followed by the evaluation of NiFe45/55, which features a higher saturation flux density B_s and relative permeability μ_r. Furthermore, current investigations focus on Cobalt–Iron (CoFe) and its further increased saturation flux density B_s and relative permeability μ_r. Current tasks include stabilizing the fabrication processes to achieve good material properties (e.g. the electroplating of CoFe) and shortening them (e.g. by using heated substrates during deposition) through process alternatives not used so far. Another topic is the integration into fabrication processes, i.e. the investigation of process stability and compatibility.
12.
Electron beam welding is a well-established bonding method in macro-scale applications. The properties of this method make it very promising for microsystems: the very small achievable beam diameter predestines this technology for realizing the smallest spot or line welds of metal and also of non-metal materials, with locally concentrated energy input into the component. The wearless and inertia-free beam allows highly dynamic joining processes. Where even the smallest particles of dust can influence the results, processing inside a vacuum chamber protects the components to a high degree from contamination and oxide formation during the welding cycle. In this study, the process is downscaled by modifying a scanning electron microscope (SEM), which operates according to the same principle as an electron beam welding machine. This creates the possibility of observing and welding the components with the same device. The necessary modifications are investigated, as well as the possibilities of expanding the device, which makes it a promising tool for the flexible production of hybrid microsystems. The presented investigations and results were acquired within the project SFB 440 “Assembly of Hybrid Microsystems”, financed by the German Research Foundation (DFG).
13.
We introduce nondeterministic graph searching with a controlled amount of nondeterminism and show how this new tool can be
used in algorithm design and combinatorial analysis applying to both pathwidth and treewidth. We prove equivalence between
this game-theoretic approach and graph decompositions called q-branched tree decompositions, which can be interpreted as a parameterized version of tree decompositions. Path decomposition and (standard) tree decomposition are two extreme cases of q-branched tree decompositions. The equivalence between nondeterministic graph searching and q-branched tree decomposition enables us to design an exact (exponential time) algorithm computing the q-branched treewidth for all q ≥ 0, which is thus valid for both treewidth and pathwidth. This algorithm performs as fast as the best known exact algorithm
for pathwidth. Conversely, this equivalence also enables us to design a lower bound on the amount of nondeterminism required
to search a graph with the minimum number of searchers.
Additional support of F.V. Fomin by the Research Council of Norway.
Additional support of P. Fraigniaud from the INRIA Project “Grand Large” and from the Project PairAPair of the ACI “Masse de Données”.
Additional support of N. Nisse from the Project Fragile of the ACI “Sécurité & Informatique”.
14.
Antti Oulasvirta Tye Rattenbury Lingyi Ma Eeva Raita 《Personal and Ubiquitous Computing》2012,16(1):105-114
Examining several sources of data on smartphone use, this paper presents evidence for the popular conjecture that mobile devices
are “habit-forming.” The form of habits we identified is called a checking habit: brief, repetitive inspection of dynamic content quickly accessible on the device. We describe findings on kinds and frequencies
of checking behaviors in three studies. We found that checking habits occasionally spur users to do other things with the
device and may increase usage overall. Data from a controlled field experiment show that checking behaviors emerge and are
reinforced by informational “rewards” that are very quickly accessible. Qualitative data suggest that although repetitive
habitual use is frequent, it is experienced more as an annoyance than an addiction. We conclude that supporting habit-formation
is an opportunity for making smartphones more “personal” and “pervasive.”
15.
Lorenzo Magnani 《Minds and Machines》2009,19(4):477-493
What I call semiotic brains are brains that make up a series of signs and that are engaged in making or manifesting or reacting to a series of signs:
through this semiotic activity they are at the same time engaged in “being minds” and so in thinking intelligently. An important
effect of this semiotic activity of brains is a continuous process of disembodiment of mind that exhibits a new cognitive perspective on the mechanisms underlying the semiotic emergence of meaning processes. Indeed, at the roots of sophisticated
thinking abilities there is a process of disembodiment of mind that presents a new cognitive perspective on the role of external
models, representations, and various semiotic materials. Taking advantage of Turing’s comparison between “unorganized” brains
and “logical” and “practical” machines” this paper illustrates the centrality to cognition of the disembodiment of mind from
the point of view of the interplay between internal and external representations, both mimetic and creative. The last part
of the paper describes the concept of mimetic mind I have introduced to shed new cognitive and philosophical light on the role of computational modeling and on the decline
of the so-called Cartesian computationalism.
16.
17.
18.
Mirela Damian Robin Flatland Joseph O’Rourke Suneeta Ramaswami 《Theory of Computing Systems》2010,47(3):674-695
We show that the space of polygonizations of a fixed planar point set S of n points is connected by O(n²) “moves” between simple polygons. Each move is composed of a sequence of atomic moves called “stretches” and “twangs,” which walk between weakly simple “polygonal wraps” of S. These moves show promise to serve as a basis for generating random polygons.
19.
X-machines were proposed by Holcombe as a possible specification language and since then a number of further investigations
have demonstrated that the model is intuitive and easy to use. In particular, stream X-machines (SXM), a particular class of X-machines, have been found to be extremely useful in practice. Furthermore, a method of testing
systems specified as SXMs exists and is proved to detect all faults of the implementation provided that the system meets certain
“design for test conditions”. Recently, a system of communicating SXMs was introduced as a means of modelling parallel processing. This paper proves that each communicating machine component can
be transformed in a straightforward manner so that the entire system will behave like a single stream X-machine - the equivalent
SXM of the system. The paper goes on to investigate the applicability of the SXM testing method to a system of communicating
SXMs and identifies a class of communicating SXMs for which the equivalent SXM of the system meets the “design for test conditions”.
Received November 1999 / Accepted in revised form June 2001
20.
Problems in fault-tolerant distributed computing have been studied in a variety of models. These models are structured around
two central ideas: (1) degree of synchrony and failure model are two independent parameters that determine a particular type of system, (2) the notion of faulty component is helpful and even necessary for the analysis of distributed computations when faults occur. In this work, we question these
two basic principles of fault-tolerant distributed computing, and show that it is both possible and worthwhile to renounce them
in the context of benign faults: we present a computational model based only on the notion of transmission faults. In this model, computations evolve in rounds, and messages missed in a round are lost. Only information transmission is
represented: for each round r and each process p, our model provides the set of processes that p “hears of” at round r (heard-of set), namely the processes from which p receives some message at round r. The features of a specific system are thus captured as a whole, just by a predicate over the collection of heard-of sets.
We show that our model handles benign failures, be they static or dynamic, permanent or transient, in a unified framework.
We demonstrate how this approach leads to shorter and simpler proofs of important results (non-solvability, lower bounds).
In particular, we prove that the Consensus problem cannot be generally solved without an implicit and permanent consensus
on heard-of sets. We also examine Consensus algorithms in our model. In light of this specific agreement problem, we show
how our approach allows us to devise new interesting solutions.
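A minimal sketch of round-based computation in the heard-of model follows (function names are ours; the paper specifies a system purely by a predicate over the collection of heard-of sets):

```python
def run_rounds(states, heard_of, update, rounds):
    """Round-based execution in the spirit of the heard-of model.

    In each round every process sends its state, but process p only
    receives from the processes in heard_of(p, r); messages missed in
    a round are lost. `update` folds the received messages into a new
    state. No notion of a faulty component appears anywhere.
    """
    procs = list(states)
    for r in range(rounds):
        received = {p: {q: states[q] for q in heard_of(p, r)} for p in procs}
        states = {p: update(states[p], received[p]) for p in procs}
    return states

# Min-propagation with complete heard-of sets agrees in one round.
init = {"p": 3, "q": 1, "r": 2}
everyone = lambda p, r: {"p", "q", "r"}
take_min = lambda s, msgs: min(msgs.values())
final = run_rounds(init, everyone, take_min, 1)
# final == {"p": 1, "q": 1, "r": 1}
```

Restricting what `heard_of` may return (the predicate) is how static or dynamic, permanent or transient benign failures are all captured in this one framework.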
A. Schiper’s research funded by the Swiss National Science Foundation under grant number 200021-111701 and Hasler Foundation
under grant number 2070.