Found 20 similar records; search took 31 ms.
1.
José Espírito Santo 《Theory of Computing Systems》2009,45(4):963-994
In the context of intuitionistic implicational logic, we achieve a perfect correspondence (technically an isomorphism) between
sequent calculus and natural deduction, based on perfect correspondences between left-introduction and elimination, cut and
substitution, and cut-elimination and normalisation. This requires an enlarged system of natural deduction that refines von
Plato’s calculus. It is a calculus with modus ponens and primitive substitution; it is also a “coercion calculus”, in the sense of Cervesato and Pfenning. Both sequent calculus
and natural deduction are presented as typing systems for appropriate extensions of the λ-calculus. The whole difference between the two calculi is reduced to the associativity of applicative terms (sequent calculus
= right associative, natural deduction = left associative), and in fact the achieved isomorphism may be described as the mere
inversion of that associativity.
The novel natural deduction system is a “multiary” calculus, because “applicative terms” may exhibit a list of several arguments.
But the combination of “multiarity” and left-associativity seems simply wrong, leading necessarily to non-local reduction
rules (reason: normalisation, like cut-elimination, acts at the head of applicative terms, but natural deduction focuses at the tail of such terms). A solution is to extend natural deduction even further to a calculus that unifies sequent calculus and natural
deduction, based on the unification of cut and substitution. In the unified calculus, a sequent term behaves like in the sequent calculus, whereas the reduction steps of a natural deduction term are interleaved
with explicit steps for bringing heads to focus. A variant of the calculus has the symmetric role of improving sequent calculus
in dealing with tail-active permutative conversions.
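The associativity distinction can be made concrete with a small sketch (a hypothetical term representation, not the paper's extended λ-calculi): a natural-deduction applicative term nests applications to the left, burying the head, while the sequent-calculus reading keeps the head paired with its list of arguments; converting between the two is exactly an inversion of that associativity.

```python
def to_left_assoc(head, args):
    """Natural-deduction reading: ((head a1) a2) ... an."""
    term = head
    for a in args:
        term = ("app", term, a)
    return term

def to_head_and_args(term):
    """Sequent-calculus reading: recover the head and its argument
    list by unwinding the left-nested applications."""
    args = []
    while isinstance(term, tuple) and term[0] == "app":
        _, term, a = term
        args.append(a)
    return term, args[::-1]
```

On this toy representation the round trip is the identity, which is the flavor of the isomorphism described above.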
2.
W. M. P. van der Aalst V. Rubin H. M. W. Verbeek B. F. van Dongen E. Kindler C. W. Günther 《Software and Systems Modeling》2010,9(1):87-111
Process mining includes the automated discovery of processes from event logs. Based on observed events (e.g., activities being
executed or messages being exchanged) a process model is constructed. One of the essential problems in process mining is that
one cannot assume to have seen all possible behavior. At best, one has seen a representative subset. Therefore, classical synthesis techniques are not suitable as they aim at
finding a model that is able to exactly reproduce the log. Existing process mining techniques try to avoid such “overfitting” by generalizing the model to allow for more behavior.
This generalization is often driven by the representation language and very crude assumptions about completeness. As a result,
parts of the model are “overfitting” (allow only for what has actually been observed) while other parts may be “underfitting”
(allow for much more behavior without strong support for it). None of the existing techniques enables the user to control
the balance between “overfitting” and “underfitting”. To address this, we propose a two-step approach. First, using a configurable
approach, a transition system is constructed. Then, using the “theory of regions”, the model is synthesized. The approach
has been implemented in the context of ProM and overcomes many of the limitations of traditional approaches.
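The first step can be sketched as follows (a minimal illustration, not the ProM implementation: the last-k abstraction below is a hypothetical stand-in for the paper's configurable state abstraction).

```python
def build_transition_system(log, k=1):
    """Derive a transition system from an event log. Each state
    abstracts a trace prefix by its last k activities. A small k
    merges more prefixes (more generalization, risking underfitting);
    a large k stays close to the raw log (risking overfitting)."""
    transitions = set()
    for trace in log:
        state = ()
        for act in trace:
            nxt = (state + (act,))[-k:]
            transitions.add((state, act, nxt))
            state = nxt
    return transitions
```

Tuning k is one concrete way a user could control the balance between overfitting and underfitting before the region-based synthesis step.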
3.
Scott D. Stoller 《Distributed Computing》2000,13(2):85-98
This paper proposes a framework for detecting global state predicates in systems of processes with approximately-synchronized real-time clocks. Timestamps from these clocks are used to define two orderings on events: “definitely occurred before” and “possibly occurred before”. These orderings lead naturally to definitions of three distinct detection modalities, i.e., three meanings of “predicate held during a computation”, namely: “possibly held”, “definitely held”, and “definitely held in a specific global state”. This paper defines these modalities and gives efficient algorithms for detecting
them. The algorithms are based on algorithms of Garg and Waldecker, Alagar and Venkatesan, Cooper and Marzullo, and Fromentin
and Raynal. Complexity analysis shows that under reasonable assumptions, these real-time-clock-based detection algorithms
are less expensive than detection algorithms based on Lamport's happened-before ordering. Sample applications are given to
illustrate the benefits of this approach.
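The two orderings can be sketched under a simple assumption (an illustration only; the paper's framework and exact inequalities are richer): each timestamp is accurate to within a known bound, so an event's true occurrence time lies in an interval around its timestamp.

```python
EPS = 0.005  # assumed clock-error bound: true time lies in [t - EPS, t + EPS]

def definitely_before(t1, t2, eps=EPS):
    """e1 definitely occurred before e2: e1's whole uncertainty
    interval precedes e2's, so the ordering holds under any
    admissible clock error."""
    return t1 + eps < t2 - eps

def possibly_before(t1, t2, eps=EPS):
    """e1 possibly occurred before e2: some admissible assignment
    of clock errors puts e1 first."""
    return t1 - eps < t2 + eps
```

Two events can each be possibly-before the other (their intervals overlap), which is what gives rise to the three detection modalities.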
Received: January 1999 / Accepted: November 1999
4.
News production is characterised by complex and dynamic workflows, as it is important to produce and distribute news as quickly as possible and at the best achievable audiovisual quality. In this paper, we present news production as it has been implemented
at the Flemish Radio and Television (Vlaamse radio en televisie, VRT). Driven by the dynamic nature of news content, the VRT
news department is optimized for short cycle times and characterised by a highly parallel production process, i.e. product
engineering (news bulletin composition or “organise” and story-editing or “construct message”), various material procurement (“create media asset”) processes, mastering (“publish”), and the distribution processes largely run in parallel. The formal expression of news operations in terms of canonical
processes has allowed us to disambiguate the overall process, and it will help us develop meaningful and reusable interfaces.
5.
We give an analysis of various classical axioms and characterize a notion of minimal classical logic that enforces Peirce’s
law without enforcing Ex Falso Quodlibet. We show that a “natural” implementation of this logic is Parigot’s classical natural
deduction. We then move on to the computational side and emphasize that Parigot’s λμ corresponds to minimal classical logic. A continuation constant must be added to λμ to get full classical logic. The extended calculus is isomorphic to a syntactical restriction of Felleisen’s theory of control
that offers a more expressive reduction semantics. This isomorphic calculus is in correspondence with a refined version of
Prawitz’s natural deduction.
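The computational reading of Peirce’s law ((A → B) → A) → A is a control operator in the call/cc family. Python has no first-class continuations, but a one-shot escaping continuation built from exceptions conveys the idea (a sketch only; it does not model λμ’s reduction semantics):

```python
def call_cc(f):
    """One-shot call/cc: f receives a continuation k; invoking k(v)
    aborts the rest of f and makes v the overall result. Read through
    Curry-Howard, its type is Peirce's law."""
    class Escape(Exception):
        pass
    def k(value):
        err = Escape()
        err.value = value
        raise err
    try:
        return f(k)
    except Escape as err:
        return err.value
```

If f never invokes its continuation, call_cc behaves like plain application; invoking k discards the surrounding computation, which is the “control” in theories of control.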
This article is an extended version of the conference article “Minimal Classical Logic and Control Operators” (Ariola and
Herbelin, Lecture Notes in Computer Science, vol. 2719, pp. 871–885, 2003). A longer version is available as a technical report (Ariola
et al., Technical Report TR608, Indiana University, 2005).
Z.M. Ariola supported by National Science Foundation grant number CCR-0204389.
A. Sabry supported by National Science Foundation grant number CCR-0204389, by a Visiting Researcher position at Microsoft
Research, Cambridge, U.K., and by a Visiting Professor position at the University of Genova, Italy.
6.
A.A. Razborov 《Computational Complexity》1998,7(4):291-324
We show that polynomial calculus proofs (sometimes also called Groebner proofs) of the pigeonhole principle must have degree at least over any field. This is the first non-trivial lower bound on the degree of polynomial calculus proofs obtained without using
unproved complexity assumptions. We also show that for some modifications of the pigeonhole principle, expressible by polynomials of at most logarithmic degree, our bound can be improved to linear in the number of variables.
Finally, we show that for any Boolean function in n variables, every polynomial calculus proof of the statement “ cannot be computed by any circuit of size t,” must have degree . Loosely speaking, this means that low degree polynomial calculus proofs do not prove .
Received: January 15, 1997.
7.
Shared counters are among the most basic coordination structures in distributed computing. Known implementations of shared
counters are either blocking, non-linearizable, or have a sequential bottleneck. We present the first counter algorithm that
is both linearizable, non-blocking, and can provably achieve high throughput in k-synchronous executions—executions in which process speeds vary by at most a constant factor k. The algorithm is based on a novel variation of the software combining paradigm that we call bounded-wait combining (BWC). It can thus be used to obtain implementations, possessing the same properties, of any object that supports combinable
operations, such as a stack or a queue. Unlike previous combining algorithms where processes may have to wait for each other
indefinitely, in the BWC algorithm, a process only waits for other processes for a bounded period of time and then “takes
destiny in its own hands”. In order to reason rigorously about the parallelism attainable by our algorithm, we define a novel
metric for measuring the throughput of shared objects, which we believe is interesting in its own right. We use this metric
to prove that our algorithm achieves throughput of Ω(N/ log N) in k-synchronous executions, where N is the number of processes that can participate in the algorithm. Our algorithm uses two tools that we believe may prove
useful for obtaining highly parallel non-blocking implementation of additional objects. The first are “synchronous locks”,
locks that are respected by processes only in k-synchronous executions and are disregarded otherwise; the second are “pseudo-transactions”—a weakening of regular transactions
that allows higher parallelism.
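The combining idea itself can be sketched in a few lines (a lock-based toy, NOT the paper's non-blocking BWC algorithm; all names below are illustrative): each caller publishes its increment, and whoever acquires the combiner role applies every pending increment in one pass, so contenders wait only a bounded time to attempt combining.

```python
import threading

class CombiningCounter:
    """Toy combining counter. A burst of increments needs only a few
    combining passes instead of one critical section each. The bounded
    acquire timeout caps how long a caller waits to become combiner;
    an increment that misses a pass stays pending until the next one
    (a simplification of BWC's stronger progress guarantees)."""
    def __init__(self, nthreads):
        self.value = 0
        self.pending = [0] * nthreads
        self.lock = threading.Lock()

    def add(self, tid, amount=1):
        self.pending[tid] += amount
        if self.lock.acquire(timeout=0.01):  # bounded wait for combiner role
            try:
                for i in range(len(self.pending)):  # combine all pending ops
                    self.value += self.pending[i]
                    self.pending[i] = 0
            finally:
                self.lock.release()
```

The real algorithm additionally guarantees linearizability and non-blocking progress, which this sketch does not attempt.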
A preliminary version of this paper appeared in [11].
D. Hendler is supported in part by a grant from the Israel Science Foundation. S. Kutten is supported in part by a grant from
the Israel Science Foundation.
8.
Genre analysis, the investigation of typified communicative actions arising in recurrent situations, has been developed to
study information use and interchange online, in businesses and in other organizations. As such, it holds out promise for
the investigation of similarly typified communicative actions and situations in CSCL contexts. This study explores this promise,
beginning with an overview of ways that genre analysis has been adapted and applied in related areas: in the study of group
behavior in organizations, and of evolving and proliferating communicative forms, actions, and situations on the Internet
(e-mails, blogs, FAQs, etc.). Focusing on the particular genre of the Internet “posting” in CSCL contexts, the paper hypothesizes
that the educational use of this genre bears recognizable similarities with its generic antecedent, the letter. In testing
this hypothesis, the paper describes a pilot case study of a set of CSCL postings (n = 136), which attempts to quantify the occurrence of rhetorical characteristics common to both the epistolary and CSCL “genres.”
This content analysis shows the recurrence in this sample of a range of rhetorical markers (240 in total) that are characteristic
of epistolary dynamics. It concludes by considering the implications of these findings and of a “genre approach” for CSCL
research generally, and for community of inquiry models in particular.
Norm Friesen
9.
Business alignment: using process mining as a tool for Delta analysis and conformance testing
W. M. P. van der Aalst 《Requirements Engineering》2005,10(3):198-211
Increasingly, business processes are being controlled and/or monitored by information systems. As a result, many business
processes leave their “footprints” in transactional information systems, i.e., business events are recorded in so-called event
logs. Process mining aims at improving this by providing techniques and tools for discovering process, control, data, organizational,
and social structures from event logs, i.e., the basic idea of process mining is to diagnose business processes by mining
event logs for knowledge. In this paper we focus on the potential use of process mining for measuring business alignment, i.e., comparing the real behavior of an information system or its users with the intended or expected behavior. We identify
two ways to create and/or maintain the fit between business processes and supporting information systems: Delta analysis and conformance testing. Delta analysis compares the discovered model (i.e., an abstraction derived from the actual process) with some predefined
process model (e.g., the workflow model or reference model used to configure the system). Conformance testing attempts to quantify the “fit” between the event log and some predefined process model. In this paper, we show that Delta analysis and
conformance testing can be used to analyze business alignment as long as the actual events are logged and users have some
control over the process.
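A deliberately naive conformance measure makes the idea tangible (a simplification for illustration; the paper's fitness notions are finer-grained and replay partial traces):

```python
def trace_fitness(log, model_traces):
    """Fraction of logged traces the model can replay exactly.
    `model_traces` is the set of traces the predefined process model
    allows, each as a tuple of activity names."""
    replayable = sum(1 for trace in log if tuple(trace) in model_traces)
    return replayable / len(log)
```

A fitness of 1.0 means the log stays within intended behavior; lower values quantify the misalignment that Delta analysis would then localize.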
10.
The Method of Levels of Abstraction
Luciano Floridi 《Minds and Machines》2008,18(3):303-329
The use of “levels of abstraction” in philosophical analysis (levelism) has recently come under attack. In this paper, I argue that a refined version of epistemological levelism should be retained as a fundamental method, called the method of levels of abstraction. After a brief introduction, in section “Some Definitions and Preliminary Examples” the nature and applicability of the epistemological
method of levels of abstraction is clarified. In section “A Classic Application of the Method of Abstraction”, the philosophical
fruitfulness of the new method is shown by using Kant’s classic discussion of the “antinomies of pure reason” as an example.
In section “The Philosophy of the Method of Abstraction”, the method is further specified and supported by distinguishing
it from three other forms of “levelism”: (i) levels of organisation; (ii) levels of explanation and (iii) conceptual schemes.
In that context, the problems of relativism and antirealism are also briefly addressed. The conclusion discusses some of the
work that lies ahead, two potential limitations of the method and some results that have already been obtained by applying
the method to some long-standing philosophical problems.
11.
Specifications and programs make much use of nondeterministic and/or partial expressions, i.e. expressions which may yield
several or no outcomes for some values of their free variables. Traditional 2-valued logics do not comfortably accommodate
reasoning about undefined expressions, and do not cater at all for nondeterministic expressions. We seek to rectify this with
a 4-valued typed logic E4 which classifies formulae as either “true”, “false”, “neither true nor false”, or “possibly true, possibly false”. The logic
is derived in part from the 2-valued logic E and the 3-valued LPF, and preserves most of the theorems of E. Indeed, the main result is that nondeterminacy can be added to a logic covering partiality at little cost.
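The four truth values can be modelled as sets of possible classical outcomes (an encoding chosen here for illustration; the actual tables of E4 are defined in the paper and may differ, e.g. in whether a false conjunct absorbs an undefined one):

```python
T, F = frozenset({True}), frozenset({False})
NEITHER = frozenset()              # no outcome: the expression is undefined
BOTH = frozenset({True, False})    # nondeterministic: possibly true, possibly false

def neg(v):
    """Negate every possible outcome."""
    return frozenset(not x for x in v)

def conj(v, w):
    """Strict conjunction: evaluate all outcome combinations."""
    return frozenset(x and y for x in v for y in w)
```

Under this encoding, a nondeterministic operand propagates both outcomes, while an undefined operand yields an undefined result, matching the intuition behind the four classifications.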
Received July 1996 / Accepted in revised form April 1998
12.
M. Bedenbecker R. Bandorf G. Bräuer H. Lüthje H. H. Gatzen 《Microsystem Technologies》2008,14(12):1949-1954
An important aspect of the development of electromagnetic microactuators is the search for suitable materials as well as the
development of the respective deposition and patterning processes. Within the Collaborative Research Center 516 “Design and
Fabrication of Active Microsystems”, it is the task of the subproject B1 “fabrication of magnetic thin films for electromagnetic
microactuators” to perform these investigations. The materials of interest can be divided into two groups: hard magnetic materials
and soft magnetic materials. Materials with optimized properties and fabrication processes have been developed within both
groups. An example is Samarium–Cobalt (SmCo), which can either be deposited using magnetron sputtering as Sm2Co17 with a very high energy product or in the SmCo5 phase using gas flow sputtering with very high deposition rates. In the area of soft magnetic materials, investigations on
Nickel-Iron (NiFe), especially NiFe81/19, were followed by the evaluation of NiFe45/55, which features a higher saturation flux density B_s and relative permeability μ_r. Furthermore, current investigations focus on Cobalt-Iron (CoFe) and its further increased saturation flux density B_s and relative permeability μ_r. Current tasks include stabilizing the fabrication processes to achieve good material properties (i.e. electroplating of CoFe) or shortening them (e.g. by using heated substrates during deposition) via process alternatives not used so far.
Another topic is the integration into fabrication processes, i.e. the investigation of process stability and compatibility.
13.
Changwook Kim 《Acta Informatica》2001,37(9):619-632
It is well known that every boundary (linear) eNCE graph language that is connected and degree-bounded by a constant is in
LOGCFL (NLOG). The present paper proves an upper bound of the complexity of boundary (linear) eNCE graph languages whose component
and degree bounds are functions in the size of the graph. As a corollary of this analysis, the known LOGCFL and NLOG results
stated above hold even if “connected” is replaced by “component-bounded by log n.”
Received: 15 January 1997 / 17 January 2001
14.
The power of an object type T can be measured as the maximum number n of processes that can solve consensus using only objects of T and registers. This number, denoted cons(T), is called the consensus power of T. This paper addresses the question of the weakest failure detector to solve consensus among a number k > n of processes that communicate using shared objects of a type T with consensus power n. In other words, we seek for a failure detector that is sufficient and necessary to “boost” the consensus power of a type
T from n to k. It was shown in Neiger (Proceedings of the 14th annual ACM symposium on principles of distributed computing (PODC), pp.
100–109, 1995) that a certain failure detector, denoted Ω_n, is sufficient to boost the power of a type T from n to k, and it was conjectured that Ω_n was also necessary. In this paper, we prove this conjecture for one-shot deterministic types. We first show that, for any one-shot deterministic type T with cons(T) ≤ n, Ω_n is necessary to boost the power of T from n to n + 1. Then we go a step further and show that Ω_n is also the weakest to boost the power of (n + 1)-ported one-shot deterministic types from n to any k > n. Our result generalizes, in a precise sense, the result of the weakest failure detector to solve consensus in asynchronous message-passing systems (Chandra et al. in J ACM 43(4):685–722, 1996). As a corollary, we show that Ω_t is the weakest failure detector to boost the resilience level of a distributed shared memory system, i.e., to solve consensus among n > t processes using (t − 1)-resilient objects of consensus power t.
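For intuition about consensus power, here is the textbook construction (standard material, not taken from this paper) showing that a test-and-set object together with registers solves consensus for two processes, i.e. cons(test-and-set) ≥ 2:

```python
import threading

class TestAndSet:
    """Models an atomic test-and-set instruction; the lock only makes
    the Python simulation atomic."""
    def __init__(self):
        self._flag = False
        self._lock = threading.Lock()
    def test_and_set(self):
        with self._lock:
            old = self._flag
            self._flag = True
            return old

class TwoProcessConsensus:
    """The process that wins the test-and-set decides its own proposal;
    the loser adopts the winner's proposal from shared registers."""
    def __init__(self):
        self.tas = TestAndSet()
        self.proposals = [None, None]
    def decide(self, pid, value):
        self.proposals[pid] = value        # announce proposal
        if not self.tas.test_and_set():    # first caller wins the race
            return value
        return self.proposals[1 - pid]     # loser adopts the winner's value
```

The matching impossibility result (test-and-set cannot solve 3-process consensus) is what makes a failure detector like Ω_n necessary to boost power beyond n.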
This paper is a revised and extended version of a paper that appeared in the Proceedings of the 17th International Symposium
on Distributed Computing (DISC 2003), entitled “On failure detectors and type boosters.”
15.
Noam Livne 《Computational Complexity》2010,19(4):477-499
The theory of average case complexity studies the expected complexity of computational tasks under various specific distributions on the instances, rather than
their worst case complexity. Thus, this theory deals with distributional problems, defined as pairs each consisting of a decision problem and a probability distribution over the instances. While for applications
utilizing hardness, such as cryptography, one seeks an efficient algorithm that outputs random instances of some problem that are hard
for any algorithm with high probability, the resulting hard distributions in these cases are typically highly artificial,
and do not establish the hardness of the problem under “interesting” or “natural” distributions. This paper studies the possibility
of proving generic hardness results (i.e., for a wide class of NP-complete problems), under “natural” distributions. Since it is not clear how to define a class of “natural” distributions for general NP-complete problems, one possibility is to impose some strong computational constraint on the distributions, with the intention of this constraint being to force the distributions to “look natural”. Levin, in his seminal paper on average case complexity from 1984, defined such a class of distributions, which he called P-computable distributions. He then showed that the NP-complete Tiling problem, under some P-computable distribution, is hard for the complexity class of distributional NP problems (i.e. NP with P-computable distributions). However, since then very few NP-complete problems (coupled with P-computable distributions), and in particular “natural” problems, were shown to be hard in this sense. In this paper we show that all natural NP-complete problems can be coupled with P-computable distributions such that the resulting distributional problem is hard for distributional NP.
16.
We consider concurrent probabilistic systems, based on probabilistic automata of Segala & Lynch [55], which allow non-deterministic
choice between probability distributions. These systems can be decomposed into a collection of “computation trees” which arise
by resolving the non-deterministic, but not probabilistic, choices. The presence of non-determinism means that certain liveness
properties cannot be established unless fairness is assumed. We introduce a probabilistic branching time logic PBTL, based on the logic TPCTL of Hansson [30] and the logic PCTL of [55], resp. pCTL [14]. The formulas of the logic express properties such as “every request is eventually granted with probability at least
p”. We give three interpretations for PBTL on concurrent probabilistic processes: the first is standard, while in the remaining two interpretations the branching time
quantifiers are taken to range over a certain kind of fair computation trees. We then present a model checking algorithm for
verifying whether a concurrent probabilistic process satisfies a PBTL formula assuming fairness constraints. We also propose adaptations of existing model checking algorithms for pCTL
[4, 14] to obtain procedures for PBTL
under fairness constraints. The techniques developed in this paper have applications in automatic verification of randomized
distributed systems.
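The quantitative core of such model checking can be sketched for the purely probabilistic case (an illustration only; the paper's PBTL algorithm additionally handles nondeterminism and fairness constraints):

```python
def reach_probability(trans, goal, n_iter=200):
    """Value iteration for the probability of eventually reaching
    `goal` in a finite Markov chain. trans[s] is a list of
    (probability, successor) pairs for state s."""
    p = {s: (1.0 if s == goal else 0.0) for s in trans}
    for _ in range(n_iter):
        for s in trans:
            if s != goal:
                p[s] = sum(pr * p[t] for pr, t in trans[s])
    return p
```

Checking a property like “every request is eventually granted with probability at least p” then reduces to comparing the computed reachability probability against the threshold p.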
Received: June 1997 / Accepted: May 1998
17.
Ronen Shaltiel 《Computational Complexity》2011,20(1):87-143
A simple averaging argument shows that given a randomized algorithm A and a function f such that for every input x, Pr[A(x) = f(x)] ≥ 1 − ρ (where the probability is over the coin tosses of A), there exists a non-uniform deterministic algorithm B “of roughly the same complexity” such that Pr[B(x) = f(x)] ≥ 1 − ρ (where the probability is over a uniformly chosen input x). This implication is often referred to as “the easy direction of Yao’s lemma” and can be thought of as “weak derandomization”
in the sense that B is deterministic but only succeeds on most inputs. The implication follows as there exists a fixed value r′ for the random coins of A such that “hardwiring r′ into A” produces a deterministic algorithm B. However, this argument does not give a way to explicitly construct
B.
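On a small finite domain the averaging argument can be verified exhaustively (an illustration only; the lemma asserts that a good coin string r′ exists, and this brute-force search is precisely what one cannot afford in general):

```python
from itertools import product

def best_coins(A, f, inputs, coin_len):
    """If, for every input x, A(x, r) = f(x) with probability >= 1 - rho
    over the coins r, then some fixed coin string r' agrees with f on at
    least a 1 - rho fraction of inputs. Find such an r' by exhaustion."""
    best, best_hits = None, -1
    for r in product([0, 1], repeat=coin_len):
        hits = sum(1 for x in inputs if A(x, r) == f(x))
        if hits > best_hits:
            best, best_hits = r, hits
    return best, best_hits / len(inputs)
```

Hardwiring the returned r′ into A yields the non-uniform deterministic algorithm B that succeeds on most inputs.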
18.
Lev Goldfarb 《Natural computing》2010,9(2):365-379
My main objective is to point out a fundamental weakness in the conventional conception of computation and suggest a promising way out. This weakness is directly related
to a gross underestimation of the role of object representation in a computational model, hence confining such models to an
unrealistic (input) environment, which, in turn, leads to “unnatural” computational models. This lack of appreciation of the
role of structural object representation has been inherited from logic and partly from mathematics, where, in the latter, the centuries-old tradition is to represent
objects as unstructured “points”. I also discuss why the appropriate fundamental reorientation in the conception of computational
models will bring the resulting study of computation closer to the “natural” computational constraints. An example of the pertinent,
class-oriented, representational formalism developed by our group over many years—Evolving Transformation System (ETS)—is
briefly outlined here, and several related general lines of research are suggested.
19.
Mirela Damian Robin Flatland Joseph O’Rourke Suneeta Ramaswami 《Theory of Computing Systems》2010,47(3):674-695
We show that the space of polygonizations of a fixed planar point set S of n points is connected by O(n²) “moves” between simple polygons. Each move is composed of a sequence of atomic moves called “stretches” and “twangs,” which walk between weakly simple “polygonal wraps” of S. These moves show promise to serve as a basis for generating random polygons.