Knowledge and Information Systems - The rapid increase of available data in different complex contexts calls for automated techniques to manage and process content. Semantic Web technologies represent the...
As a preliminary overview, this work first provides a broad tutorial on the fluidization of discrete event dynamic models,
an efficient technique for dealing with the classical state explosion problem. Although termed continuous or fluid, the relaxed models obtained are frequently hybrid in a technical sense. Thus,
there is plenty of room for using discrete, hybrid and continuous model techniques for logical verification, performance evaluation
and control studies. Moreover, the possibilities for transferring concepts and techniques from one modeling paradigm to others
are significant, leaving considerable scope for synergy. As a central modeling paradigm for parallel and synchronized discrete
event systems, Petri nets (PNs) are then considered in much more detail. In this sense, this paper is somewhat complementary
to David and Alla (2010). Our presentation of fluid views or approximations of PNs sometimes has the flavor of a survey, but it also introduces some new
ideas or techniques. Among the aspects that distinguish the adopted approach are: the focus on the relationships between discrete and continuous PN models, both for untimed, i.e., fully non-deterministic abstractions, and timed versions; the use of structure theory of (discrete) PNs, algebraic and graph based concepts and results; and the bridge to Automatic Control Theory. After discussing
observability and controllability issues (the most technical part of this work), the paper concludes with some remarks and
possible directions for future research.
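To make the fluid relaxation concrete, one common timing interpretation (infinite-server semantics; the notation here is illustrative and not necessarily the paper's exact formulation) replaces the discrete token game by the ordinary differential equation

```latex
\[
\dot{\mathbf{m}} \;=\; \mathbf{C}\,\operatorname{diag}(\boldsymbol{\lambda})\,\mathbf{f}(\mathbf{m}),
\qquad
f_j(\mathbf{m}) \;=\; \min_{p \,\in\, {}^{\bullet}t_j} \frac{m_p}{\mathrm{Pre}[p,\,t_j]},
\]
```

where $\mathbf{m}$ is the continuous marking, $\mathbf{C}$ the token-flow (incidence) matrix, $\boldsymbol{\lambda}$ the vector of transition firing rates, and $f_j(\mathbf{m})$ the enabling degree of transition $t_j$. The state space thus shrinks from an exponentially large reachability graph to a polyhedral continuous set.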
The fractional Fourier transform (FrFT) is revisited in the framework of strongly continuous periodic semigroups to restate known results and to explore new properties of the FrFT. We then show how the FrFT can be used to reconstruct Magnetic Resonance (MR) images acquired in the presence of quadratic field inhomogeneity. Particularly, we prove that the order of the FrFT is a measure of the distortion in the reconstructed signal. Moreover, we give a dynamic interpretation to the order as time evolution of a function. We also introduce the notion of ρ-α space as an extension of the Fourier or k-space in MR, and we use it to study the distortions introduced in two common MR acquisition strategies. We formulate the reconstruction problem in the context of the FrFT and show how the semigroup theory allows us to find new reconstruction formulas for discrete sampled signals. Finally, the results are supplemented with numerical examples that show how the method performs on a standard 1D MR signal reconstruction.
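For reference, one common convention for the FrFT of order $\alpha$ (normalizations vary across the literature) is

```latex
\[
(\mathcal{F}^{\alpha} f)(u)
\;=\;
\sqrt{\frac{1 - i\cot\alpha}{2\pi}}\;
e^{\frac{i}{2}u^{2}\cot\alpha}
\int_{-\infty}^{\infty}
e^{-\,i\,u t \csc\alpha \;+\; \frac{i}{2}t^{2}\cot\alpha}\, f(t)\,dt,
\]
```

which satisfies the semigroup property $\mathcal{F}^{\alpha}\mathcal{F}^{\beta} = \mathcal{F}^{\alpha+\beta}$ and reduces to the ordinary Fourier transform at $\alpha = \pi/2$. It is this semigroup structure that supports the dynamic, time-evolution interpretation of the order.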
Conflict detection is used in various scenarios ranging from interactive decision making (e.g., knowledge-based configuration) to the diagnosis of potentially faulty models (e.g., using knowledge base analysis operations). Conflicts can be regarded as sets of restrictions (constraints) causing an inconsistency. Junker’s QuickXPlain is a divide-and-conquer based algorithm for the detection of preferred minimal conflicts. In this article, we present a novel approach to the detection of such conflicts which is based on speculative programming. We introduce a parallelization of QuickXPlain and empirically evaluate this approach on the basis of synthesized knowledge bases representing feature models. The results of this evaluation show significant performance improvements in the parallelized QuickXPlain version.
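The divide-and-conquer recursion at the heart of QuickXPlain can be sketched as follows; the consistency oracle and the (variable, value) constraint encoding below are illustrative stand-ins, not the feature-model representation evaluated in the article.

```python
def quickxplain(background, constraints, check):
    """Return a preferred minimal conflict among `constraints` relative to
    `background`, or None if everything is consistent. `check` is a
    consistency oracle over a list of constraints (Junker-style QuickXPlain)."""
    if check(background + constraints):
        return None
    return _qx(background, background, constraints, check)

def _qx(B, delta, C, check):
    if delta and not check(B):
        return []                       # B alone is already inconsistent
    if len(C) == 1:
        return list(C)                  # single culprit constraint
    k = len(C) // 2
    C1, C2 = C[:k], C[k:]
    d2 = _qx(B + C1, C1, C2, check)     # conflict part inside C2
    d1 = _qx(B + d2, d2, C1, check)     # conflict part inside C1
    return d1 + d2

# Illustrative oracle: constraints are (variable, value) assignments;
# a set is inconsistent if some variable receives two different values.
def assignments_consistent(cs):
    vals = {}
    for var, val in cs:
        if vals.setdefault(var, val) != val:
            return False
    return True
```

The two recursive calls are independent of each other's inputs only through the returned sets `d1` and `d2`, which is exactly the dependency structure the article's speculative parallelization exploits.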
This paper presents an experimental investigation of the following questions: how does the average-case complexity of random 3-SAT, understood as a function of the order (number of variables) for fixed density (ratio of number of clauses to order) instances, depend on the density? Is there a phase transition in which the complexity shifts from polynomial to exponential in the order? Is the transition dependent on, or independent of, the solver? Our experiment design uses three complete SAT solvers embodying different algorithms: GRASP, CPLEX, and CUDD. We observe new phase transitions for all three solvers, where the median running time shifts from polynomial in the order to exponential. The location of the phase transition appears to be solver-dependent. GRASP shifts from polynomial to exponential complexity near the density of 3.8, CPLEX shifts near density 3, while CUDD exhibits this transition between densities of 0.1 and 0.5. This experimental result underscores the dependence of the complexity phase transition on the solver, and challenges the widely held belief that random 3-SAT exhibits a phase transition in computational complexity very close to the crossover point.
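The fixed-density instance model described above (clauses of three distinct variables, each literal negated with probability 1/2, clause count set by density times order) can be sketched as a generator; the DIMACS-style literal encoding is an assumption for illustration.

```python
import random

def random_3sat(n_vars, density, rng=None):
    """Generate a random 3-SAT instance with round(density * n_vars) clauses.
    Each clause draws 3 distinct variables uniformly; each is negated with
    probability 1/2. Literals are DIMACS-style: +v / -v for v in 1..n_vars."""
    rng = rng or random.Random()
    n_clauses = round(density * n_vars)
    clauses = []
    for _ in range(n_clauses):
        variables = rng.sample(range(1, n_vars + 1), 3)
        clauses.append(tuple(v if rng.random() < 0.5 else -v
                             for v in variables))
    return clauses
```

Sweeping `n_vars` at a fixed `density` and timing a solver on such instances reproduces the experimental setup whose median running times the paper analyzes.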
Cloud Computing is a promising paradigm for parallel computing. However, as Cloud-based services become more dynamic, resource provisioning in Clouds becomes more challenging. The paradigm, with its promise of virtually infinite resources, seems well suited to solving resource-greedy scientific computing problems. In a Cloud, an appropriate number of Virtual Machines (VM) is created and allocated in physical resources for executing jobs. This work focuses on the Infrastructure as a Service (IaaS) model where custom VMs are launched in appropriate hosts available in a Cloud to execute scientific experiments coming from multiple users. Finding optimal solutions to allocate VMs to physical resources is an NP-complete problem, and therefore many heuristics have been developed. In this work, we describe and evaluate two Cloud schedulers based on Swarm Intelligence (SI) techniques, particularly Ant Colony Optimization (ACO) and Particle Swarm Optimization (PSO). The main performance metrics to study are the number of users serviced by the Cloud and the total number of created VMs in online (non-batch) scheduling scenarios. We also perform a sensitivity analysis by varying each algorithm's specific parameter values to evaluate the impact on the two objective metrics. The intra-Cloud network traffic is also measured. Simulated experiments performed using CloudSim and job data from real scientific problems show that the use of SI-based techniques succeeds in balancing the studied metrics compared to Genetic Algorithms.
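The ant-decision rule underlying ACO-style Cloud schedulers can be sketched as a probabilistic host choice weighted by pheromone and a capacity heuristic; all names and the free-capacity heuristic below are illustrative assumptions, not the paper's exact scheduler.

```python
import random

def aco_select_host(free_capacity, pheromone, alpha=1.0, beta=1.0, rng=None):
    """Pick a host index for a new VM with probability proportional to
    pheromone[h]**alpha * free_capacity[h]**beta (roulette-wheel selection,
    the standard ACO transition rule). Names/heuristic are illustrative."""
    rng = rng or random.Random()
    weights = [pheromone[h] ** alpha * max(free_capacity[h], 1e-9) ** beta
               for h in range(len(free_capacity))]
    total = sum(weights)
    r, acc = rng.random() * total, 0.0
    for h, w in enumerate(weights):
        acc += w
        if r <= acc:
            return h
    return len(weights) - 1             # guard against rounding at the end
```

In an online scenario, each arriving VM request triggers one such selection, after which the chosen host's pheromone is reinforced or evaporated according to the scheduling outcome.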
This paper develops the idea of min-max robust experiment design for dynamic system identification. The idea of min-max experiment design has been explored in the statistics literature. However, the technique is virtually unknown by the engineering community and, accordingly, there has been little prior work on examining its properties when applied to dynamic system identification. This paper initiates an exploration of these ideas. The paper considers linear systems with energy (or power) bounded inputs. We assume that the parameters lie in a given compact set and optimise the worst case over this set. We also provide a detailed analysis of the solution for an illustrative one parameter example and propose a convex optimisation algorithm that can be applied more generally to a discretised approximation to the design problem. We also examine the role played by different design criteria and present a simulation example illustrating the merits of the proposed approach. 相似文献
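In notation chosen here for illustration (not verbatim from the paper), the min-max design problem takes the form

```latex
\[
\Phi_u^{\star}
\;=\;
\arg\min_{\Phi_u \in \mathcal{U}}\;
\max_{\theta \in \Theta}\;
J\!\left(M^{-1}(\theta, \Phi_u)\right),
\]
```

where $\Phi_u$ ranges over the energy- or power-bounded input spectra $\mathcal{U}$, $\Theta$ is the given compact parameter set, $M(\theta, \Phi_u)$ is the Fisher information matrix, and $J(\cdot)$ is a scalar design criterion (e.g. trace or determinant of the inverse information matrix). The outer minimization over a discretized $\mathcal{U}$ is what the proposed convex optimisation algorithm addresses.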
Particle tracking in turbulent flows in complex domains requires accurate interpolation of the fluid velocity field. If grids are non-orthogonal and curvilinear, the most accurate available interpolation methods fail. We propose an accurate interpolation scheme based on Taylor series expansion of the local fluid velocity about the grid point nearest to the desired location. The scheme is best suited for curvilinear grids with non-orthogonal computational cells. We present the scheme with second-order accuracy, yet the order of accuracy of the method can be adapted to that of the Navier-Stokes solver. An application to particle dispersion in a turbulent wavy channel is presented, for which the scheme is tested against standard linear interpolation. Results show that significant discrepancies can arise in the particle displacement produced by the two schemes, particularly in the near-wall region, which is often discretized with highly distorted computational cells.
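The core idea, expansion about the nearest grid node with derivatives from finite differences, reduces in one dimension to the sketch below. This is a 1D, uniform-grid illustration only; the paper's scheme operates on 3D non-orthogonal curvilinear grids.

```python
import numpy as np

def taylor_interp_1d(x_grid, u, x):
    """Interpolate u(x) by a second-order Taylor expansion about the grid
    node nearest to x, with derivatives from central finite differences.
    Assumes a uniform 1D grid (illustrative simplification)."""
    i = int(np.argmin(np.abs(x_grid - x)))
    i = min(max(i, 1), len(x_grid) - 2)              # keep a central stencil
    h = x_grid[1] - x_grid[0]                        # uniform spacing assumed
    du = (u[i + 1] - u[i - 1]) / (2.0 * h)           # first derivative
    d2u = (u[i + 1] - 2.0 * u[i] + u[i - 1]) / h**2  # second derivative
    dx = x - x_grid[i]
    return u[i] + du * dx + 0.5 * d2u * dx**2
```

Because the expansion carries the curvature term, the sketch reproduces quadratic fields exactly, which is where standard linear interpolation already incurs an error proportional to the local second derivative.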