20 similar documents found; search time: 15 ms
1.
Using string kernels, languages can be represented as hyperplanes in a high dimensional feature space. We discuss the language-theoretic
properties of this formalism with particular reference to the implicit feature maps defined by string kernels, considering
the expressive power of the formalism, its closure properties and its relationship to other formalisms. We present a new family
of grammatical inference algorithms based on this idea. We demonstrate that some mildly context-sensitive languages can be
represented in this way and that it is possible to efficiently learn these using kernel PCA. We experimentally demonstrate
the effectiveness of this approach on some standard examples of context-sensitive languages using small synthetic data sets.
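The abstract above omits the algorithmic details. As a rough, hypothetical illustration of the general idea (representing strings via a kernel-induced feature map and applying kernel PCA), the following sketch uses a simple p-spectrum substring kernel, which is one standard string kernel and not necessarily the one used in the paper:

```python
import numpy as np
from itertools import product

def spectrum_features(s, alphabet, p):
    """p-spectrum feature map: count every length-p string over the alphabet in s."""
    subs = [''.join(t) for t in product(alphabet, repeat=p)]
    return np.array([sum(1 for i in range(len(s) - p + 1) if s[i:i + p] == u)
                     for u in subs], dtype=float)

def kernel_pca(strings, alphabet, p, k):
    """Coordinates of the strings along the top-k principal axes in feature space."""
    X = np.stack([spectrum_features(s, alphabet, p) for s in strings])
    K = X @ X.T                              # string-kernel Gram matrix
    n = len(strings)
    J = np.eye(n) - np.ones((n, n)) / n
    w, V = np.linalg.eigh(J @ K @ J)         # eigendecompose the centered kernel
    idx = np.argsort(w)[::-1][:k]            # largest eigenvalues first
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Positive examples from the context-free language a^n b^n plus some negatives
data = ["ab", "aabb", "aaabbb", "ba", "abab", "aab"]
coords = kernel_pca(data, "ab", 2, 2)
print(coords.shape)  # (6, 2)
```

A learner along these lines would characterize a language by the low-dimensional subspace (hyperplane) spanned by its training strings in feature space.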
2.
Structure of Weakly Invertible Semi-Input-Memory Finite Automata with Delay 1
Semi-input-memory finite automata, a kind of finite automata introduced by the first author of this paper for studying error propagation, are a generalization of input-memory finite automata obtained by appending an autonomous finite automaton component. In this paper, we give a characterization of the structure of weakly invertible semi-input-memory finite automata with delay 1 in which the state graph of each autonomous finite automaton is a cycle. Combined with a result on mutual invertibility of finite automata obtained by the authors recently, this leads to a characterization of the structure of feedforward inverse finite automata with delay 1.
3.
M. E. Stepantsov 《Mathematical Models and Computer Simulations》2018,10(2):249-254
This paper presents a modification of the stochastic cellular automaton-based version of the “power–society–economics” model, which describes the dynamics of power distribution in a hierarchy, taking into account social and economic processes and corruption in the system of power. This approach allows the model to incorporate a number of new factors, including transport communications among municipalities and regions. The basic principles of the model are presented, a simulation system is constructed, and a number of numerical experiments that cannot be implemented with a continuous deterministic model are conducted. Some new results are obtained, including the relationship between the power dynamics and the corruption level, as well as the influence of the transport network on the dynamics of the system.
4.
5.
In this paper we deal with the problem of estimating the marking of a labeled Petri net with nondeterministic transitions. In particular, we consider the case in which nondeterminism is due to the presence of transitions that share the same label and that can be simultaneously enabled. Under the assumptions that the structure of the net is known, the initial marking is known, the transition labels can be observed, and the nondeterministic transitions are contact-free, we present a technique for characterizing the set of markings that are consistent with the actual observation. More precisely, we show that the set of markings consistent with an observed word can be represented by a linear system with a fixed structure that does not depend on the length of the observed word.
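As a toy illustration of the estimation problem (not the paper's linear-system characterization), the following sketch brute-forces the set of markings consistent with an observed label word for a small made-up net in which two transitions share the same label:

```python
import numpy as np

# Hypothetical labeled net: 2 places x 3 transitions, given by Pre/Post matrices
Pre    = np.array([[1, 1, 0],
                   [0, 0, 1]])
Post   = np.array([[0, 0, 1],
                   [1, 1, 0]])
labels = ['a', 'a', 'b']   # t1 and t2 share label 'a' (the nondeterministic case)

def consistent_markings(m0, word):
    """All markings reachable from m0 by some firing sequence whose labels spell `word`."""
    current = {tuple(m0)}
    for sym in word:
        nxt = set()
        for m in current:
            m = np.array(m)
            for t, lab in enumerate(labels):
                if lab == sym and np.all(m >= Pre[:, t]):   # t enabled, label matches
                    nxt.add(tuple(int(v) for v in m - Pre[:, t] + Post[:, t]))
        current = nxt
    return current

print(consistent_markings([2, 0], "aa"))  # {(0, 2)}
```

The paper shows that this set can instead be described by a linear system whose size does not grow with the length of the observed word; the enumeration above is only meant to make the definition concrete.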
6.
The convergence to steady state solutions of the Euler equations for high order weighted essentially non-oscillatory (WENO)
finite difference schemes with the Lax-Friedrichs flux splitting (Jiang and Shu, in J. Comput. Phys. 126:202–228, 1996) is investigated. Numerical evidence in Zhang and Shu (J. Sci. Comput. 31:273–305, 2007) indicates that there exist slight post-shock oscillations when we use high order WENO schemes to solve problems containing
shock waves. Even though these oscillations are small in their magnitude and do not affect the “essentially non-oscillatory”
property of the WENO schemes, they are indeed responsible for the numerical residue to hang at the truncation error level
of the scheme instead of settling down to machine zero. Unlike the strategy adopted in Zhang and Shu (J. Sci. Comput. 31:273–305, 2007), in which a new smoothness indicator was introduced to facilitate convergence to steady states, in this paper we study the
effect of the local characteristic decomposition on steady state convergence. Numerical tests indicate that the slight post-shock
oscillation has a close relationship with the local characteristic decomposition process. When this process is based on an
average Jacobian at the cell interface using the Roe average, as is the standard procedure for WENO schemes, such post-shock
oscillation appears. If we instead use upwind-biased interpolation to approximate the physical variables including the velocity
and enthalpy on the cell interface to compute the left and right eigenvectors of the Jacobian for the local characteristic
decomposition, the slight post-shock oscillation can be removed or reduced significantly and the numerical residue settles
down to lower values than other WENO schemes and can reach machine zero for many test cases. This new procedure is also effective
for higher order WENO schemes and for WENO schemes with different smoothness indicators.
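For readers unfamiliar with the Roe average mentioned above, the following minimal sketch (the standard textbook construction, not code from the paper) computes the density-weighted interface averages of velocity and total enthalpy from which the Jacobian's eigenvectors are typically built:

```python
import math

def roe_average(rho_l, u_l, H_l, rho_r, u_r, H_r, gamma=1.4):
    """Roe averages of velocity, total enthalpy, and sound speed at a cell interface."""
    wl, wr = math.sqrt(rho_l), math.sqrt(rho_r)
    u_hat = (wl * u_l + wr * u_r) / (wl + wr)   # density-weighted velocity
    H_hat = (wl * H_l + wr * H_r) / (wl + wr)   # density-weighted total enthalpy
    c_hat = math.sqrt((gamma - 1.0) * (H_hat - 0.5 * u_hat**2))  # sound speed
    return u_hat, H_hat, c_hat

# Equal densities reduce the Roe average to the arithmetic mean
print(roe_average(1.0, 2.0, 10.0, 1.0, 4.0, 20.0))
```

The paper's observation is that building the local characteristic decomposition from this interface average triggers the slight post-shock oscillation, whereas upwind-biased interpolation of the physical variables avoids or reduces it.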
7.
The reconstruction of geometry or, in particular, the shape of objects is a common issue in image analysis. Starting from
a variational formulation of such a problem on a shape manifold we introduce a regularization technique incorporating statistical
shape knowledge. The key idea is to consider a Riemannian metric on the shape manifold which reflects the statistics of a
given training set. We investigate the properties of the regularization functional and illustrate our technique by applying
it to region-based and edge-based segmentation of image data. In contrast to previous works our framework can be considered
on arbitrary (finite-dimensional) shape manifolds and allows the use of Riemannian metrics for regularization of a wide class
of variational problems in image processing.
8.
Monica Hernandez, Matias N. Bossa, Salvador Olmos 《International Journal of Computer Vision》2009,85(3):291-306
Computational Anatomy aims for the study of variability in anatomical structures from images. Variability is encoded by the
spatial transformations existing between anatomical images and a template selected as reference. In the absence of a more
justified model for inter-subject variability, transformations are considered to belong to a convenient family of diffeomorphisms
which provides a suitable mathematical setting for the analysis of anatomical variability. One of the proposed paradigms for
diffeomorphic registration is the Large Deformation Diffeomorphic Metric Mapping (LDDMM). In this framework, transformations
are characterized as end points of paths parameterized by time-varying flows of vector fields defined on the tangent space
of a Riemannian manifold of diffeomorphisms and computed from the solution of the non-stationary transport equation associated
to these flows. With this characterization, optimization in LDDMM is performed on the space of non-stationary vector field
flows, resulting in a time- and memory-consuming algorithm. Recently, an alternative characterization of paths of diffeomorphisms
based on constant-time flows of vector fields has been proposed in the literature. With this parameterization, diffeomorphisms
constitute solutions of stationary ODEs. In this article, the stationary parameterization is included for diffeomorphic registration
in the LDDMM framework. We formulate the variational problem related to this registration scenario and derive the associated
Euler-Lagrange equations. Moreover, the performance of the non-stationary vs the stationary parameterizations in real and
simulated 3D-MRI brain datasets is evaluated. Compared to the non-stationary parameterization, our proposal provides similar
results in terms of image matching and local differences between the diffeomorphic transformations while drastically reducing
memory and time requirements.
9.
Giovanni Bellettini, Valentina Beorchia, Maurizio Paolini 《Journal of Mathematical Imaging and Vision》2008,32(3):265-291
We introduce and study a two-dimensional variational model for the reconstruction of a smooth generic solid shape E, which can handle self-occlusions and can be considered an improvement of the 2.1D sketch of Nitzberg and Mumford
(Proceedings of the Third International Conference on Computer Vision, Osaka, 1990). We characterize from the topological viewpoint the apparent contour of E, namely, we characterize those planar graphs that are apparent contours of some shape E. This is the classical problem of recovering a three-dimensional layered shape from its apparent contour, which is of interest
in theoretical computer vision. We make use of the so-called Huffman labeling (Machine Intelligence, vol. 6, Am. Elsevier,
New York, 1971), see also the papers of Williams (Ph.D. Dissertation, 1994 and Int. J. Comput. Vis. 23:93–108, 1997) and the paper of Karpenko and Hughes (Preprint, 2006) for related results. Moreover, we show that if E and F are two shapes having the same apparent contour, then E and F differ by a global homeomorphism which is strictly increasing on each fiber along the direction of the eye of the observer.
These two topological theorems allow us to find the domain of the functional ℱ describing the model. Compactness, semicontinuity, and relaxation properties of ℱ are then studied, as well as connections of our model with the problem of completion of hidden contours.
10.
Known algorithms capable of scheduling implicit-deadline sporadic tasks over identical processors at up to 100% utilisation
invariably involve numerous preemptions and migrations. To the challenge of devising a scheduling scheme with as few preemptions
and migrations as possible, for a given guaranteed utilisation bound, we respond with the algorithm NPS-F. It is configurable
with a parameter, trading off guaranteed schedulable utilisation (up to 100%) vs preemptions. For any possible configuration,
NPS-F introduces fewer preemptions than any other known algorithm matching its utilisation bound.
11.
On the Structure of Finite Automata of Which M Is a (Weak) Inverse with Delay τ
Chen Shihua 《Journal of Computer Science and Technology》1986,1(2):54-59
In this paper, we first give a method by which, for any inverse finite automaton M' with delay τ, all invertible finite automata with delay τ of which M' is an inverse with delay τ can be constructed; a universal nondeterministic finite automaton for all finite automata of which M' is an inverse with delay τ can also be constructed. We then give a method by which, for any weak inverse finite automaton M' with delay τ, all weakly invertible finite automata with delay τ of which M' is a weak inverse with delay τ can be constructed; a universal nondeterministic finite automaton for all finite automata of which M' is a weak inverse with delay τ can also be constructed.
12.
In this paper, the Minimum Polynomial Extrapolation method (MPE) is used to accelerate the convergence of the Characteristic–Based–Split
(CBS) scheme for the numerical solution of steady state incompressible flows with heat transfer. The CBS scheme is a fractional
step method for the solution of the Navier–Stokes equations while the MPE method is a vector extrapolation method which transforms
the original sequence into another sequence converging to the same limit faster than the original one, without explicit
knowledge of the sequence generator. The developed algorithm is tested on a two-dimensional benchmark problem (buoyancy–driven
convection problem) where the Navier–Stokes equations are coupled with the temperature equation. The obtained results show
the benefit of applying the extrapolation procedure to the CBS scheme and the reduction in the computational time of the simulation.
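MPE itself is a concrete, well-documented algorithm. The following sketch (a generic implementation of the standard least-squares formulation, not the authors' code) extrapolates the limit of a vector sequence and recovers the exact fixed point of a small linear iteration:

```python
import numpy as np

def mpe(X):
    """Minimum Polynomial Extrapolation from iterates X = [x0 | x1 | ... | x_{k+1}]."""
    U = np.diff(X, axis=1)                         # u_j = x_{j+1} - x_j
    k = U.shape[1] - 1
    c, *_ = np.linalg.lstsq(U[:, :k], -U[:, k], rcond=None)
    c = np.append(c, 1.0)                          # fix the last coefficient c_k = 1
    gamma = c / c.sum()                            # normalize so the weights sum to 1
    return X[:, :k + 1] @ gamma                    # weighted combination of the iterates

# Linear fixed-point iteration x_{j+1} = M x_j + b with known limit (I - M)^{-1} b
M = np.array([[0.5, 0.2],
              [0.1, 0.3]])
b = np.array([1.0, 2.0])
X = np.zeros((2, 4))
for j in range(3):
    X[:, j + 1] = M @ X[:, j] + b
limit = np.linalg.solve(np.eye(2) - M, b)
print(np.allclose(mpe(X), limit))  # True
```

In the paper this acceleration is applied to the sequence of CBS iterates, which is nonlinear, rather than to a linear toy iteration; MPE needs only the iterates themselves, not the sequence generator.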
13.
This paper presents a methodology for treating energy consistency when considering simultaneous impacts and contacts with
friction in the simulation of systems of interconnected bodies. Hard impact and contact is considered where deformation of
the impacting surfaces is negligible. The proposed approach uses a discrete algebraic model of impact in conjunction with
moment and tangential coefficients of restitution (CORs) to develop a general impact law for determining post-impact velocities.
This process depends on impulse–momentum theory, the complementarity conditions, a principle of maximum dissipation, and the
determination of contact forces and post-impact accelerations. The proposed methodology also uses an energy-modifying COR
to directly control the system’s energy profile over time. The key result is that different energy profiles yield different
results and thus energy consistency should be considered carefully in the development of dynamic simulations. The approach
is illustrated on a double pendulum, considered to be a benchmark case, and a bicycle structure.
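The full methodology (complementarity conditions, maximum dissipation, moment and tangential CORs) is beyond a short sketch, but its basic ingredient, an impulse-momentum impact law parameterized by a coefficient of restitution, can be illustrated on the elementary two-body normal collision:

```python
def post_impact_velocities(m1, m2, v1, v2, e):
    """Post-impact normal velocities for two colliding particles.

    Combines conservation of momentum with the Newton restitution condition
    v2' - v1' = -e * (v2 - v1), where e is the coefficient of restitution."""
    J = (1.0 + e) * m1 * m2 / (m1 + m2) * (v1 - v2)   # impulse magnitude
    return v1 - J / m1, v2 + J / m2

# Equal masses, perfectly elastic impact (e = 1): the velocities are exchanged
print(post_impact_velocities(1.0, 1.0, 2.0, 0.0, 1.0))  # (0.0, 2.0)
```

For interconnected bodies the same idea is posed at the impulse-momentum level in generalized coordinates, which is where the complementarity and energy-consistency machinery of the paper comes in.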
14.
This work treats the problem of modelling multibody systems with structural flexibility. By combining linear graph theory
with the principle of virtual work and finite elements, a dynamic formulation is obtained that extends graph-theoretic (GT)
modelling methods to the analysis of thin flexible plates for multibody systems. The system is represented by a linear graph,
in which nodes represent reference frames on flexible plates, and edges represent components that connect these frames. To
generate the equations of motion with elastic deformations, the flexible plates are discretized using a triangular thin shell
finite element based on the discrete Kirchhoff criterion, which can be used to discretize bidirectional bodies such as satellite panels, flatbed trailers, and mechanisms with
plates. Three flexible systems with plates are analyzed to illustrate the performance of this new variational graph-theoretic
formulation and its ability to generate directly a set of motion equations for flexible multibody systems (FMS) without additional
user input.
15.
In this paper, we propose a method for the verification of timed properties for real-time systems featuring a preemptive scheduling
policy: the system, modeled as a scheduling time Petri net, is first translated into a linear hybrid automaton to which it
is time-bisimilar. Timed properties can then be verified using HyTech. The efficiency of this approach rests on two major points: first, the translation minimizes the number
of variables (clocks) of the resulting automaton, which is a critical parameter for the efficiency of the ensuing verification.
Second, the translation is performed by an over-approximating algorithm that is based on Difference Bound Matrices and therefore efficient, yet nonetheless produces a time-bisimilar automaton despite the over-approximation. The proposed modeling and
verification method are generic enough to account for many scheduling policies. In this paper, we specifically show how to
deal with Fixed Priority and Earliest Deadline First policies, with the possibility of using Round-Robin for tasks with the
same priority. We have implemented the method and give some experimental results illustrating its efficiency.
16.
The potential flow equations which govern the free-surface motion of an ideal fluid (the water wave problem) are notoriously
difficult to solve for a number of reasons. First, they are a classical free-boundary problem where the domain shape is one
of the unknowns to be found. Additionally, they are strongly nonlinear (with derivatives appearing in the nonlinearity) without
a natural dissipation mechanism so that spurious high-frequency modes are not damped. In this contribution we address the
latter of these difficulties using a surface formulation (which addresses the former complication) supplemented with physically-motivated
viscous effects recently derived by Dias et al. (Phys. Lett. A 372:1297–1302, 2008). The novelty of our approach is to derive a weakly nonlinear model from the surface formulation of Zakharov (J. Appl. Mech.
Tech. Phys. 9:190–194, 1968) and Craig and Sulem (J. Comput. Phys. 108:73–83, 1993), complemented with the viscous effects mentioned above. Our new model is simple to implement while being both faithful to
the physics of the problem and extremely stable numerically.
17.
In this paper, numerical evaluation of infinite Bessel transforms with high frequency is considered. We first derive an asymptotic formula by using integration by parts. Next, we use an interpolation formula to evaluate the infinite Bessel transforms by choosing suitable bases and nodes. The corresponding error results are proved, and numerical examples are shown to illustrate the efficiency and accuracy of the presented formulae.
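As a point of comparison for the high-frequency formulae developed in the paper, a naive evaluation of an infinite Bessel transform (a hypothetical helper assuming SciPy, not the paper's method) simply truncates the tail and relies on adaptive quadrature, which must resolve every oscillation once the frequency is large:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

def bessel_transform(f, omega, cutoff=60.0, limit=1000):
    """Approximate the transform  int_0^inf f(x) J_0(omega x) dx  by truncating at `cutoff`.

    Works only when f decays fast enough; the cost grows with omega because the
    adaptive quadrature must resolve every oscillation of J_0(omega x)."""
    val, _ = quad(lambda x: f(x) * j0(omega * x), 0.0, cutoff, limit=limit)
    return val

# Known closed form:  int_0^inf e^{-x} J_0(omega x) dx = 1 / sqrt(1 + omega^2)
omega = 10.0
approx = bessel_transform(lambda x: np.exp(-x), omega)
print(abs(approx - 1.0 / np.sqrt(1.0 + omega**2)))
```

The asymptotic and interpolation formulae in the paper instead become more accurate as the frequency grows, at essentially fixed cost.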
18.
19.
Earlier approximate response time analysis (RTA) methods for tasks with offsets (transactional task model) exhibit two major
deficiencies: (i) They overestimate the calculated response times resulting in an overly pessimistic result. (ii) They suffer
from time complexity problems resulting in an RTA method that may not be applicable in practice. This paper shows how these
two problems can be alleviated in one single fast-and-tight RTA method that combines the best of both worlds: high
precision response times and a fast approximate RTA method.
Simulation studies, on randomly generated task sets, show that the response time improvement is significant, typically about 15%
tighter response times in 50% of the cases, resulting in about 12% higher admission probability for low priority tasks subjected
to admission control. Simulation studies also show that speedups of more than two orders of magnitude, for realistically sized
tasks sets, compared to earlier RTA analysis techniques, can be obtained.
Other improvements such as Palencia Gutiérrez, González Harbour (Proceedings of the 20th IEEE real-time systems symposium
(RTSS), pp. 328–339, 1999), Redell (Technical Report TRITA-MMK 2003:4, Dept. of Machine Design, KTH, 2003) are orthogonal and complementary, which means that our method can easily be incorporated into those methods as well. Hence, we
conclude that the fast-and-tight RTA method presented is the preferred analysis technique when tight response-time estimates
are needed, and that we do not need to sacrifice precision for analysis speed; both are obtained with one single method.
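The paper's transactional (offset-aware) analysis is elaborate; as background, the following sketch shows only the classical fixed-point RTA recurrence for independent fixed-priority periodic tasks, which the offset-based methods refine (the task set is made up for illustration):

```python
import math

def response_time(C, T, i, max_iter=1000):
    """Classic RTA fixed point: R_i = C_i + sum over higher-priority j of ceil(R_i/T_j)*C_j.

    Tasks are indexed in priority order (0 = highest); periods double as implicit
    deadlines. Returns None if the response time exceeds the deadline."""
    R = C[i]
    for _ in range(max_iter):
        R_new = C[i] + sum(math.ceil(R / T[j]) * C[j] for j in range(i))
        if R_new == R:
            return R            # fixed point reached
        if R_new > T[i]:
            return None         # unschedulable at this priority
        R = R_new
    return None

C = [1, 2, 3]   # worst-case execution times
T = [4, 6, 12]  # periods (= deadlines), in priority order
print([response_time(C, T, i) for i in range(3)])  # [1, 3, 10]
```

The approximate methods discussed above replace the exact interference terms for tasks with offsets by tight upper bounds so that this kind of iteration stays tractable.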
20.
This paper presents the development of the planar bipedal robot ERNIE as well as numerical and experimental studies of the
influence of parallel knee joint compliance on the energetic efficiency of walking in ERNIE. ERNIE has 5 links—a torso, two
femurs and two tibias—and is configured to walk on a treadmill so that it can walk indefinitely in a confined space. Springs
can be attached across the knee joints in parallel with the knee actuators. The hybrid zero dynamics framework serves as the
basis for control of ERNIE’s walking. In the investigation of the effects of compliance on the energetic efficiency of walking,
four cases were studied: one without springs and three with springs of different stiffnesses and preloads. It was found that
for low-speed walking, the addition of soft springs may be used to increase energetic efficiency, while stiffer springs decrease
the energetic efficiency. For high-speed walking, the addition of either soft or stiff springs increases the energetic efficiency
of walking, while stiffer springs improve the energetic efficiency more than do softer springs.
Electronic Supplementary Material: The online version of this article contains supplementary material, which is available to authorized users.