Dynamic memory allocation has been used for decades. However, it has seldom been used in real-time systems, since the worst-case spatial and temporal costs of allocation and deallocation operations are either unbounded, or bounded but with a very large bound.
In this paper, a new allocator called TLSF (Two-Level Segregated Fit) is presented. TLSF is designed and implemented to meet real-time constraints. The proposed allocator exhibits constant-time, O(1), behaviour for both allocation and deallocation, and maintains very good execution times. This paper describes in detail the data structures and functions provided by
TLSF. We also compare TLSF with a representative set of allocators regarding their temporal cost and fragmentation.
Although the paper focuses mainly on timing analysis, a brief comparative study of the fragmentation incurred by the allocators is also included, to give a global view of the allocators' behaviour.
The temporal and spatial results show that TLSF is also a fast allocator and produces fragmentation close to that of the best existing allocators.
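The constant-time bound comes from TLSF's two-level index: a first level that classifies a request by the position of its most significant bit, and a second level that splits each power-of-two range linearly. A minimal sketch of that mapping (a Python illustration; the choice of SL_LOG2 = 4, i.e. 16 subdivisions, and the minimum size of 16 are assumptions for the sketch, not values taken from the paper):

```python
SL_LOG2 = 4  # log2 of the number of second-level subdivisions (16 here)

def tlsf_mapping(size):
    """Map a requested block size (assumed >= 16 here) to TLSF's two-level
    free-list index in O(1): the first level is the position of the most
    significant bit, the second a linear split of that power-of-two range."""
    fl = size.bit_length() - 1                       # first-level index (MSB)
    sl = (size >> (fl - SL_LOG2)) - (1 << SL_LOG2)   # second-level index
    return fl, sl
```

Each (fl, sl) pair names one segregated free list, so finding a suitable list is a couple of bit operations plus a bitmap scan, independent of how many free blocks exist.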
This paper is concerned with the derivation of infinite schedules for timed automata that are in some sense optimal. To cover
a wide class of optimality criteria we start out by introducing an extension of the (priced) timed automata model that includes
both costs and rewards as separate modelling features. A precise definition is then given of what constitutes optimal infinite
behaviours for this class of models. We subsequently show that the derivation of optimal non-terminating schedules for such
double-priced timed automata is computable. This is done by a reduction of the problem to the determination of optimal mean-cycles
in finite graphs with weighted edges. This reduction is obtained by introducing the so-called corner-point abstraction, a powerful abstraction technique which we show preserves optimal schedules.
Most of this work was done while visiting CISS at Aalborg University in Denmark, and it has been supported by CISS and by ACI Cortos, a program of the French Ministry of Research.
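The reduction described above ends in a classical problem: finding a cycle of optimal mean weight in a finite weighted graph. One standard way to solve that final step is Karp's minimum mean cycle algorithm, sketched here in Python as an illustration (not the paper's implementation):

```python
def min_mean_cycle(n, edges):
    """Karp's algorithm: minimum mean edge weight over all cycles.

    n     -- number of vertices (0..n-1)
    edges -- list of directed weighted edges (u, v, w)
    Returns the minimum mean weight, or None if the graph is acyclic.
    """
    INF = float("inf")
    # d[k][v] = minimum weight of a walk with exactly k edges ending at v
    d = [[INF] * n for _ in range(n + 1)]
    d[0] = [0.0] * n  # equivalent to a zero-weight super-source to every vertex
    for k in range(1, n + 1):
        for u, v, w in edges:
            if d[k - 1][u] + w < d[k][v]:
                d[k][v] = d[k - 1][u] + w
    best = None
    for v in range(n):
        if d[n][v] == INF:
            continue
        # Karp's characterisation: mu* = min_v max_k (d_n(v) - d_k(v)) / (n - k)
        worst = max((d[n][v] - d[k][v]) / (n - k)
                    for k in range(n) if d[k][v] < INF)
        if best is None or worst < best:
            best = worst
    return best
```

Maximising reward per cost, as in the double-priced setting, can be handled by the analogous maximum-ratio variant on the corner-point graph.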
Initial algebra semantics is one of the cornerstones of the theory of modern functional programming languages. For each inductive
data type, it provides a Church encoding for that type, a build combinator which constructs data of that type, a fold combinator which encapsulates structured recursion over data of that type, and a fold/build rule which optimises modular programs by eliminating data that is constructed using the build combinator and immediately consumed using the fold combinator for that type. It has long been thought that initial algebra semantics is not expressive enough to provide a similar
foundation for programming with nested types in Haskell. Specifically, the standard folds derived from initial algebra semantics have been considered too weak to capture commonly occurring patterns of recursion
over data of nested types in Haskell, and no build combinators or fold/build rules have until now been defined for nested types. This paper shows that standard folds are, in fact, sufficiently expressive for programming with nested types in Haskell. It also defines build combinators and fold/build fusion rules for nested types. It thus shows how initial algebra semantics provides a principled, expressive, and elegant
foundation for programming with nested types in Haskell.
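For ordinary lists, the fold and build combinators and their fusion rule can be sketched as follows (a Python rendering of the classic list-level rule, given only to illustrate the combinators the paper generalises to nested types; the names fold, build, and enum are illustrative):

```python
def fold(f, z, xs):
    """Structured recursion over a list: fold(f, z, [a, b]) = f(a, f(b, z))."""
    acc = z
    for x in reversed(xs):
        acc = f(x, acc)
    return acc

def build(g):
    """Construct a list from its Church encoding: g is handed cons and nil."""
    return g(lambda x, xs: [x] + xs, [])

def enum(n):
    """A producer of [1..n] written against abstract cons/nil."""
    def g(cons, nil):
        out = nil
        for i in range(n, 0, -1):
            out = cons(i, out)
        return out
    return g

# Without fusion: materialise the list, then consume it.
total = fold(lambda x, acc: x + acc, 0, build(enum(5)))
# The fold/build rule, fold(f, z, build(g)) == g(f, z), removes the list:
fused = enum(5)(lambda x, acc: x + acc, 0)
```

The fusion rule lets a compiler replace the left-hand form by the right-hand one, so the intermediate list is never allocated; the paper's contribution is that analogous combinators and rules exist for nested types.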
An iteration algorithm for the analysis of speckle interference patterns is presented. First, four digitized phase-shifted patterns are locally averaged. The phase information is then extracted by the usual phase-shift algorithm. The wrapped phase is in turn used to reconstruct four new phase-shifted patterns. These three steps form a cycle, and repeating them is highly effective at suppressing speckle noise. A theoretical study shows that the iterated phase converges to a perfect result under ideal conditions. In general, the iteration introduces little error but improves the phase information a great deal. The signal-to-noise ratio rises as additional iterations are performed.
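The three-step cycle described above can be sketched as follows (an illustrative Python sketch, not the authors' implementation; it assumes the four patterns follow I_k = A + B·cos(φ + kπ/2), and the 3×3 averaging window is an assumed choice):

```python
import math

def box_average(img):
    """Step 1: 3x3 local average of one pattern (list of lists of floats)."""
    h, w = len(img), len(img[0])
    return [[sum(img[yy][xx]
                 for yy in range(max(0, y - 1), min(h, y + 2))
                 for xx in range(max(0, x - 1), min(w, x + 2)))
             / ((min(h, y + 2) - max(0, y - 1)) * (min(w, x + 2) - max(0, x - 1)))
             for x in range(w)] for y in range(h)]

def extract_phase(p0, p1, p2, p3):
    """Step 2: four-step formula phi = atan2(I3 - I1, I0 - I2)."""
    return [[math.atan2(p3[y][x] - p1[y][x], p0[y][x] - p2[y][x])
             for x in range(len(p0[0]))] for y in range(len(p0))]

def reconstruct(phase, a=1.0, b=1.0):
    """Step 3: rebuild four ideal phase-shifted patterns from the phase."""
    return [[[a + b * math.cos(p + k * math.pi / 2) for p in row]
             for row in phase] for k in range(4)]

def iterate(patterns, cycles=3):
    """Repeat the average -> extract -> reconstruct cycle to suppress noise."""
    phase = None
    for _ in range(cycles):
        smoothed = [box_average(p) for p in patterns]
        phase = extract_phase(*smoothed)
        patterns = reconstruct(phase)
    return phase
```

On noiseless input the cycle is a fixed point, which matches the claim that the iteration introduces little error while the averaging step progressively removes speckle noise.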
The interaction of Al-1 wt% Si with a W-Ti barrier layer in the Al/Ti3W7/SiO2/Si system was studied over the temperature range of 400–500 °C for reaction times up to 300 h. The interaction was found to be diffusion-controlled and to occur in a layer-by-layer fashion. The first reaction product is always Al12W, which forms at the Al/Ti3W7 interface. With excess W in the system, Al will eventually be completely converted to Al12W, and further interactions result in the formation of an Al4W layer at the Al12W/Ti3W7 interface. The amount of Al4W increases at the expense of Al12W. Ti plays a minor role in the interaction and forms a small amount of Al3Ti precipitates in the Al12W matrix. Decomposition of the Ti3W7 pseudoalloy into W and Ti phases is not significant, and is not detected by X-ray diffraction even after annealing at 500 °C for 300 h. The kinetics of Al12W formation follows a parabolic reaction law with an activation energy of 2.53 eV. The sheet resistance of the film is insensitive to compound formation as long as a continuous Al film exists in the system; it increases dramatically when Al is consumed to the extent that it is no longer a continuous film. The sheet resistance of the Al12W layer is estimated to be 570 mΩ/□.
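The reported parabolic law and 2.53 eV activation energy imply how strongly the reaction accelerates with temperature. A small hedged calculation (the Arrhenius form k = k0·exp(−Ea/kBT) and the comparison of the two endpoint temperatures are standard assumptions, not additional data from the study):

```python
import math

K_B = 8.617333e-5   # Boltzmann constant, eV/K
E_A = 2.53          # reported activation energy, eV

def rate_ratio(t1_c, t2_c):
    """Ratio k(T2)/k(T1) of parabolic rate constants from the Arrhenius
    law k = k0 * exp(-E_A / (K_B * T)); temperatures given in Celsius."""
    t1, t2 = t1_c + 273.15, t2_c + 273.15
    return math.exp(E_A / K_B * (1.0 / t1 - 1.0 / t2))

r = rate_ratio(400, 500)        # rate constant is roughly 280x larger at 500 C
thickness_ratio = math.sqrt(r)  # parabolic law x = sqrt(k t): ~17x thicker layer
```

So over the studied range, an equal anneal time at 500 °C grows an Al12W layer on the order of seventeen times thicker than at 400 °C, consistent with a strongly diffusion-controlled reaction.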
Computer integrated manufacturing uses computer technology to integrate a manufacturing system through a man-machine interface that fills the gap between manual operation and machine processes. It is clear that a computer vision-based man-machine interface makes a fully automated system possible. The basic challenge of a vision-based interface is how to extract information from digitized images and convert it into machine-friendly knowledge. Extracting this information often comes down to the problem of shape decomposition. This paper proposes a new approach to decomposing compound shapes without prior knowledge of the scene. The proposed algorithm exploits the fact that planar shapes can be completely described by contour segments and can be decomposed, at their points of maximum concavity, into simpler objects. To reduce spurious decomposition, the decomposed segments are merged into groups by analyzing and evaluating merging hypotheses. The algorithm calculates the linking likelihood by weighting the angular difference between two segments. The techniques are implemented and are also applied to partial shape matching problems for clustering purposes.
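Decomposing a shape at points of maximum concavity presupposes locating the concave (reflex) vertices of its contour; for a polygonal contour this reduces to a cross-product sign test. A minimal sketch (illustrative only, assuming a simple polygon listed counter-clockwise):

```python
def concave_vertices(poly):
    """Indices of concave (reflex) vertices of a simple polygon given as a
    counter-clockwise list of (x, y) points; these are the candidate split
    points for decomposition at maximum concavity."""
    n = len(poly)
    out = []
    for i in range(n):
        ax, ay = poly[i - 1]
        bx, by = poly[i]
        cx, cy = poly[(i + 1) % n]
        # z-component of the cross product of the two incident edges
        cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
        if cross < 0:  # right turn in a CCW polygon => reflex vertex
            out.append(i)
    return out
```

A full decomposition would then cut the contour at (pairs of) these vertices and score the resulting segments, e.g. by the angular difference the abstract mentions.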
Development of artificial mechanoreceptors capable of sensing and pre-processing external mechanical stimuli is a crucial step toward constructing neuromorphic perception systems that can learn and store information. Here, bio-inspired artificial fast-adaptive (FA) and slow-adaptive (SA) mechanoreceptors with synapse-like functions are demonstrated for tactile perception. These mechanoreceptors integrate self-powered piezoelectric pressure sensors with synaptic electrolyte-gated field-effect transistors (EGFETs) featuring a reduced graphene oxide channel. The FA pressure sensor is based on a piezoelectric poly(vinylidene fluoride-trifluoroethylene) (P(VDF-TrFE)) thin film, while the SA pressure sensor is enabled by a piezoelectric ionogel with the piezoelectric-ionic coupling effect based on P(VDF-TrFE) and an ionic liquid. Changes in post-synaptic current are achieved through the synaptic effect of the EGFET by regulating the amplitude, number, duration, and frequency of tactile stimuli (pre-synaptic pulses). These devices have great potential to serve as artificial biological mechanoreceptors for future artificial neuromorphic perception systems.
The network function virtualization (NFV) paradigm replaces hardware-dependent network functions with virtual network functions (VNFs) that can be deployed on commodity hardware, including legacy servers. Consequently, the use of NFV is expected to reduce operating and capital expenses, as well as improve the flexibility of service deployment, operation, and management. For many use cases, the VNFs must be visited and invoked in a specific order of execution to compose a complete network service, called a service function chain (SFC). Nonetheless, despite the benefits of NFV and SFC virtualization technologies, their introduction must not harm network performance or service availability. On the one hand, redundancy is seen by network service planners as a well-established mechanism to combat availability issues; at the same time, there is a goal of optimizing resource utilization in order to reduce operational expenditure. In this article, we share our experience in the design and use of a framework, named SPIDER, focused on SFC placement that considers the network infrastructure condition and the required SFC availability to define the placement strategy. SPIDER monitors the status of infrastructure nodes and links and determines which servers the VNFs should be placed on and the number of redundant replicas needed. We present a proof-of-concept of SPIDER using Kubernetes to launch the VNFs as containers. We also use Kubernetes to forward the traffic between the VNFs, composing the service chain. We perform experiments to evaluate the runtime of SPIDER and the SFC delay under different network conditions.
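The abstract does not give SPIDER's replica-sizing rule; under the common assumption of independent node failures, the number of redundant replicas needed to meet an availability target can be sketched as follows (an illustration, not SPIDER's actual algorithm):

```python
import math

def replicas_needed(a, target):
    """Smallest number n of independent replicas of a VNF such that at
    least one is up with probability >= target: 1 - (1 - a)**n >= target.

    a      -- availability of a single replica, in (0, 1)
    target -- required availability contribution of this VNF, in (0, 1)
    """
    if not (0 < a < 1 and 0 < target < 1):
        raise ValueError("availabilities must lie in (0, 1)")
    return math.ceil(math.log(1 - target) / math.log(1 - a))
```

For example, with 99% node availability, meeting a "five nines" target requires three replicas, which is the kind of trade-off between redundancy and resource utilization that the placement strategy must balance.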
Action recognition based on a human skeleton is an extremely challenging research problem. The temporal information contained in the human skeleton is more difficult to extract than the spatial information. Many researchers focus on graph convolution networks and apply them to action recognition. In this study, an action recognition method based on a two-stream network called RNXt-GCN is proposed on the basis of the Spatial-Temporal Graph Convolutional Network (ST-GCN). The human skeleton is first converted into a spatial-temporal graph and a SkeleMotion image, which are input into ST-GCN and ResNeXt, respectively, for performing the spatial-temporal convolution. The convolved features are then fused. The proposed method models the temporal information in an action from the amplitude and direction of the motion, and addresses the shortcoming of isolated temporal information in the ST-GCN. Experiments are performed comprehensively on four datasets: 1) UTD-MHAD, 2) Northwestern-UCLA, 3) NTU RGB+D 60, and 4) NTU RGB+D 120. The proposed model shows very competitive results compared with other models in our experiments. In the experiments on the NTU RGB+D 120 dataset, our proposed model outperforms state-of-the-art two-stream models.
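The paper fuses the two streams at the feature level; as a simple stand-in, a score-level (late) fusion of the two streams' class predictions can be sketched as follows (an illustrative Python sketch with hypothetical logits and a hypothetical weight alpha, not the RNXt-GCN fusion itself):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of class logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def late_fusion(gcn_logits, cnn_logits, alpha=0.5):
    """Fuse the ST-GCN (spatial-temporal graph) stream with the ResNeXt
    (SkeleMotion image) stream by a weighted sum of class probabilities,
    returning the predicted class index."""
    p_gcn, p_cnn = softmax(gcn_logits), softmax(cnn_logits)
    fused = [alpha * a + (1 - alpha) * b for a, b in zip(p_gcn, p_cnn)]
    return fused.index(max(fused))
```

With alpha = 1 or 0 the prediction degenerates to a single stream, which is how such a fusion weight is typically validated against the individual streams.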