20 similar documents found (search time: 32 ms)
1.
In this paper we provide the full spectral decomposition of the Multi-Class Lighthill Whitham Richards (MCLWR) traffic models
described in (Wong et al. in Transp. Res. Part A 36:827–841, 2002; Benzoni-Gavage and Colombo in Eur. J. Appl. Math. 14:587–612, 2003). Even though the eigenvalues of these models can only be found numerically, the knowledge of the spectral structure allows
the use of characteristic-based High Resolution Shock Capturing (HRSC) schemes. We compare the characteristic-based approach
to the component-wise schemes used in (Zhang et al. in J. Comput. Phys. 191:639–659, 2003), and propose two strategies to minimize the oscillatory behavior that can be observed when using the component-wise approach.
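Since the eigenvalues of such multi-class models are generally only available numerically, a concrete sketch may help. The following toy two-class model with Greenshields-type speeds is an illustrative choice (not necessarily the speed laws of the cited papers); it computes the characteristic speeds of the 2×2 flux Jacobian in closed form:

```python
import math

def mclwr_eigenvalues(r1, r2, vf1=1.0, vf2=0.5):
    # characteristic speeds of a two-class LWR model with Greenshields
    # speeds v_i(r) = vf_i * (1 - r), r = r1 + r2 (illustrative choice;
    # densities are assumed normalized so that 0 <= r <= 1)
    r = r1 + r2
    a = vf1 * (1 - r) - vf1 * r1   # d f1 / d r1, with f_i = r_i * v_i(r)
    b = -vf1 * r1                  # d f1 / d r2
    c = -vf2 * r2                  # d f2 / d r1
    d = vf2 * (1 - r) - vf2 * r2   # d f2 / d r2
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)   # real here: (a-d)^2 + 4bc >= 0
    return (tr - disc) / 2, (tr + disc) / 2

lam1, lam2 = mclwr_eigenvalues(0.2, 0.2)
```

For general speed laws the discriminant has no such closed form and a numerical eigensolver is needed, which is the situation the abstract describes.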
2.
This article is intended as a preliminary report on the implementation of a finite volume multilevel scheme for the discretization
of the incompressible Navier-Stokes equations. As is well known, the use of staggered grids (e.g. MAC grids, Perić et al. Comput. Fluids, 16(4), 389–403, (1988)) is a serious impediment for the implementation of multilevel schemes in the context of finite differences.
This difficulty is circumvented here by the use of a colocated finite volume discretization (Faure et al. (2004a) Submitted, Perić et al. Comput. Fluids, 16(4), 389–403, (1988)), for which the algebra of multilevel methods is much simpler than in the context of MAC-type finite
differences. The general ideas and the numerical simulations are presented in this article in the simplified context of a
two-dimensional Burgers equation; the two- and three-dimensional Navier-Stokes equations, introducing new difficulties related
to the incompressibility condition and the time discretization, will be considered elsewhere (see Faure et al. (2004a) Submitted and Faure et al. (2004b), in preparation).
3.
Jesper Andersen Ebbe Elsborg Fritz Henglein Jakob Grue Simonsen Christian Stefansen 《International Journal on Software Tools for Technology Transfer (STTT)》2006,8(6):485-516
We present a declarative language for compositional specification of contracts governing the exchange of resources. It extends
Eber and Peyton Jones’s declarative language for specifying financial contracts (Peyton Jones et al. in The Fun of Programming, 2003)
to the exchange of money, goods and services amongst multiple parties, and complements McCarthy’s Resources, Events and Agents
(REA) accounting model (McCarthy in Account. Rev. LVII(3), 554–578, 1982) with a view-independent formal contract model that supports definition of user-defined contracts, automatic
monitoring under execution and user-definable analysis of their state before, during and after execution. We provide several
realistic examples of commercial contracts and their analyses. A variety of (real) contracts can be expressed in such a fashion
as to support their integration, management and analysis in an operational environment that registers events. The language
design is driven by both domain considerations and semantic language design methods: a contract denotes a set of traces of
events, each of which is an alternative way of concluding the contract successfully, which gives rise to a CSP-style (Brookes
et al. in J. ACM 31(3), 560–599, 1984; Hoare in Communicating Sequential Processes, 1985) denotational semantics. The denotational semantics
drives the development of a sound and complete small-step operational semantics, where a partially executed contract is represented
as a (full) contract that represents the remaining contractual commitments. This operational semantics is then systematically
refined in two stages to an instrumented operational semantics that reflects the bookkeeping practice of identifying the specific
contractual commitment a particular event matches at the time the event occurs, as opposed to delaying this matching until
the contract is concluded.
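The idea that a contract denotes a set of event traces, each an alternative way of concluding it successfully, can be sketched with a few combinators (the names and encoding below are illustrative, not the paper's actual language):

```python
# A contract denotes the set of event traces that conclude it
# successfully; each combinator builds such a set.

def done():
    return {()}                       # only the empty trace succeeds

def event(e):
    return {(e,)}                     # exactly this one event must occur

def then(c1, c2):
    # sequential composition: a trace of c1 followed by a trace of c2
    return {t1 + t2 for t1 in c1 for t2 in c2}

def alt(c1, c2):
    # alternative ways of concluding the contract
    return c1 | c2

# "deliver the goods, then either pay cash or pay by invoice"
sale = then(event("deliver"),
            alt(event("pay_cash"), event("pay_invoice")))
```

Here `sale` contains exactly the two traces that discharge the contract, mirroring the trace-set denotation the abstract describes.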
4.
Christophe Vandekerckhove Ioannis Kevrekidis Dirk Roose 《Journal of scientific computing》2009,39(2):167-188
The long-term dynamic behavior of many dynamical systems evolves on a low-dimensional, attracting, invariant slow manifold,
which can be parameterized by only a few variables (“observables”). The explicit derivation of such a slow manifold (and thus,
the reduction of the long-term system dynamics) is often extremely difficult or practically impossible. For this class of
problems, the equation-free framework has been developed to enable performing coarse-grained computations, based on short
full model simulations. Each full model simulation should be initialized so that the full model state is consistent with the
values of the observables and close to the slow manifold. To compute such an initial full model state, a class of constrained
runs functional iterations was proposed (Gear and Kevrekidis, J. Sci. Comput. 25(1), 17–28, 2005; Gear et al., SIAM J. Appl. Dyn. Syst. 4(3), 711–732, 2005). The schemes in this class only use the full model simulator and converge, under certain conditions, to an approximation
of the desired state on the slow manifold. In this article, we develop an implementation of the constrained runs scheme that
is based on a (preconditioned) Newton-Krylov method rather than on a simple functional iteration. The functional iteration
and the Newton-Krylov method are compared in detail using a lattice Boltzmann model for one-dimensional reaction-diffusion
as the full model simulator. Depending on the parameters of the lattice Boltzmann model, the functional iteration may converge
slowly or even diverge. We show that both issues are largely resolved by using the Newton-Krylov method, especially when a
coarse grid correction preconditioner is incorporated.
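The contrast between a plain functional iteration and a Newton method can be illustrated on a scalar toy fixed-point problem (purely illustrative; the paper's setting is a preconditioned Newton-Krylov method applied to a lattice Boltzmann state):

```python
def g(x):
    # toy map with a fixed point at x = 1 where |g'(1)| = 3 > 1,
    # so plain functional iteration is repelled from it
    return x ** 3

def functional_iteration(x, steps):
    for _ in range(steps):
        x = g(x)
    return x

def newton(x, steps):
    # solve F(x) = x - g(x) = 0 by Newton's method
    # (the derivative F'(x) = 1 - 3x^2 is written out by hand)
    for _ in range(steps):
        x = x - (x - g(x)) / (1 - 3 * x ** 2)
    return x

x0 = 1.3
diverged = functional_iteration(x0, 6)   # grows without bound
converged = newton(x0, 10)               # converges to the fixed point 1
```

The same dichotomy motivates the paper: where the constrained runs functional iteration converges slowly or diverges, a Newton-type solve of the fixed-point equation can still succeed.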
5.
Fuzzy regression using least absolute deviation estimators
Seung Hoe Choi James J. Buckley 《Soft Computing - A Fusion of Foundations, Methodologies and Applications》2008,12(3):257-263
In fuzzy regression, which was first proposed by Tanaka et al. (Eur J Oper Res 40:389–396, 1989; Int Cong Appl Syst Cybern
4:2933–2938, 1980; IEEE Trans Syst Man Cybern 12:903–907, 1982), there is a tendency that the greater the values of independent
variables, the wider the width of the estimated dependent variables. This causes a decrease in the accuracy of the fuzzy regression
model constructed by the least squares method.
This paper suggests the least absolute deviation estimators to construct the fuzzy regression model, and investigates the
performance of the fuzzy regression models with respect to a certain error measure. Simulation studies and examples show that
the proposed model produces less error than the fuzzy regression model studied by many authors using the least squares
method when the data contain fuzzy outliers.
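The robustness of least absolute deviation (LAD) over least squares in the presence of an outlier can be sketched on crisp data with a one-parameter model (a toy illustration, not the paper's fuzzy estimators):

```python
def ls_slope(xs, ys):
    # least-squares slope for the one-parameter model y = b * x
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def lad_slope(xs, ys):
    # least-absolute-deviation slope: the objective is piecewise linear
    # in b, so a minimizer is found among the ratios y_i / x_i
    candidates = [y / x for x, y in zip(xs, ys)]
    return min(candidates,
               key=lambda b: sum(abs(y - b * x) for x, y in zip(xs, ys)))

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.0, 2.1, 2.9, 4.2, 25.0]   # the last point is a gross outlier

b_lad = lad_slope(xs, ys)   # stays near the bulk of the data
b_ls = ls_slope(xs, ys)     # dragged upward by the outlier
```

The LAD fit stays close to the slope suggested by the first four points, while the least-squares fit is pulled toward the outlier, which is the effect the abstract exploits for fuzzy outliers.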
6.
In this paper we present an extension of the reservoir technique (see Alouges et al., Submitted; Alouges et al. (2002a),
In: Finite volumes for complex applications, III, pp. 247–254, Marseille; Alouges et al. (2002b), C. R. Math. Acad. Sci. Paris, 335(7), 627–632) for two-dimensional advection equations with non-constant velocities. The purpose of this work is to
decrease the numerical diffusion of finite volume schemes by correcting the numerical directions of propagation, using a so-called
corrector vector combined with the reservoirs. We then introduce an object called the velocities rose in order to minimize the algorithmic complexity of this method.
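The numerical diffusion that the reservoir technique aims to reduce is easy to exhibit with a minimal first-order upwind scheme for 1D advection (an illustrative uncorrected baseline, not the scheme of the paper):

```python
def upwind_step(u, c):
    # one first-order upwind step for u_t + a*u_x = 0 (a > 0),
    # with CFL number c = a*dt/dx and periodic boundary conditions
    return [u[i] - c * (u[i] - u[i - 1]) for i in range(len(u))]

n = 100
u = [1.0 if 10 <= i < 20 else 0.0 for i in range(n)]  # square pulse
for _ in range(40):
    u = upwind_step(u, c=0.5)

# the pulse has advected, but its sharp edges are smeared out
peak, mass = max(u), sum(u)
```

The scheme is conservative (the total mass is unchanged), yet the peak of the transported square pulse drops well below 1: that smearing is the numerical diffusion the corrector vector is designed to counteract.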
7.
A Growing and Pruning Radial Basis Function (GAP-RBF) network has recently been proposed by Huang et al. [IEEE Transactions on Systems, Man and Cybernetics-Part B: Cybernetics, 34(6) (2004), 2284–2292]. However, its performance in signal processing applications is not yet clear. In this paper, the GAP-RBF network
is used for solving the communication channel equalization problem. The simulation results demonstrate that the GAP-RBF equalizer
outperforms other equalizers, such as the recurrent neural network and MRAN, on linear and nonlinear channel models in terms of bit
error rate.
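A minimal Gaussian RBF network forward pass may clarify the kind of model being grown and pruned (the GAP-RBF growing/pruning logic itself is omitted; centers, widths and weights below are illustrative):

```python
import math

def rbf_output(x, centers, widths, weights, bias=0.0):
    # Gaussian RBF network: y = bias + sum_k w_k * exp(-||x - c_k||^2 / s_k^2)
    y = bias
    for c, s, w in zip(centers, widths, weights):
        d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        y += w * math.exp(-d2 / (s * s))
    return y

# toy two-centre equalizer: the sign of y would give the symbol decision
y = rbf_output([0.2, -0.1],
               centers=[[0.0, 0.0], [1.0, 1.0]],
               widths=[1.0, 1.0],
               weights=[1.0, -1.0])
```

In an equalizer, `x` would be a window of received channel samples and the thresholded output the recovered symbol; GAP-RBF adapts the number of such hidden units online.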
8.
Francesca Levi 《Acta Informatica》2006,42(6-7):429-500
We present: (i) an encoding of Boxed Ambients (BA) into a variant of Safe Ambients; and (ii) a new type system for multi-level security of Safe Ambients in the style of Cardelli et al. (Information and Computation 177(2), 160–194 (2002)) and Dezani-Ciancaglini and Salvo (Security types for mobile safe ambients. In: Proceedings of ASIAN '00, LNCS 1961, pp. 215–236. Springer Verlag (2000)). Then, we show that the types, when applied to the encoded BA processes, permit accurate verification of Mandatory Access Control policies of the source processes.
9.
We present an a posteriori residual analysis for the Chorin-Temam projection scheme approximating the time-dependent Stokes model (Chorin
in Math. Comput. 23:341–353, 1969; Temam in Arch. Ration. Mech. Appl. 33:377–385, 1969). Based on the multi-step approach introduced in Bergam et al. (Math. Comput. 74(251):1117–1138, 2004), we derive error estimators, with respect to both time and space approximations, related to the diffusive and incompressible
parts of the Stokes equations. Using a conforming finite element discretization, we prove the equivalence between error and estimators
under specific conditions.
11.
In a recent paper, van Binsbergen and Brandt (Computational Economics, 29, 355–367, 2007), using the method of Brandt et al. (Review of Financial Studies, 18, 831–873, 2005), argue, in the context of a portfolio choice problem with CRRA preferences, that value function iteration
(VFI) is inferior to portfolio weight iteration (PWI), when a Taylor approximation is used. In particular, they report that
the value function iteration produces highly inaccurate solutions when risk aversion is high and the investment horizon is long.
We argue that the reason for the deterioration of VFI is the high nonlinearity of the value function, and illustrate that if
one uses a natural and economically motivated transformation of the value function, namely the certainty equivalent, the VFI approach produces very accurate results.
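For CRRA utility u(W) = W^(1−γ)/(1−γ), the certainty-equivalent transform simply inverts the utility, mapping the highly curved value function back to the wealth scale. A minimal sketch (with arbitrary wealth and risk-aversion numbers) is:

```python
def crra_utility(w, gamma):
    # CRRA utility u(W) = W^(1-gamma) / (1-gamma), gamma != 1
    return w ** (1 - gamma) / (1 - gamma)

def certainty_equivalent(v, gamma):
    # invert the utility: CE(V) = ((1-gamma) * V)^(1 / (1-gamma));
    # CE lives on the wealth scale and is far less nonlinear than V
    return ((1 - gamma) * v) ** (1 / (1 - gamma))

gamma, w = 10.0, 1.5          # high risk aversion (arbitrary numbers)
v = crra_utility(w, gamma)    # a tiny negative, highly curved quantity
ce = certainty_equivalent(v, gamma)
```

Interpolating or Taylor-expanding `ce` rather than `v` avoids the near-singular curvature that degrades VFI at high γ, which is the point the abstract makes.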
12.
Brad C. Davis P. Thomas Fletcher Elizabeth Bullitt Sarang Joshi 《International Journal of Computer Vision》2010,90(2):255-266
Regression analysis is a powerful tool for the study of changes in a dependent variable as a function of an independent regressor
variable, and in particular it is applicable to the study of anatomical growth and shape change. When the underlying process
can be modeled by parameters in a Euclidean space, classical regression techniques (Härdle, Applied Nonparametric Regression,
1990; Wand and Jones, Kernel Smoothing, 1995) are applicable and have been studied extensively. However, recent work suggests that attempts to describe anatomical shapes
using flat Euclidean spaces undermine our ability to represent natural biological variability (Fletcher et al., IEEE Trans. Med. Imaging 23(8), 995–1005,
2004; Grenander and Miller, Q. Appl. Math. 56(4), 617–694, 1998).
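The classical Euclidean baseline the abstract refers to is kernel regression of the Nadaraya-Watson kind; a minimal sketch (Gaussian kernel, textbook estimator, not the manifold-valued method) is:

```python
import math

def nadaraya_watson(x, xs, ys, h):
    # kernel-weighted average with a Gaussian kernel and bandwidth h
    ws = [math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in xs]
    return sum(w * y for w, y in zip(ws, ys)) / sum(ws)

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.0, 1.0, 4.0, 9.0, 16.0]   # y = x^2 sampled on a grid
est = nadaraya_watson(2.0, xs, ys, h=0.5)
```

The estimator averages the observed responses; this averaging step is exactly what fails to make sense when the responses live on a curved shape space rather than in a vector space.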
13.
Computing the duplication history of a tandem repeated region is an important problem in computational biology (Fitch in Genetics
86:623–644, 1977; Jaitly et al. in J. Comput. Syst. Sci. 65:494–507, 2002; Tang et al. in J. Comput. Biol. 9:429–446, 2002). In this paper, we design a polynomial-time approximation scheme (PTAS) for the case where the size of the duplication block
is 1. Our PTAS is faster than the previously best PTAS in Jaitly et al. (J. Comput. Syst. Sci. 65:494–507, 2002). For example, to achieve a ratio of 1.5, our PTAS takes O(n^5) time while the PTAS in Jaitly et al. (J. Comput. Syst. Sci. 65:494–507, 2002) takes O(n^11) time. We also design a ratio-6 polynomial-time approximation algorithm for the case where the size of each duplication block
is at most 2. This is the first polynomial-time approximation algorithm with a guaranteed ratio for this case.
Part of this work was done during a visit by Z.-Z. Chen to City University of Hong Kong.
14.
Constantin Popescu 《Electronic Commerce Research》2018,18(2):339-358
In this paper we present a secure and efficient transaction protocol that provides anonymity and can detect double spending. The proposed payment system is based on the ElGamal encryption scheme, the ElGamal signature scheme and the ElGamal blind signature protocol. We show that our transaction protocol is secure and efficient. We give the definitions of unlinkability and unforgeability in our security model, and we prove that the proposed transaction protocol is unforgeable and satisfies the unlinkability property. We show that the proposed system is more efficient, in terms of computation and communication cost, than the compared payment systems (Eslami et al. in Electron Commer Res Appl 10:59–66, 2011; Chen et al. in Electron Commer Res Appl 10:279–287, 2011; Liu et al. in Proceedings of the second European PKI workshop: research and applications. Lecture notes in computer science, vol 3545, pp 206–214, 2005; and Chen et al. in Electron Commer Res Appl 10:673–682, 2011) for a customer who withdraws and spends an e-coin and for the merchant who verifies an electronic coin. The proposed e-cash system is also useful for electronic transactions when the connection between the bank and the merchant is not available during the payment protocol. This reduces the bandwidth required by the payment protocol and thus increases the speed of the electronic transaction.
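One of the building blocks named above, the ElGamal signature scheme, can be sketched with deliberately insecure textbook-sized parameters (this illustrates only the underlying primitive, not the paper's payment protocol):

```python
def egcd(a, b):
    # extended Euclid: returns (g, u, v) with a*u + b*v = g = gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, u, v = egcd(b, a % b)
    return g, v, u - (a // b) * v

def modinv(a, m):
    g, u, _ = egcd(a % m, m)
    assert g == 1, "a must be invertible mod m"
    return u % m

# Textbook-sized parameters -- far too small to be secure.
p, g = 467, 2
x = 127                # signer's private key
y = pow(g, x, p)       # public key

def sign(m, k):
    # k must be secret, fresh, and coprime to p - 1
    r = pow(g, k, p)
    s = (m - x * r) * modinv(k, p - 1) % (p - 1)
    return r, s

def verify(m, r, s):
    # accept iff y^r * r^s == g^m (mod p)
    return pow(y, r, p) * pow(r, s, p) % p == pow(g, m, p)

r, s = sign(100, 213)
```

The blind-signature variant used for e-coins adds blinding factors so the bank signs a withdrawal without seeing the coin, which is what yields unlinkability.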
15.
Kajal T. Claypool Elke A. Rundensteiner 《The VLDB Journal The International Journal on Very Large Data Bases》2005,14(3):300-317
The traditional model of bench (wet) chemistry in many life sciences domains is today actively complemented by computer-based
discoveries utilizing the growing number of online data sources. A typical computer-based discovery scenario for many life scientists includes the creation of local caches of pertinent information from multiple online resources,
such as Swiss-Prot [Nucleic Acids Res. 28(1), 45–48 (2000)], PIR [Nucleic Acids Res. 28(1), 41–44 (2000)] and PDB [The Protein DataBank. Wiley, New York (2003)], to enable efficient data analysis. This local caching
of data, however, exposes their research and eventual results to the problems of data staleness, that is, cached data may
quickly be obsolete or incorrect, dependent on the updates that are made to the source data. This represents a significant
challenge to the scientific community, forcing scientists to be continuously aware of the frequent changes made to public
data sources, and more importantly aware of the potential effects on their own derived data sets during the course of their
research. To address this significant challenge, in this paper we present an approach for handling update propagation between
heterogeneous databases, guaranteeing data freshness for scientists irrespective of their choice of data source and its underlying
data model or interface. We propose a middle-layer–based solution wherein first the change in the online data source is translated to a sequence of changes in the middle-layer;
next each change in the middle-layer is propagated through an algebraic representation of the translation between the source
and the target; and finally the net-change is translated to a set of changes that are then applied to the local cache. In
this paper, we present our algebraic model that represents the mapping of the online resource to the local cache, as well
as our adaptive propagation algorithm that can incrementally propagate both schema and data changes from the source to the
cache in a data model independent manner. We present a case study based on a joint ongoing project with our collaborators
in the Chemistry Department at UMass-Lowell to explicate our approach.
16.
Lakshmi Manasa Shankara Narayanan Krishna Chinmay Jain 《Theory of Computing Systems》2011,48(3):648-679
Weighted timed automata (WTA), introduced in Alur et al. (Proceedings of HSCC’01, LNCS, vol. 2034, pp. 49–62, Springer, Berlin,
2001) and Behrmann et al. (Proceedings of HSCC’01, LNCS, vol. 2034, pp. 147–161, Springer, Berlin, 2001), are an extension of the timed automata of Alur and Dill (Theor. Comput. Sci. 126(2):183–235, 1994), a widely accepted formalism for the modelling and verification of real time systems. Weighted timed automata
extend timed automata by allowing costs on the locations and edges. There has been a lot of interest (Bouyer et al., Inf. Process.
Lett. 98(5):188–194, 2006; Bouyer et al., Log. Methods Comput. Sci. 4(2):9, 2008; Brihaye et al., Proceedings of FORMATS/FTRTFT’04, LNCS, vol. 3253, pp. 277–292, Springer, Berlin, 2004; Brihaye et al., Inf. Comput. 204(3):408–433, 2006) in studying the model checking problem of weighted timed automata. The properties of interest are written using the logic weighted
CTL (WCTL), an extension of CTL with costs. It has been shown by Bouyer et al. (Log. Methods Comput. Sci. 4(2):9, 2008) that the problem of model checking WTAs with a single clock using WCTL with no external cost variables is decidable, while 3 clocks render the problem undecidable (Bouyer et al., Inf. Process. Lett. 98(5):188–194, 2006). The question of 2 clocks is open. In this paper, we introduce a subclass of weighted timed automata called weighted integer
reset timed automata (WIRTA) and study the model checking problem. We give a clock reduction technique for WIRTA. Given a
WIRTA A with n≥1 clocks, we show that a single-clock WIRTA A′ preserving the paths and costs of A can be obtained. This gives us the decidability of model checking WIRTA with n≥1 clocks and m≥1 costs using WCTL with no external cost variables. We then show that for a restricted version of WCTL with external cost variables, the model checking problem is undecidable for WIRTA with 3 stopwatch costs and 1 clock. Finally, we show that model checking
WTA with 2 clocks and 1 stopwatch cost against WCTL with no external cost variables is undecidable, thereby answering a question that has long remained open.
17.
The steel mill slab design problem from the CSPLIB is a combinatorial optimization problem motivated by an application of the steel industry. It has been widely studied in
the constraint programming community. Several methods were proposed to solve this problem. A steel mill slab library was created
which contains 380 instances. A closely related bin-packing problem, called the multiple knapsack problem with color constraints,
originating from the same industrial problem, was discussed in the integer programming community. In particular, a simple integer
program for this problem has been given by Forrest et al. (INFORMS J Comput 18:129–134, 2006). The aim of this paper is to bring these different studies together. Moreover, we adapt the model of Forrest et al. (INFORMS
J Comput 18:129–134, 2006) for the steel mill slab design problem. Using this model and a state-of-the-art integer program solver, all instances of the steel mill slab library can be solved efficiently to optimality. We thereby improved the solution values
of 76 instances compared to previous results (Schaus et al., Constraints 16:125–147, 2010). Finally, we consider a recently introduced variant of the steel mill slab design problem where, among all solutions which
minimize the leftover, one is interested in a solution which requires a minimum number of slabs. For that variant we introduce
two approaches and solve all instances of the steel mill slab library with this slightly changed objective function to optimality.
18.
In this paper, we propose a unified primal-dual algorithm framework for two classes of problems that arise from various signal
and image processing applications. We also show the connections to existing methods, in particular Bregman iteration (Osher
et al., Multiscale Model. Simul. 4(2):460–489, 2005) based methods, such as linearized Bregman (Osher et al., Commun. Math. Sci. 8(1):93–111, 2010; Cai et al., SIAM J. Imag. Sci. 2(1):226–252, 2009, CAM Report 09-28, UCLA, March 2009; Yin, CAAM Report, Rice University, 2009) and split Bregman (Goldstein and Osher, SIAM J. Imag. Sci., 2, 2009). The convergence of the general algorithm framework is proved under mild assumptions. The applications to ℓ1 basis pursuit, TV-L2 minimization and matrix completion are demonstrated. Finally, the numerical examples show that the proposed algorithms are easy
to implement, efficient, stable and flexible enough to cover a wide variety of applications.
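A key elementwise operation in such ℓ1/Bregman-type methods is shrinkage (soft-thresholding); a generic sketch of it (not the paper's full primal-dual framework) is:

```python
def shrink(v, t):
    # soft-thresholding: the closed-form minimizer of t*|u| + 0.5*(u - v)^2
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

# small entries are set exactly to zero, large ones are shrunk toward it
denoised = [shrink(v, 0.5) for v in [2.0, 0.3, -1.2, -0.1]]
```

Because the operator zeroes out small coefficients exactly, iterating it inside a splitting scheme is what produces the sparse solutions sought in basis pursuit and TV minimization.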
19.
Image restoration has been an active research topic and variational formulations are particularly effective in high quality
recovery. Although there exist many modelling and theoretical results, available iterative solvers are not yet robust in solving
such modeling equations. Recent attempts on developing optimisation multigrid methods have been based on first order conditions.
Different from this idea, this paper proposes to use piecewise linear function spanned subspace correction to design a multilevel
method for directly solving the total variation minimisation. Our method appears to be more robust than the primal-dual method
(Chan et al., SIAM J. Sci. Comput. 20(6), 1964–1977, 1999) previously found reliable. Supporting numerical results are presented.
20.
Susanne Albers 《Algorithmica》2010,58(2):461-477
We study web caching with request reordering. The goal is to maintain a cache of web documents so that a sequence of requests
can be served at low cost. To improve cache hit rates, a limited reordering of requests is allowed. Feder et al. (Proceedings
of the 13th ACM–SIAM Symposium on Discrete Algorithms, pp. 104–105, 2002), who recently introduced this problem, considered caches of size 1, i.e. a cache can store one document. They presented
an offline algorithm based on dynamic programming as well as online algorithms that achieve constant factor competitive ratios.
For arbitrary cache sizes, Feder et al. (Theor. Comput. Sci. 324:201–218, 2004) gave online strategies that have nearly optimal competitive ratios in several cost models.
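The classical no-reordering baseline that the reordering model relaxes is ordinary LRU paging; a minimal sketch (function name and uniform miss cost are illustrative) is:

```python
from collections import OrderedDict

def lru_misses(requests, k):
    # classical LRU paging with cache size k and no request reordering;
    # returns the number of cache misses (uniform cost model)
    cache = OrderedDict()
    misses = 0
    for doc in requests:
        if doc in cache:
            cache.move_to_end(doc)         # hit: mark most recently used
        else:
            misses += 1
            if len(cache) == k:
                cache.popitem(last=False)  # evict the least recently used
            cache[doc] = True
    return misses

m = lru_misses(["a", "b", "a", "c", "b", "a"], k=2)
```

Allowing a limited reordering window would let the server group the repeated requests for "a" together and serve more of them as hits, which is exactly the lever the abstract's model studies.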