Similar Documents
20 similar documents found (search time: 31 ms)
1.
This study examined the detection of the Braess Paradox by stable dynamics in general congested transportation networks. Stable dynamics, proposed by Nesterov and de Palma (2003), is a model that describes a stable state of congestion in urban transportation networks. In comparison with the user equilibrium model, which analyzes transportation networks through arc travel time functions, stable dynamics requires few parameters and agrees with intuition and observation about congestion. It is therefore expected to be a useful analysis tool for transportation planners. The Braess Paradox is the phenomenon whereby increasing network capacity, for example by adding new routes (arcs), may decrease the network's performance. It has been studied intensively under user equilibrium models with arc travel time functions since it was first demonstrated by Braess (1968). However, a general model to detect the Braess Paradox under stable dynamics remains an open problem. In this study, we suggest a model to detect the paradox in general networks under stable dynamics. With our model, we determine whether the Braess Paradox can occur in a given network, detect Braess arcs or Braess crosses if the network permits the paradox, and present a numerical example applying the model to a given network.
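The detection model in the paper is formulated for stable dynamics; as a concrete illustration of the paradox itself, here is a minimal Python sketch of Braess's classic (1968) instance under user equilibrium, not the authors' stable-dynamics model. The network and numbers are the standard textbook example.

```python
# Classic Braess (1968) network: demand d from s to t over two routes,
# each consisting of a congestible arc (time x/100) and a fixed arc (45).
d = 4000.0

# Without the bypass: by symmetry the user equilibrium splits demand evenly.
x = d / 2
cost_without = x / 100 + 45          # 20 + 45 = 65 minutes per driver

# With a zero-time bypass joining the two congestible arcs, the route
# x/100 + 0 + x/100 is always at least as fast, so at equilibrium all
# drivers use it and both congestible arcs carry the full demand.
cost_with = d / 100 + d / 100        # 40 + 40 = 80 minutes per driver
# Check it is an equilibrium: deviating to an old route costs 40 + 45 = 85.

print(f"equilibrium time without bypass: {cost_without:.0f} min")
print(f"equilibrium time with bypass:    {cost_with:.0f} min")
# The added arc raised every driver's time from 65 to 80 minutes:
# it is a 'Braess arc' in the paper's terminology.
```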

2.
We study the mathematical modeling and numerical simulation of the motion of red blood cells (RBC) and vesicles subject to an external incompressible flow in a microchannel. RBC and vesicles are viscoelastic bodies consisting of a deformable elastic membrane enclosing an incompressible fluid. We extend the finite element immersed boundary method of Boffi and Gastaldi (Comput Struct 81:491–501, 2003) and Boffi et al. (Math Mod Meth Appl Sci 17:1479–1505, 2007; Comput Struct 85:775–783, 2007) with a membrane model that additionally accounts for bending energy, and we also consider inflow/outflow conditions for the external fluid flow. The stability analysis requires both the approximation of the membrane by cubic splines (instead of the linear splines used without bending energy) and an upper bound on the inflow velocity. In the fully discrete case, the resulting CFL-type condition on the time step size is also more restrictive. We perform numerical simulations for various scenarios including the tank-treading motion of vesicles in microchannels, the behavior of 'healthy' and 'sick' RBC, which differ in their stiffness, and the motion of RBC through thin capillaries. The simulation results are in very good agreement with experimentally available data.
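The immersed boundary method couples the membrane and the fluid by spreading membrane forces onto the fluid grid and interpolating fluid velocities back to the membrane points. A minimal 1D sketch of that coupling step, using Peskin's standard cosine kernel; this illustrates the general method, not the authors' finite element variant:

```python
import numpy as np

def peskin_delta(r, h):
    """Peskin's cosine kernel: a regularized delta with support |r| < 2h."""
    return (1 + np.cos(np.pi * r / (2 * h))) / (4 * h) if abs(r) < 2 * h else 0.0

def spread(F, X, x, h):
    """Spread Lagrangian point forces F at positions X to a grid force density."""
    return np.array([sum(Fk * peskin_delta(xi - Xk, h) for Fk, Xk in zip(F, X))
                     for xi in x])

def interpolate(u, x, X, h):
    """Interpolate the grid velocity u back to the membrane points X."""
    return np.array([sum(ui * peskin_delta(xi - Xk, h) * h for ui, xi in zip(u, x))
                     for Xk in X])

x = np.linspace(0.0, 1.0, 101)   # Eulerian fluid grid
h = x[1] - x[0]
X = [0.503]                      # one membrane point, deliberately off-grid

f = spread([1.0], X, x, h)
print(f"total spread force: {f.sum() * h:.4f} (should be close to 1)")

U = interpolate(np.sin(2 * np.pi * x), x, X, h)
print(f"interpolated velocity: {U[0]:+.4f} vs exact {np.sin(2*np.pi*X[0]):+.4f}")
```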

3.
In recent macro models with staggered price and wage setting, variables such as relative price and wage dispersion are prevalent, and they are a source of bifurcations. In this paper, we illustrate how to detect the existence of a bifurcation in stylized macroeconomic models with Calvo (J Monet Econ 12(3):383–398, 1983) pricing. Following the general approach of Judd (Numerical methods in economics, 1998), we employ l'Hospital's rule to characterize the first-order dynamics of relative price distortion in terms of its higher-order derivatives. We also show that, as is usual practice in the literature, the bifurcation can be eliminated through renormalization of model variables. Furthermore, we demonstrate that the second-order approximate solutions under this renormalization and under the bifurcation can differ significantly.
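As a toy illustration of the l'Hospital step (resolving a 0/0 limit that pins down a first-order coefficient), here is a sketch in sympy; the expression below is invented for illustration and is not the paper's price-dispersion term:

```python
import sympy as sp

x = sp.symbols('x')
# A 0/0 expression at x = 0, standing in for the ratio whose limit pins
# down the first-order dynamics near the bifurcation (invented example).
expr = (sp.exp(x) - 1 - x) / x**2

# sympy resolves the indeterminate form; by l'Hospital's rule, two
# differentiations of numerator and denominator give exp(x)/2 -> 1/2.
print(sp.limit(expr, x, 0))   # 1/2
```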

4.
This paper presents and discusses definitions, typologies and models of cooperation or competition between human operators, and applies them to analyze the cooperative and competitive activities of car drivers. It pays special attention to the so-called Benefit-Cost-Deficit model, which analyzes cooperation or competition between human operators in terms of both positive and negative consequences. The application of this model to assessing car drivers' activities focuses on three interacting organizational levels: the coordination between drivers governed by the Highway Code, the road infrastructure on which the drivers move, and the traffic flow.

5.
Train driving is primarily a visual task; train drivers are required to monitor the dynamic scene visually both outside and inside the train cab. Poor performance on this visual task may lead to errors, such as signals passed at danger. It is therefore important to understand the visual strategies that train drivers employ when monitoring and searching the visual scene for key items, such as signals. Prior to this investigation, a pilot study had already been carried out using an eye tracking technique to investigate train drivers' visual behaviour and to collect data on driver monitoring of the visual environment (Groeger et al. (2003) Pilot study of train drivers' eye movements, University of Surrey). However, a larger set of data was needed to understand train driver visual behaviour and strategies more fully. In light of this need, the Transport Research Laboratory produced a methodology for the assessment of UK train driver visual strategies on behalf of the Rail Safety and Standards Board, and applied this methodology in a large-scale trial. The study collected a wealth of data on train drivers' visual behaviour with the aim of providing a greater understanding of the strategies adopted. The corneal dark-eye tracking system chosen for these trials tracks human visual search and scanning patterns, and was fitted to 86 drivers whilst they drove in-service trains. Data collected include the duration and frequency of glances made towards different elements of the visual scene. In addition, the train drivers were interviewed after driving the routes, to try to understand the thought processes behind the behaviour observed. Statistical analysis of over 600 signal approaches was conducted. This analysis revealed that signal aspect, preceding signal aspect, signal type and signal complexity are important factors affecting the visual behaviour of train drivers. The interview data revealed that driver expectation also plays a significant role in train driving. The findings of this study have implications for the rail industry in terms of infrastructure design, design of the driving task and driver training. However, train driving is extremely complex, and the data from this study only begin to describe and explain train driver visual strategies in the specific context of signal approaches. The study has provided a wealth of data, and further analysis is needed to systematically investigate the role of other factors and the complex relationships between factors during signal approaches and other driving situations. Finally, there are important aspects of visual behaviour that cannot be examined using these data or this method; investigation of other aspects, such as peripheral vision, will require other methods such as simulation.

6.
Predicting air damping is crucial in the design of high-Q microelectromechanical systems. In the past, the air damping of a rigid microbeam in free space in the molecular regime has usually been estimated using the free molecular model proposed by Christian (Vacuum 16:175–178, 1966), while the air damping of a flexible microbeam has been estimated using the model proposed by Blom (J Vac Sci Technol B 10:19–26, 1992). The relation between the two models is Q_Blom = 3·Q_Christian. In this paper, a general proof is presented showing that Christian's model is valid for the air damping of a flexible microbeam in free space in the molecular regime. By comparing with the experimental results available in the literature (Blom et al. in J Vac Sci Technol B 10:19–26, 1992; Yasumura et al. in J Microelectromech Syst 9:117–125, 2000), we conclude that Christian's model is the best choice for predicting the air damping of a flexible microbeam in free space in the molecular regime.
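The abstract's quantitative content is the factor-of-three relation between the two quality-factor predictions. The sketch below shows how that relation is used; Christian's formula is written in a commonly quoted free-molecular form, which is an assumption of this sketch rather than something stated in the abstract, as are the numerical values:

```python
import math

def q_christian(rho, t, omega, p, T=300.0, M=28.96e-3, R=8.314):
    """Free-molecular quality factor of a beam (a commonly quoted form of
    Christian's model; treat the exact prefactor as this sketch's assumption).
    rho: density [kg/m^3], t: thickness [m], omega: [rad/s], p: pressure [Pa]."""
    return rho * t * omega / (4 * p) * math.sqrt(math.pi * R * T / (2 * M))

def q_blom(rho, t, omega, p, **kw):
    """Blom's flexible-beam model via the stated relation Q_Blom = 3*Q_Christian."""
    return 3 * q_christian(rho, t, omega, p, **kw)

# Illustrative silicon cantilever: 2 um thick, 100 kHz, at 1 Pa.
Qc = q_christian(rho=2330.0, t=2e-6, omega=2 * math.pi * 1e5, p=1.0)
print(f"Q_Christian = {Qc:.3g}, Q_Blom = {3 * Qc:.3g}")
# The paper's claim: for flexible beams in free space in the molecular
# regime, experiments favor Q_Christian rather than 3*Q_Christian.
```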

7.
In this paper we introduce a minimax model unifying several classes of single-facility planar center location problems. We assume that the transportation costs of the demand points to the serving facility are convex functions {Q_i}, i = 1,…,n, of the planar distance used. Moreover, these functions, when properly transformed, give rise to piecewise quadratic functions of the coordinates of the facility location. In the continuous case, using results on LP-type models by Clarkson (J. ACM 42:488–499, 1995), Matoušek et al. (Algorithmica 16:498–516, 1996), and the derandomization technique in Chazelle and Matoušek (J. Algorithms 21:579–597, 1996), we claim that the model is solvable deterministically in linear time. We also show that in the separable case, one can get a direct O(n log n) deterministic algorithm, based on Dyer (Proceedings of the 8th ACM Symposium on Computational Geometry, 1992), to find an optimal solution. In the discrete case, where the location of the center (server) is restricted to some prespecified finite set, we introduce deterministic subquadratic algorithms based on the general parametric approach of Megiddo (J. ACM 30:852–865, 1983) and on properties of upper envelopes of collections of quadratic arcs. We apply our methods to solve and improve the complexity of a number of other location problems in the literature, and solve some new models in linear or subquadratic time complexity.
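As a numeric illustration of the model itself (minimize the maximum of convex transportation costs Q_i of planar distance), here is a hedged sketch with a generic solver and toy data; the paper's contribution is the deterministic linear-time and subquadratic algorithms, which this does not reproduce:

```python
import numpy as np
from scipy.optimize import minimize

# Demand points and convex costs Q_i(d) = w_i * d**2 (a toy choice; any
# convex, properly transformed Q_i fits the paper's framework).
pts = np.array([[0.0, 0.0], [4.0, 0.0], [1.0, 3.0], [5.0, 4.0]])
w = np.array([1.0, 2.0, 1.5, 1.0])

def worst_cost(x):
    d = np.linalg.norm(pts - x, axis=1)
    return np.max(w * d**2)              # minimax objective of the model

# Nelder-Mead, since the max of smooth functions is not differentiable
# everywhere; fine for a toy instance of this size.
res = minimize(worst_cost, x0=pts.mean(axis=0), method="Nelder-Mead")
print("center:", res.x.round(3), "minimax cost:", round(res.fun, 3))
```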

8.
Li P, Banerjee S, McBean AM (GeoInformatica 2011, 15(3):435–454)
Statistical models for areal data are primarily used for smoothing maps to reveal spatial trends. Subsequent interest often resides in the formal identification of 'boundaries' on the map. Here, boundaries refer to 'difference boundaries', representing significant differences between adjacent regions. Recently, Lu and Carlin (Geogr Anal 37:265–285, 2005) discussed a Bayesian framework for edge detection employing a spatial hierarchical model estimated using Markov chain Monte Carlo (MCMC) methods. Here we offer an alternative that avoids MCMC and is easier to implement. Our approach resembles a model comparison problem, where the models correspond to different underlying edge configurations across which we wish to smooth (or not). We incorporate these edge configurations in spatially autoregressive models and demonstrate how the Bayesian Information Criterion (BIC) can be used to detect difference boundaries in the map. We illustrate our methods with a Minnesota pneumonia and influenza hospitalization dataset, eliciting the boundaries detected by the different models.
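BIC-based boundary detection amounts to comparing models that do or do not smooth across a candidate edge. A minimal sketch of that comparison for a single edge, using Gaussian likelihoods as a toy stand-in for the paper's spatially autoregressive models:

```python
import numpy as np

def bic(loglik, k, n):
    """Bayesian Information Criterion: lower is better."""
    return k * np.log(n) - 2 * loglik

def gauss_loglik(y, mu, sigma):
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2) - (y - mu)**2 / (2 * sigma**2))

# Toy rates in two adjacent regions.
rng = np.random.default_rng(0)
y_a = rng.normal(10.0, 1.0, size=30)
y_b = rng.normal(13.0, 1.0, size=30)
y, sigma = np.concatenate([y_a, y_b]), 1.0

# Model 0: no boundary, one common mean smooths across the edge.
b0 = bic(gauss_loglik(y, y.mean(), sigma), k=1, n=len(y))
# Model 1: a difference boundary, separate means on each side.
b1 = bic(gauss_loglik(y_a, y_a.mean(), sigma)
         + gauss_loglik(y_b, y_b.mean(), sigma), k=2, n=len(y))

print("difference boundary" if b1 < b0 else "smooth across the edge",
      f"(BIC: {b0:.1f} vs {b1:.1f})")
```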

9.
Weighted timed automata (WTA), introduced in Alur et al. (Proceedings of HSCC'01, LNCS, vol. 2034, pp. 49–62, Springer, Berlin, 2001) and Behrmann et al. (Proceedings of HSCC'01, LNCS, vol. 2034, pp. 147–161, Springer, Berlin, 2001), are an extension of the timed automata of Alur and Dill (Theor. Comput. Sci. 126(2):183–235, 1994), a widely accepted formalism for the modelling and verification of real-time systems. Weighted timed automata extend timed automata by allowing costs on locations and edges. There has been a lot of interest (Bouyer et al. in Inf. Process. Lett. 98(5):188–194, 2006; Bouyer et al. in Log. Methods Comput. Sci. 4(2):9, 2008; Brihaye et al. in Proceedings of FORMATS/FTRTFT'04, LNCS, vol. 3253, pp. 277–292, Springer, Berlin, 2004; Brihaye et al. in Inf. Comput. 204(3):408–433, 2006) in studying the model checking problem for weighted timed automata. The properties of interest are written in the logic weighted CTL (WCTL), an extension of CTL with costs. It has been shown (Bouyer et al. in Log. Methods Comput. Sci. 4(2):9, 2008) that model checking WTA with a single clock using WCTL with no external cost variables is decidable, while 3 clocks render the problem undecidable (Bouyer et al. in Inf. Process. Lett. 98(5):188–194, 2006). The question of 2 clocks is open. In this paper, we introduce a subclass of weighted timed automata called weighted integer reset timed automata (WIRTA) and study their model checking problem. We give a clock reduction technique for WIRTA. Given a WIRTA A with n ≥ 1 clocks, we show that a single-clock WIRTA A′ preserving the paths and costs of A can be obtained. This gives us the decidability of model checking WIRTA with n ≥ 1 clocks and m ≥ 1 costs using WCTL with no external cost variables. We then show that for a restricted version of WCTL with external cost variables, the model checking problem is undecidable for WIRTA with 3 stopwatch costs and 1 clock. Finally, we show that model checking WTA with 2 clocks and 1 stopwatch cost against WCTL with no external cost variables is undecidable, thereby answering a long-open question.

10.
The notion of P-simple points was introduced by Bertrand to conceive parallel thinning algorithms. In 'A 3D fully parallel thinning algorithm for generating medial faces' (Pattern Recogn. Lett. 16:83–87, 1995), Ma proposed an algorithm for which there exist objects whose topology is not preserved. In this paper, we propose a new application of P-simple points: automatically correcting Ma's algorithm.
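P-simple points generalize simple points, i.e., points whose deletion preserves topology. For intuition, here is a sketch of the classical 2D crossing-number test used in parallel thinning (e.g., in Zhang–Suen-style conditions); the paper's setting, 3D P-simple points, is substantially more involved:

```python
def is_simple_2d(nbhd):
    """Crossing-number test for an 8-connected foreground pixel.
    nbhd: its 8 neighbors in clockwise circular order, as 0/1 values.
    The pixel is (locally) deletable iff the circular sequence has
    exactly one 0 -> 1 transition, so deletion neither splits a
    component nor creates a hole."""
    if sum(nbhd) in (0, 8):
        return False
    transitions = sum(nbhd[i] == 0 and nbhd[(i + 1) % 8] == 1 for i in range(8))
    return transitions == 1

print(is_simple_2d([1, 0, 0, 0, 0, 0, 0, 0]))  # True: a line endpoint
print(is_simple_2d([1, 0, 1, 0, 0, 0, 0, 0]))  # False: deletion splits branches
```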

11.
This study takes a step further in the computation of the transition path of the continuous-time endogenous growth model discussed by Privileggi (Nonlinear dynamics in economics, finance and social sciences: essays in honour of John Barkley Rosser Jr., Springer, Berlin, Heidelberg, pp. 251–278, 2010)—based on the setting first introduced by Tsur and Zemel (J Econ Dyn Control 31:3459–3477, 2007)—in which knowledge evolves according to the Weitzman (Q J Econ 113:331–360, 1998) recombinant process. A projection method, based on the least squares of the residual function corresponding to the ODE defining the optimal policy of the 'detrended' model, allows for the numerical approximation of that policy for a range of values, of positive Lebesgue measure, of the efficiency parameter characterizing the probability function of the recombinant process. Although the projection method's performance degenerates rapidly as one departs from a benchmark value of the efficiency parameter, we are able to compute numerically time-path trajectories that are sufficiently regular to allow for sensitivity analysis under changes in parameter values.
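A projection method approximates the unknown policy with a finite basis and picks coefficients by least squares on the residual of the defining ODE at collocation points. A self-contained toy with the test ODE y' = -y and a polynomial basis, not the paper's growth model:

```python
import numpy as np
from scipy.optimize import least_squares

# Approximate y on [0, 1] by y(x) = 1 + sum_k c_k x^(k+1), so y(0) = 1
# holds by construction; the test ODE is y' + y = 0 (exact: exp(-x)).
xs = np.linspace(0.0, 1.0, 50)                 # collocation points

def y(c, x):
    return 1.0 + sum(ck * x**(k + 1) for k, ck in enumerate(c))

def dy(c, x):
    return sum((k + 1) * ck * x**k for k, ck in enumerate(c))

def residual(c):
    return dy(c, xs) + y(c, xs)                # ODE residual at the nodes

sol = least_squares(residual, x0=np.zeros(6))  # least squares of the residual
err = np.max(np.abs(y(sol.x, xs) - np.exp(-xs)))
print(f"max error against exp(-x): {err:.2e}")
```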

12.
The theory of analog computation aims at modeling computational systems that evolve in a continuous space. Unlike the discrete setting, there is no unified theory of analog computation; several theories have been proposed, some of them seemingly quite orthogonal to one another. Some of these theories can be considered generalizations of Turing machine theory and classical recursion theory. Among them are recursive analysis and Moore's class of recursive real functions. Recursive analysis was introduced by Turing (Proc Lond Math Soc 2(42):230–265, 1936), Grzegorczyk (Fundam Math 42:168–202, 1955), and Lacombe (Compt Rend l'Acad Sci Paris 241:151–153, 1955). Real computation in this context is viewed as effective (in the sense of Turing machine theory) convergence of sequences of rational numbers. In 1996, Moore introduced a function algebra that captures his notion of real computation; it consists of some basic functions and their closure under composition, integration and zero-finding. Though this class is inherently unphysical, much work has been directed at stratifying, restricting, and comparing it with other theories of real computation such as recursive analysis and the GPAC. In this article we give a detailed exposition of recursive analysis and Moore's class and the relationships between them.
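In recursive analysis, a real number is computable when a machine can output rationals converging to it effectively, i.e., within 2^-n of the target on input n. A minimal sketch for sqrt(2) by bisection over rationals:

```python
from fractions import Fraction

def sqrt2_approx(n):
    """Return a rational q with |q - sqrt(2)| <= 2**-n, by bisection.
    The point of 'effective' convergence: the precision is guaranteed
    from the input n alone, not observed after the fact."""
    lo, hi = Fraction(1), Fraction(2)          # sqrt(2) lies in [1, 2]
    while hi - lo > Fraction(1, 2**n):
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return lo

q = sqrt2_approx(20)
print(q, "=", float(q))    # a rational within 2**-20 of sqrt(2)
```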

13.
In this paper we present new control algorithms for robots with dynamics described in terms of quasi-velocities (Kozłowski, Identification of articulated body inertias and decoupled control of robots in terms of quasi-coordinates. In: Proc. of the 1996 IEEE International Conference on Robotics and Automation, pp. 317–322. IEEE, Piscataway, 1996a; Zeitschrift für Angewandte Mathematik und Mechanik 76(S3):479–480, 1996c; Robot control algorithms in terms of quasi-coordinates. In: Proc. of the 34th Conference on Decision and Control, pp. 3020–3025, Kobe, 11–13 December 1996, 1996d). The equations of motion are written using spatial quantities such as spatial velocities, accelerations, forces, and articulated body inertia matrices (Kozłowski, Standard and diagonalized Lagrangian dynamics: a comparison. In: Proc. of the 1995 IEEE Int. Conf. on Robotics and Automation, pp. 2823–2828. IEEE, Piscataway, 1995b; Rodriguez and Kreutz, Recursive Mass Matrix Factorization and Inversion, An Operator Approach to Open- and Closed-Chain Multibody Dynamics, pp. 88–11. JPL, Dartmouth, 1998). The forward dynamics algorithms incorporate new control laws in terms of normalized quasi-velocities. Two cases are considered: end-point trajectory tracking and trajectory tracking in general. It is shown that by properly choosing the Lyapunov function candidate, a dynamic system with appropriate feedback can be made asymptotically stable and follow the desired trajectory in the task space. All of the control laws have a new architecture in the sense that they are derived in the so-called quasi-velocity and quasi-force space, and at any instant of time generalized positions and forces can be recovered from O(N)-order recursions, where N denotes the number of degrees of freedom of the manipulator. This paper also proposes a sliding mode control, originally introduced by Slotine and Li (Int J Rob Res 6(3):49–59, 1987), which has been extended here to sliding mode control in the quasi-velocity and quasi-force space. Experimental results illustrate the behavior of the new control schemes and show the potential of the approach in the quasi-velocity and quasi-force space.
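For the sliding-mode ingredient, here is a minimal 1-DOF sketch in the Slotine–Li style the abstract builds on: sliding surface s = de + lam*e and a switching law plus model feedforward. The plant and gains are toy choices, not the paper's quasi-velocity formulation:

```python
import numpy as np

# Toy 1-DOF plant m*qdd = u, tracking q_d(t) = sin(t).
m, lam, k, dt = 1.0, 5.0, 8.0, 1e-3
q, qd = 0.5, 0.0

for step in range(int(5.0 / dt)):
    t = step * dt
    q_des, qd_des, qdd_des = np.sin(t), np.cos(t), -np.sin(t)
    e, ed = q - q_des, qd - qd_des
    s = ed + lam * e                       # sliding surface s = 0
    # Feedforward plus switching term gives s' = -(k/m)*sign(s): the state
    # reaches s = 0 in finite time, after which e decays like exp(-lam*t).
    u = m * (qdd_des - lam * ed) - k * np.sign(s)
    qd += (u / m) * dt
    q += qd * dt

print(f"tracking error after 5 s: {q - np.sin(5.0):+.5f}")
```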

14.
Microfluidic systems are increasingly popular for rapid and cheap enzyme assays and other biochemical analyses. In this study, reduced order models (ROM) were developed for the optimization of enzymatic assays performed in a microchip. The model enzyme assay used was β-galactosidase (β-Gal), which catalyzes the conversion of resorufin β-D-galactopyranoside (RBG) to a fluorescent product, as previously reported by Hadd et al. (Anal Chem 69(17):3407–3412, 1997). The assay was implemented in a microfluidic device as a continuous-flow system controlled electrokinetically and with a fluorescence detection device. The results from the ROM agreed well with both computational fluid dynamics (CFD) simulations and experimental values. While the CFD model allowed for assessment of local transport phenomena, the CPU time was significantly reduced by the ROM approach. The operational parameters of the assay were optimized using the validated ROM to significantly reduce the amount of reagents consumed and the total biochip assay time. After optimization, the analysis time would be reduced from 20 to 5.25 min, which would also result in a 50% reduction in reagent consumption.
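A reduced order model replaces the full CFD transport problem with a low-dimensional surrogate; for an enzyme assay the core kinetics are Michaelis–Menten. A zero-dimensional (well-mixed) sketch in that spirit; the kinetic constants are illustrative, not the paper's:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Michaelis-Menten kinetics: dS/dt = -Vmax * S / (Km + S). Treating the
# assay plug as well mixed collapses the CFD problem to one ODE in the
# substrate concentration, which is the spirit of a reduced order model.
Vmax, Km, S0 = 1.0e-6, 120e-6, 50e-6      # illustrative values (M/s, M, M)

def rhs(t, y):
    S = y[0]
    return [-Vmax * S / (Km + S)]

sol = solve_ivp(rhs, (0.0, 315.0), [S0])  # 5.25 min = 315 s assay window
converted = 1.0 - sol.y[0, -1] / S0
print(f"substrate converted after 5.25 min: {100 * converted:.1f}%")
```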

15.
Dynamic Traffic Assignment with More Flexible Modelling within Links
Traffic network models tend to become very large even for medium-size static assignment problems. Adding a time dimension, together with time-varying flows and travel times within links and queues, greatly increases the scale and complexity of the problem. In view of this, to retain tractability in dynamic traffic assignment (DTA) formulations, especially mathematical programming formulations, additional assumptions are normally introduced. In particular, the time-varying flows and travel times within links are formulated as so-called whole-link models. We consider the most commonly used of these whole-link models and some of their limitations. In current whole-link travel-time models, a vehicle's travel time on a link is treated as a function only of the number of vehicles on the link at the moment the vehicle enters. We first relax this by letting a vehicle's travel time depend on the inflow rate when it enters and the outflow rate when it exits. We further relax the dynamic assignment formulation by stating it as a bi-level program, consisting of a network model and a set of link travel time sub-models, one for each link. The former (the network model) takes the link travel times as bounded and assigns flows to links and routes. The latter (the set of link models) does the reverse: it takes link inflows as given and finds bounds on link travel times. We solve this combined model by iterating between the network model and the link sub-models until a consistent solution is found. This decomposition allows a much wider range of link flow or travel time models to be used. In particular, the link travel time models need not be whole-link models and can be detailed models of flow, speed and density varying along the link. In our numerical examples, algorithms designed to solve this bi-level program converged quickly, but much remains to be done in exploring this approach further. The algorithms for solving the bi-level formulation may be interpreted as traveller learning behaviour, hence as day-to-day traffic dynamics. Thus, even though in our experiments the algorithms always converged, their behaviour would still be of interest even if they cycled rather than converged. Directions for further research are noted. The bi-level model can be extended to handle issues and features similar to those addressed by other DTA models.
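The bi-level scheme alternates a network assignment step (given travel times) with link sub-models (given inflows) until the two agree. A toy two-route sketch of that fixed-point iteration; the logit assignment and linear link sub-models are assumptions of this sketch:

```python
import numpy as np

demand = 100.0
t0 = np.array([10.0, 12.0])    # free-flow travel times, two parallel links
a = np.array([0.08, 0.05])     # congestion slopes of the link sub-models
theta = 0.5                    # logit dispersion parameter

t = t0.copy()
for it in range(200):
    # Network model: assign flows to routes given current travel times.
    p = np.exp(-theta * t) / np.exp(-theta * t).sum()
    flow = demand * p
    # Link sub-models: recompute travel times given the assigned inflows.
    t_new = t0 + a * flow
    if np.max(np.abs(t_new - t)) < 1e-8:   # consistent solution reached
        break
    t = 0.5 * t + 0.5 * t_new              # damped update aids convergence

print(f"converged in {it} iterations: times {t.round(3)}, flows {flow.round(2)}")
```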

16.
Programming robot behavior remains a challenging task. While it is often easy to abstractly define or even demonstrate a desired behavior, designing a controller that embodies the same behavior is difficult, time consuming, and ultimately expensive. The machine learning paradigm offers the promise of enabling "programming by demonstration" for developing high-performance robotic systems. Unfortunately, many "behavioral cloning" approaches (Bain and Sammut in Machine intelligence agents. London: Oxford University Press, 1995; Pomerleau in Advances in neural information processing systems 1, 1989; LeCun et al. in Advances in neural information processing systems 18, 2006) that utilize classical tools of supervised learning (e.g. decision trees, neural networks, or support vector machines) do not fit the needs of modern robotic systems. These systems are often built atop sophisticated planning algorithms that efficiently reason far into the future; consequently, ignoring these planning algorithms in favor of a supervised learning approach often leads to myopic, poor-quality robot performance. While planning algorithms have shown success in many real-world applications ranging from legged locomotion (Chestnutt et al. in Proceedings of the IEEE-RAS international conference on humanoid robots, 2003) to outdoor unstructured navigation (Kelly et al. in Proceedings of the international symposium on experimental robotics (ISER), 2004; Stentz et al. in AUVSI's unmanned systems, 2007), such algorithms rely on fully specified cost functions that map sensor readings and environment models to quantifiable costs. Such cost functions are usually manually designed and programmed. Recently, a set of techniques has been developed to learn these functions from expert human demonstration. These algorithms apply an inverse optimal control approach to find a cost function for which planned behavior mimics an expert's demonstration. The work we present extends the Maximum Margin Planning (MMP) framework (Ratliff et al. in Twenty-second international conference on machine learning (ICML06), 2006a) to admit learning of more powerful, non-linear cost functions. These algorithms, known collectively as LEARCH (LEArning to seaRCH), are simpler to implement than most existing methods, more efficient than previous attempts at non-linearization (Ratliff et al. in NIPS, 2006b), more naturally satisfy common constraints on the cost function, and better represent our prior beliefs about the function's form. We derive and discuss the framework both mathematically and intuitively, and demonstrate practical real-world performance with three applied case studies including legged locomotion, grasp planning, and autonomous outdoor unstructured navigation. The latter study includes hundreds of kilometers of autonomous traversal through complex natural environments. These case studies address key challenges in applying the algorithm in practical settings that utilize state-of-the-art planners and that may be constrained by efficiency requirements and imperfect expert demonstration.
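Inverse optimal control of this kind iterates: plan with the current cost function, compare the planned path with the expert's demonstration, and update costs so the demonstration looks cheaper. A heavily simplified grid-world sketch of that loop with a plain additive update on tabular costs; LEARCH proper learns costs as functions of features with exponentiated functional-gradient updates:

```python
import heapq
import numpy as np

def dijkstra(cost, start, goal):
    """Cheapest 4-connected path on a grid of per-cell entry costs."""
    H, W = cost.shape
    dist, prev, pq = {start: 0.0}, {}, [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, np.inf):
            continue
        for du, dv in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            v = (u[0] + du, u[1] + dv)
            if 0 <= v[0] < H and 0 <= v[1] < W and d + cost[v] < dist.get(v, np.inf):
                dist[v], prev[v] = d + cost[v], u
                heapq.heappush(pq, (dist[v], v))
    path, u = [goal], goal
    while u != start:
        u = prev[u]
        path.append(u)
    return path[::-1]

H = W = 8
cost = np.ones((H, W))
# Expert demonstration: hug the top row, then the right column.
demo = [(0, j) for j in range(W)] + [(i, W - 1) for i in range(1, H)]

lr = 0.2
for _ in range(50):
    planned = dijkstra(cost, (0, 0), (H - 1, W - 1))
    for cell in planned:                  # planned-but-not-demonstrated: dearer
        cost[cell] += lr
    for cell in demo:                     # demonstrated: cheaper (floored)
        cost[cell] = max(0.1, cost[cell] - lr)

print(dijkstra(cost, (0, 0), (H - 1, W - 1)))  # now follows the demonstration
```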

17.
In several works, Buckley (Soft Comput 9:512–518, 2005a; Soft Comput 9:769–775, 2005b; Fuzzy statistics, Springer, Heidelberg, 2005c) has introduced and developed an approach to the estimation of unknown parameters in statistical models. In this paper, we introduce an improved method for the estimation of parameters in cases where Buckley's approach presents drawbacks, for example when the underlying statistic has a non-symmetric distribution.
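In Buckley's approach, as commonly presented, the alpha-cuts of the fuzzy estimator are the (1-alpha)*100% confidence intervals, stacked to form a fuzzy number. A sketch for a normal mean with known sigma; treat the construction details as this sketch's assumptions, since they vary across the cited works:

```python
import numpy as np
from scipy.stats import norm

def fuzzy_mean_cuts(sample, sigma, alphas=(0.01, 0.33, 0.66, 1.0)):
    """Alpha-cuts of a Buckley-style fuzzy estimator of a normal mean:
    the cut at level alpha is the (1 - alpha)*100% confidence interval,
    so alpha = 1 gives the point estimate (the core of the fuzzy number)."""
    xbar, n = np.mean(sample), len(sample)
    cuts = []
    for a in alphas:
        c = 1.0 - a                        # confidence level of this cut
        z = norm.ppf(0.5 + c / 2.0)
        half = z * sigma / np.sqrt(n)
        cuts.append((a, xbar - half, xbar + half))
    return cuts

rng = np.random.default_rng(1)
sample = rng.normal(5.0, 2.0, size=40)
for a, lo, hi in fuzzy_mean_cuts(sample, sigma=2.0):
    print(f"alpha = {a:.2f}: [{lo:.3f}, {hi:.3f}]")
```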

18.
A critical problem in software development is the monitoring, control and improvement of the processes of software developers. Software processes are often not explicitly modeled, and manuals to support the development work contain abstract guidelines and procedures. Consequently, there are huge differences between 'actual' and 'official' processes: "the actual process is what you do, with all its omissions, mistakes, and oversights. The official process is what the book, i.e., a quality manual, says you are supposed to do" (Humphrey in A discipline for software engineering. Addison-Wesley, New York, 1995). Software developers lack support to identify, analyze and better understand their processes. Consequently, process improvements are often not based on an in-depth understanding of the 'actual' processes, but on organization-wide improvement programs or ad hoc initiatives of individual developers. In this paper, we show that, based on particular data from software development projects, the underlying software development processes can be extracted and that more realistic process models can be constructed automatically. This is called software process mining (Rubin et al. in Process mining framework for software processes. Software process dynamics and agility. Springer, Berlin, Heidelberg, 2007). The goal of process mining is to better understand development processes, to compare constructed process models with the 'official' guidelines and procedures in quality manuals and, subsequently, to improve development processes. This paper reports on process mining case studies in a large industrial company in the Netherlands. The subject of the process mining is a particular process: the change control board (CCB) process. The results of the process mining are fed back into practice in order to subsequently improve the CCB process.
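At its core, process mining reconstructs a model from an event log of (case, activity, timestamp) records; the simplest mined artifact is the directly-follows graph. A minimal sketch on an invented CCB-like log (real tools add filtering and frequency thresholds):

```python
from collections import Counter

# Event log as (case id, activity), ordered by timestamp within each case
# (an invented CCB-like log for illustration).
log = [
    (1, "submit CR"), (1, "CCB review"), (1, "approve"), (1, "implement"),
    (2, "submit CR"), (2, "CCB review"), (2, "reject"),
    (3, "submit CR"), (3, "CCB review"), (3, "approve"), (3, "implement"),
]

traces = {}
for case, act in log:
    traces.setdefault(case, []).append(act)

# Count directly-follows pairs within each case.
dfg = Counter((a, b) for t in traces.values() for a, b in zip(t, t[1:]))
for (a, b), n in dfg.most_common():
    print(f"{a} -> {b}: {n}")
# The mined 'actual' graph can now be compared with the 'official'
# CCB procedure in the quality manual.
```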

19.
The potential flow equations which govern the free-surface motion of an ideal fluid (the water wave problem) are notoriously difficult to solve, for a number of reasons. First, they are a classical free-boundary problem, where the domain shape is one of the unknowns to be found. Additionally, they are strongly nonlinear (with derivatives appearing in the nonlinearity) and lack a natural dissipation mechanism, so that spurious high-frequency modes are not damped. In this contribution we address the latter of these difficulties using a surface formulation (which addresses the former complication), supplemented with physically motivated viscous effects recently derived by Dias et al. (Phys. Lett. A 372:1297–1302, 2008). The novelty of our approach is to derive a weakly nonlinear model from the surface formulation of Zakharov (J. Appl. Mech. Tech. Phys. 9:190–194, 1968) and Craig and Sulem (J. Comput. Phys. 108:73–83, 1993), complemented with the viscous effects mentioned above. Our new model is simple to implement while being both faithful to the physics of the problem and extremely stable numerically.
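The viscous correction of Dias et al. damps each Fourier mode of the surface: in the linearized setting, mode k decays at rate 2*nu*k^2 while oscillating at the deep-water frequency omega(k) = sqrt(g*k). A sketch of that mode-wise damping (linearized, one wave branch only; the paper's model is weakly nonlinear):

```python
import numpy as np

g, nu = 9.81, 1e-2            # gravity; effective viscosity (illustrative)
N, L, t = 256, 100.0, 30.0
x = np.linspace(0.0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.rfftfreq(N, d=L / N)
omega = np.sqrt(g * k)        # deep-water dispersion relation

eta0_hat = np.fft.rfft(np.exp(-((x - L / 2) / 2.0) ** 2))  # initial bump

# One wave branch: each mode oscillates at omega(k); the viscous terms of
# Dias et al. (2008) add the decay factor exp(-2*nu*k**2*t).
eta_inviscid = np.fft.irfft(eta0_hat * np.exp(1j * omega * t), n=N)
eta_viscous = np.fft.irfft(eta0_hat * np.exp((1j * omega - 2 * nu * k**2) * t), n=N)

ratio = np.abs(eta_viscous).max() / np.abs(eta_inviscid).max()
print(f"amplitude ratio (viscous/inviscid) after {t:.0f} s: {ratio:.3f}")
```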

20.
Pursuing our work in Tone (Asymptot. Anal. 51:231–245, 2007) and Tone and Wirosoetisno (SIAM J. Numer. Anal. 44:29–40, 2006), we consider in this article the two-dimensional magnetohydrodynamics equations. We discretize these equations in time using the implicit Euler scheme and, with the aid of the classical and uniform discrete Gronwall lemmas, we prove that the scheme is H²-uniformly stable in time.
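The uniform-in-time stability of implicit Euler is already visible on a scalar dissipative model: u' = -lam*u gives u_{n+1} = u_n/(1 + lam*dt), a contraction for every step size. A minimal sketch of that bound (a scalar toy, not the MHD system):

```python
# Implicit Euler for u' = -lam*u:  (u_new - u)/dt = -lam*u_new,
# hence u_new = u / (1 + lam*dt), a contraction for every dt > 0.
lam, dt, u = 10.0, 0.5, 1.0    # explicit Euler would need dt < 2/lam = 0.2

for n in range(20):
    u = u / (1 + lam * dt)

print(f"|u_20| = {abs(u):.3e}  (bounded uniformly in time for any dt)")
# With the same dt, explicit Euler gives u_new = (1 - lam*dt)*u = -4*u,
# which blows up; the discrete Gronwall argument rules this out here.
```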
