Similar literature
20 matching records found.
1.
The paper concerns policies for sequencing material through a flexible manufacturing system to meet desired production goals for each part type. The authors demonstrate by examples that cyclic material flow and certain distributed scheduling policies can lead to instability, in the sense that the required buffer levels are unbounded. This can be the case even when the set-up times for changing part types are zero. Sufficient conditions are then derived under which a class of distributed policies is stable. Finally, a general supervisory mechanism is presented which will stabilize any scheduling policy (i.e. maintain bounded buffer sizes at all machines) while satisfying the desired production rates.

2.
It is well known from linear systems theory that an integral control law is needed for asymptotic set-point regulation under parameter perturbations. The note presents a similar result for a class of nonlinear systems in the presence of an unknown equilibrium due to uncertain nonlinearities and dynamic uncertainties. Both partial state and output feedback cases are considered. Sufficient small-gain type conditions are identified for the existence of linear and nonlinear control laws. A procedure for robust nonlinear integral controller design is presented and illustrated via a practical example of fan speed control.

3.
This paper presents an approach to couple path planning and control for mobile robot navigation in a hybrid control framework. We build upon an existing hybrid control approach called sequential composition, in which a set of feedback control policies are prescribed on well-defined domains contained in the robot’s free space. Each control policy drives the robot to a goal set, which lies in the domain of a subsequent policy. Control policies are deployed into the free state space so that when composed among one another, the overall action of the set of control policies drives the robot to perform a task, such as moving from a start to a goal location or patrolling a perimeter. A planner determines the sequence of control policies to be invoked. When control policies defined in this framework respect the low-level dynamics and kinematics of the system, this formal approach either guarantees that high-level tasks are accomplished by a given set of policies, or verifies that the tasks are not achievable with the given policies.

4.
The problem of adaptive dual control of discrete-time distributed-parameter stochastic systems is examined. It is shown that there exists an important difference between feedback and closed-loop policies of control for this type of system, as in the lumped-parameter case. This difference is based on the adaptivity feature of the control. Namely, when the control policy affects both the state and its uncertainty (dual effect), it possesses the so-called feature of active adaptivity and can only be a characteristic of a closed-loop policy, whereas a feedback policy can only be passively adaptive. These results can be used to develop a control algorithm for non-linear problems for which the realization of optimal control laws involves control strategies with both learning and control features.

5.
A method is presented for designing a full state feedback linear control law that will ensure the robust stability and performance of a given linear uncertain system. The systems under consideration are described by state equations that depend on uncertain parameters. These uncertain parameters may be time varying. Their values are constrained to lie within known compact bounding sets. The method is based on the guaranteed cost control concept of Chang and Peng (1972). The controller gains result from the solution of a Riccati equation in which the weighting matrices depend on the uncertainty bounds. Sufficient conditions for the existence of a solution arise from the standard LQG control theory.
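As a rough illustration of this style of design, the sketch below solves a continuous-time algebraic Riccati equation in which the state weighting is inflated by an assumed uncertainty bound; the plant matrices, the bound, and the weights are all hypothetical and not taken from the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Nominal double-integrator plant (hypothetical example).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Guaranteed-cost-style design sketch: inflate the state weight by a term
# reflecting an assumed uncertainty bound, then solve the Riccati equation.
Q_nominal = np.eye(2)
uncertainty_bound = 0.5            # assumed bound on the uncertain parameters
Q = Q_nominal + uncertainty_bound * np.eye(2)
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)    # state-feedback gain, u = -K x

# The closed-loop matrix A - B K should be Hurwitz (eigenvalues in the
# open left half-plane).
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
assert np.all(closed_loop_eigs.real < 0)
```

In the paper's setting the weighting matrices are derived from the uncertainty bounds themselves; the additive inflation above only stands in for that dependence.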

6.
Uncertainty in motion planning is often caused by three main sources: motion error, sensing error, and an imperfect environment map. Despite the significant effect of all three sources of uncertainty on motion planning problems, most planners take into account only one, or at most two, of them. We propose a new motion planner, called Guided Cluster Sampling (GCS), that takes into account all three sources of uncertainty for robots with active sensing capabilities. GCS uses the Partially Observable Markov Decision Process (POMDP) framework and the point-based POMDP approach. Although point-based POMDPs have shown impressive progress over the past few years, they perform poorly when the environment map is imperfect. This poor performance is due to the extremely high-dimensional state space, which translates to an extremely large belief space B. We alleviate this problem by constructing a more suitable sampling distribution, based on the observations that when the robot has active sensing capability, B can be partitioned into a collection of much smaller subspaces, and that an optimal policy can often be generated by sufficient sampling of a small subset of the collection. Utilizing these observations, GCS samples B in two stages: a subspace is sampled from the collection, and then a belief is sampled from the subspace. It uses information from the set of sampled subspaces and sampled beliefs to guide subsequent sampling. Simulation results on marine robotics scenarios suggest that GCS can generate reasonable policies for motion planning problems with uncertain motion, sensing, and environment map that are unsolvable by the best point-based POMDP solvers today. Furthermore, GCS handles POMDPs with continuous state, action, and observation spaces. We show that for a class of POMDPs that often occur in robot motion planning, given enough time, GCS converges to the optimal policy.
To the best of our knowledge, this is the first convergence result for point-based POMDPs with continuous action spaces.

7.
The authors consider multiple part-type single-machine manufacturing systems subject to failures. They provide necessary and sufficient conditions for the optimality of just-in-time (JIT) manufacturing, and further show that the necessary and sufficient conditions coincide when the backlog costs for all the parts are equal. In general, finding optimal hedging policies for multiple part-type problems is extremely difficult. They provide bounds on the optimal hedging points and also prove that, under an optimal hedging policy, the buffer levels will be upper bounded.

8.
Most identification methods rely on the assumption that the input is known exactly. However, when collecting data under an identification experiment it may not be possible to avoid noise when measuring the input signal. In the paper some different ways to identify systems from noisy data are discussed. Sufficient conditions for identifiability are given. Also accuracy properties and the computational requirements are discussed. A promising approach is to treat the measured input and output signals as outputs of a multivariable stochastic system. If a prediction error method is applied using this approach the system will be identifiable under mild conditions.

9.
The goal of dialogue management in a spoken dialogue system is to take actions based on observations and inferred beliefs. To ensure that the actions optimize the performance or robustness of the system, researchers have turned to reinforcement learning methods to learn policies for action selection. To derive an optimal policy from data, the dynamics of the system is often represented as a Markov Decision Process (MDP), which assumes that the state of the dialogue depends only on the previous state and action. In this article, we investigate whether constraining the state space by the Markov assumption, especially when the structure of the state space may be unknown, truly affords the highest reward. In simulation experiments conducted in the context of a dialogue system for interacting with a speech-enabled web browser, models under the Markov assumption did not perform as well as an alternative model which classifies the total reward with accumulating features. We discuss the implications of the study as well as its limitations.
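For contrast with the alternative model, the MDP route the article describes can be sketched as value iteration on a toy dialogue MDP; the states, actions, transition probabilities, and rewards below are invented for illustration and are much simpler than the article's system.

```python
import numpy as np

# Toy dialogue MDP (hypothetical numbers).
# States: 0 = "unclear request", 1 = "clear request", 2 = "done" (terminal).
# Actions: 0 = "ask clarification", 1 = "execute command".
n_states, n_actions = 3, 2
P = np.zeros((n_actions, n_states, n_states))  # P[a, s, s'] transition probs
R = np.zeros((n_actions, n_states))            # expected immediate reward

P[0, 0] = [0.2, 0.8, 0.0]; R[0, 0] = -1.0   # clarifying usually helps
P[0, 1] = [0.0, 1.0, 0.0]; R[0, 1] = -1.0   # redundant clarification
P[1, 0] = [0.0, 0.0, 1.0]; R[1, 0] = -5.0   # executing a guess often fails
P[1, 1] = [0.0, 0.0, 1.0]; R[1, 1] = 10.0   # executing a clear request
P[:, 2, 2] = 1.0                            # terminal state absorbs

gamma = 0.95
V = np.zeros(n_states)
for _ in range(500):                        # value iteration
    Q = R + gamma * (P @ V)                 # Q[a, s]
    V = Q.max(axis=0)
policy = Q.argmax(axis=0)
# In this toy model the optimal policy clarifies first, then executes.
```

The Markov assumption is exactly what lets the whole computation run on `P[a, s, s']`; the article's point is that when the true state structure is unknown, this convenience can cost reward.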

10.
Decentralized guaranteed cost control for T-S fuzzy interconnected systems with parameter uncertainties
The problem of decentralized guaranteed cost control for T-S fuzzy interconnected systems with parameter uncertainties is studied. For all admissible parameter uncertainties of the given system, a decentralized state-feedback guaranteed cost controller is designed such that the closed-loop system is not only asymptotically stable but also has an appropriate upper bound on its performance index. Based on the linear matrix inequality (LMI) approach, sufficient conditions for the existence of the decentralized controller are given. A simulation example demonstrates the simplicity and effectiveness of the proposed method.

11.
In model-based networked control systems (MB-NCSs), an explicit model of the plant is used to produce an estimate of the plant state behavior between transmission times. In this paper, the stability of MB-NCSs is studied when the controller/actuator is updated with the sensor information at non-constant time intervals. Networked control systems are studied whose transmission times either vary within a time interval or are driven by a stochastic process, with independent identically distributed and Markov-chain-driven transmission times. Sufficient conditions for Lyapunov stability are derived. For stochastically modeled transmission times, sufficient conditions for almost-sure and mean-square stability are introduced.
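The kind of loop being analyzed can be sketched numerically as follows. The plant, the (deliberately mismatched) model, the gain, and the transmission schedule are hypothetical stand-ins, and a fixed update period is used instead of the paper's time-varying or stochastic transmission times.

```python
import numpy as np

# Sketch of a model-based networked control loop (hypothetical matrices).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])          # true plant (discrete time)
A_model = np.array([[1.0, 0.1],
                    [0.0, 0.98]])   # slightly mismatched plant model
B = np.array([[0.0],
              [0.1]])
K = np.array([[2.0, 3.0]])          # stabilizing state-feedback gain

x = np.array([[1.0], [0.5]])        # true state
x_hat = x.copy()                    # controller's model-based estimate
update_period = 5                   # sensor data arrives every 5 steps

norms = []
for k in range(200):
    if k % update_period == 0:
        x_hat = x.copy()            # transmission: estimate reset to state
    u = -K @ x_hat                  # control computed from the estimate
    x = A @ x + B @ u               # true plant evolves
    x_hat = A_model @ x_hat + B @ u # model propagates between transmissions
    norms.append(float(np.linalg.norm(x)))
```

Between transmissions the estimation error grows with the model mismatch; the paper's conditions bound how large the transmission intervals may be before this error destabilizes the loop.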

12.
Collaboration among organizations or individuals is common. While these participants are often unwilling to share all their information with each other, some information sharing is unavoidable when achieving a common goal. The need to share information and the desire to keep it confidential are two competing notions which affect the outcome of a collaboration. This paper proposes a formal model of collaboration which addresses confidentiality concerns. We draw on the notion of a plan, which originates in the AI literature. We use data confidentiality policies to assess confidentiality in transition systems whose actions have an equal number of predicates in their pre- and post-conditions. Under two natural notions of policy compliance, we show that it is PSPACE-complete to schedule a plan leading from a given initial state to a desired goal state while simultaneously deciding compliance with respect to the agents' policies.

13.
The authors address the problem of disturbance attenuation with internal stability via output feedback for a class of passive systems with uncertainties. The problem is approached by means of adaptive output feedback control which does not require any state observer. The results obtained extend an earlier result of Steinberg and Corless (1985). Sufficient conditions are proposed under which a nonlinear system can be made locally or globally passive via output feedback.

14.
Sufficient conditions for the null controllability of discrete-time systems with control constraints and state saturation are presented. This is accomplished in two stages: First, we establish a result for the asymptotic stability of such systems under zero input conditions. Next, we utilize this stability result to obtain a condition for the null controllability of such systems. We include specific examples to demonstrate the applicability of the present results.
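A toy instance of the first stage, the zero-input stability question: discrete-time dynamics with componentwise state saturation. The matrix and initial state are hypothetical, and the contraction condition used here (row sums of |A| below one) is only a simple textbook sufficient condition, not the paper's.

```python
import numpy as np

def sat(v, limit=1.0):
    """Componentwise state saturation to [-limit, limit]."""
    return np.clip(v, -limit, limit)

# Hypothetical system matrix; each row of |A| sums to 0.7 < 1, so the
# zero-input saturated dynamics contract in the infinity norm.
A = np.array([[0.4, 0.3],
              [0.2, 0.5]])
x = np.array([5.0, -7.0])     # starts outside the saturation region

for _ in range(100):
    x = sat(A @ x)            # zero-input evolution with state saturation

assert np.linalg.norm(x) < 1e-6   # state converges to the origin
```

Since saturation never increases the infinity norm, the linear contraction argument survives the nonlinearity here; the paper's conditions handle the general constrained-control case.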

15.
Sufficient conditions for input-to-state stability of switched systems
ZHAO Jun, NIE Hong. Acta Automatica Sinica (《自动化学报》), 2003, 29(2): 252-257
Input-to-state stability of switched systems is studied. Under the condition that every subsystem is input-to-state stable, the class-KL and class-K functions required for the switched system are constructed from the class-KL and class-K functions of the individual subsystems, giving sufficient conditions for input-to-state stability of the switched system. For a common class of switched systems, a lower bound on the dwell time that guarantees input-to-state stability is computed. All results are constructive.

16.
The stabilization of linear continuous-time systems with time delay in the state and subject to saturating controls is addressed. Sufficient conditions obtained via a linear matrix inequality (LMI) formulation are stated to guarantee both the local stabilization and the satisfaction of some performance requirements. The method of synthesis consists in determining simultaneously a state feedback control law and an associated domain of safe admissible states for which the stability of the closed-loop system is guaranteed when control saturations effectively occur.

17.
A simulation optimization algorithm for CTMDPs based on randomized stationary policies
Based on Markov performance potential theory and neuro-dynamic programming (NDP), the simulation optimization problem for a class of continuous-time Markov decision processes (CTMDPs) under randomized stationary policies is studied. The proposed algorithm transforms the continuous-time process into its uniformized Markov chain, and then estimates the gradient of the average-cost performance measure with respect to the policy parameters from a single sample path, in order to search for a suboptimal policy. The method is suitable for performance optimization of systems with large state spaces. A numerical example of a controlled Markov process is given.

18.
With the development of policy management systems, policy-based management has been introduced into cross-domain organization collaborations and system integrations. Theoretically, cross-domain policy enforcement is possible, but in reality different systems from different organizations or domains have very different high-level policy representations and low-level enforcement mechanisms, such as security policies and privacy configurations. To ensure the compatibility and enforceability of one policy set in another domain, a simulation environment is needed prior to actual policy deployment and enforcement code development. In most cases, we have to manually write enforcement code for all organizations or domains involved in every collaboration activity, which is a huge task. The goal of this paper is to propose an enforcement architecture and develop a simulation framework for cross-domain policy enforcement. The entire environment is used to simulate the problem of enforcing policies across domain boundaries when permanent or temporary collaborations have to span multiple domains. The middleware derived from this simulation environment can also be used to generate policy enforcement components directly for permanent integration or temporary interaction. This middleware provides various functions to enforce policies automatically or semi-automatically across domains, such as collecting the policies of each participant domain in a new collaboration, generating policy models for each domain, and mapping specific policy rules following these models to the different enforcement mechanisms of the participant domains.

19.
A decentralized control scheme is proposed for stabilization of interconnected systems consisting of arbitrarily connected, linear, time-invariant multivariable subsystems. Sufficient conditions are given for an interconnected system to be stabilized using only local state feedback. The obtained results are illustrated by an example.
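A minimal numerical sketch of the idea (assuming an identity input matrix and hypothetical coupling strengths, not the paper's setup): two coupled scalar subsystems, each stabilized purely by local state feedback, with stability checked through the spectral radius of the interconnected closed loop.

```python
import numpy as np

# Two coupled discrete-time scalar subsystems, x+ = A x + u, with the
# input matrix taken as identity for simplicity (hypothetical numbers).
A = np.array([[1.2, 0.1],    # subsystem 1, weakly coupled to subsystem 2
              [0.05, 1.1]])  # subsystem 2, weakly coupled to subsystem 1

k1, k2 = 1.0, 0.9            # local gains: u_i = -k_i * x_i, no shared state
K = np.diag([k1, k2])        # decentralized feedback is block-diagonal

A_cl = A - K                 # closed loop under purely local feedback
spectral_radius = max(abs(np.linalg.eigvals(A_cl)))
assert spectral_radius < 1   # the interconnected closed loop is stable
```

The block-diagonal structure of `K` is the whole point: each controller sees only its own subsystem's state, and stability must survive the off-diagonal coupling in `A`.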

20.
We consider linear control systems under uncertainties. For such systems we solve the problem of constructing worst-case feedback control policies that are allowed to be corrected at m fixed intermediate time moments. We propose two types of approximate control policies. Both guarantee that, for all admissible uncertainties, the terminal system state lies in a prescribed neighborhood of a given state x* at a given final moment, and that the value of the cost function does not exceed a given estimate. It is shown that computation of the estimate for each policy is equivalent to solving a corresponding convex mathematical programming (MP) problem with m decision variables. Based on the solution of the MP problem, we derive simple explicit rules (which can easily be implemented on-line) for constructing the corresponding control policy in the original control problem. Copyright © 2009 John Wiley & Sons, Ltd.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号