Similar Documents (20 results)
1.
A relevant issue in the domain of natural argumentation and persuasion is the interaction (synergistic or conflicting) between “rational” or “cognitive” modes of persuasion and “irrational” or “emotional” ones. This work provides a model of persuasion in general and of emotional persuasion in particular. We examine two basic modes of appealing to emotions, arguing that emotional persuasion does not necessarily coincide with irrational persuasion, and showing how the appeal to emotions is grounded in the strict and manifold relationship between emotions and goals, which is, so to speak, “exploited” by a persuader. We describe various persuasion strategies, propose a method to formalize and represent them as oriented graphs, and show how emotional and non-emotional strategies (and emotional and non-emotional components within the same strategy) may interact with and strengthen each other. Finally, we address the role of uncertainty in persuasion strategies and show how it can be represented in persuasion graphs.
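The representation of strategies as oriented graphs carrying uncertainty can be sketched roughly as follows. This is a hypothetical toy encoding, not the authors' formalism; the node names and edge probabilities are invented for illustration.

```python
# Sketch: a persuasion strategy as an oriented graph whose edges carry
# the persuader's uncertainty about each step succeeding. An emotional
# branch ("evoke_fear") and a cognitive branch ("cite_evidence") both
# target the hearer's goals, so they can coexist in one strategy.
# All names and numbers below are illustrative, not from the paper.

def strategy_success(graph, start, goal, prob=1.0):
    """Best-path probability that the chain from `start` reaches `goal`,
    multiplying edge uncertainties along the way (graph must be acyclic)."""
    if start == goal:
        return prob
    best = 0.0
    for nxt, p in graph.get(start, []):
        best = max(best, strategy_success(graph, nxt, goal, prob * p))
    return best

graph = {
    "appeal":        [("evoke_fear", 0.6), ("cite_evidence", 0.8)],
    "evoke_fear":    [("activate_goal", 0.9)],
    "cite_evidence": [("activate_goal", 0.7)],
    "activate_goal": [("adopt_intention", 0.5)],
}
p = strategy_success(graph, "appeal", "adopt_intention")
```

Here the cognitive branch happens to dominate (0.8 × 0.7 × 0.5 vs. 0.6 × 0.9 × 0.5), but a richer model could combine the branches rather than take the best one.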

2.
The purpose of this article is to describe how research at the intersection of cognition, technology, and work can be generalized beyond the source context of scientific inquiry and confirmation. Special emphasis is given to resolving confusion about the use of terms such as “ecological validity” and the “real world.” The ultimate goal is to foster a more productive dialog on the merits of where and how research on important cognitive engineering topics, such as cognitive adaptation to change and uncertainty, should be conducted.

3.
We propose a quantum collision model in which the environment is abstractly divided into two hierarchies: an “environment-bus,” which interacts directly with the system, and “environment-stations,” which do not. Based on the model, we investigate the effects of initial system–environment correlations, initial environment states, and various interactions on the dynamics of open quantum systems coupled to such a hierarchical environment. We illustrate that initial quantum correlation between the system and environment leads to a transition from Markovian to non-Markovian dynamics, whereas for initial classical correlation the transition can be confirmed to happen only when couplings, rather than correlations, are present within the environment. In addition, we investigate how the degree of non-Markovianity varies with the initial state of the environment and reveal that the interaction strength between the two environmental hierarchies plays an important role in it. In particular, we show that in such a hierarchically structured environment the degree of non-Markovianity is not equivalent to the memory effects of the environment-stations as a reservoir, due to the presence of the environment-bus.

4.
Over the years, safety in the maritime industries has been reinforced by many state-of-the-art technologies. However, the accident rate has not dropped significantly despite the advanced technology onboard. The main cause of this phenomenon is human error, which drives researchers to study human factors in the maritime domain. One of the key factors contributing to human performance is mental state, such as cognitive workload and stress. In this paper, we propose and implement an Electroencephalogram (EEG)-based psychophysiological evaluation system to be used in maritime virtual simulators for monitoring, training, and assessing seafarers. The system includes an EEG processing part, a visualization part, and an evaluation part. Using the processing part, different brain states, including cognitive workload and stress, can be identified from the raw EEG data recorded during maritime exercises in the simulator. Using the visualization part, the identified brain states, raw EEG signals, and videos recorded during the exercises can be synchronized and displayed together. Using the evaluation part, an indicative recommendation of “pass,” “retrain,” or “fail” for a seafarer's performance can be obtained based on the EEG-based cognitive workload and stress recognition. The system provides a detailed analysis of the demanding events in the maritime tasks for each seafarer, which can be used to improve training. A case study is presented using the proposed system: EEG data from four pilots were recorded while they performed maritime tasks in the simulator, and the data were processed and evaluated. The results show that one pilot received a “pass” recommendation, one a “retrain” recommendation, and the other two “fail” recommendations for their performance in the simulator.
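The pass/retrain/fail logic of the evaluation part might look roughly like the following. This is a toy rule that assumes workload and stress scores have already been extracted from the EEG and normalized to [0, 1]; the thresholds and cutoffs are illustrative, not the paper's.

```python
# Toy sketch of the evaluation part: map per-event workload/stress
# scores for one seafarer to a pass/retrain/fail recommendation.
# Scores are assumed pre-extracted from EEG; all thresholds are invented.

def recommend(workload_scores, stress_scores, high=0.7):
    """Recommendation based on the fraction of demanding events where
    workload or stress exceeded the `high` threshold."""
    overloaded = sum(1 for w, s in zip(workload_scores, stress_scores)
                     if w > high or s > high)
    ratio = overloaded / len(workload_scores)
    if ratio < 0.2:
        return "pass"
    if ratio < 0.5:
        return "retrain"
    return "fail"

# One of four events shows high workload -> ratio 0.25 -> "retrain"
print(recommend([0.3, 0.4, 0.8, 0.2], [0.1, 0.2, 0.3, 0.2]))
```

A real system would of course derive the per-event scores from trained EEG classifiers rather than take them as given.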

5.
6.
We examine carefully the rationale underlying the approaches to belief change taken in the literature, and highlight what we view as methodological problems. We argue that to study belief change carefully, we must be quite explicit about the “ontology” or scenario underlying the belief change process. This is something that has been missing in previous work, with its focus on postulates. Our analysis shows that we must pay particular attention to two issues that have often been taken for granted. The first is how we model the agent's epistemic state. (Do we use a set of beliefs, or a richer structure, such as an ordering on worlds? And if we use a set of beliefs, in what language are these beliefs expressed?) We show that even postulates that have been called “beyond controversy” are unreasonable when the agent's beliefs include beliefs about her own epistemic state as well as the external world. The second is the status of observations. (Are observations known to be true, or just believed? In the latter case, how firm is the belief?) Issues regarding the status of observations arise particularly when we consider iterated belief revision, where we must confront the possibility of revising by φ and then by ¬φ.

7.
Customer churn is a notorious problem for most industries, as the loss of a customer affects revenue and brand image, and acquiring new customers is difficult. Reliable predictive models for customer churn could be useful in devising customer retention plans. We survey and compare some major machine learning techniques that have been used to build predictive customer churn models. Employee churn (or attrition), closely related but not identical to customer churn, is similarly painful for an organization, leading to disruptions, customer dissatisfaction, and time and effort lost in finding and training replacements. We present a case study that we carried out on building and comparing predictive employee churn models. We also propose a simple value model for employees that can be used to identify how many of the churned employees were “valuable.” This work has the potential to support the design of better employee retention plans and improve employee satisfaction.
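The idea of a simple employee value model can be sketched as follows. The fields, weights, and value cutoff are invented for illustration; the paper's actual model is not reproduced here.

```python
# Sketch of a simple employee value model: score each employee from a
# weighted combination of attributes, then count how many of the
# churned employees exceeded a "valuable" cutoff. Weights, fields, and
# the cutoff are illustrative only.

def value(emp, w_perf=0.6, w_tenure=0.4):
    """Weighted score from performance rating and (capped) tenure."""
    return w_perf * emp["performance"] + w_tenure * min(emp["years"] / 10, 1.0)

employees = [
    {"id": 1, "performance": 0.9, "years": 6, "churned": True},
    {"id": 2, "performance": 0.4, "years": 1, "churned": True},
    {"id": 3, "performance": 0.8, "years": 9, "churned": False},
]
valuable_churned = [e["id"] for e in employees
                    if e["churned"] and value(e) >= 0.6]
```

Counting `valuable_churned` against all churned employees gives the kind of "how many of the churned were valuable" figure the abstract describes.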

8.
Blind Quantum Source Separation (BQSS) deals with multi-qubit states, called “mixed states,” obtained by applying an unknown “mixing function” (which typically corresponds to undesired coupling, e.g. between qubits implemented as close electron spins 1/2) to unknown multi-qubit “source states,” which are product states (and pure in the simplest case, considered in this paper). Some other properties may also be required of these source states and/or the mixing function. Using mixed states, BQSS systems aim at restoring (the information contained in) the source states during the second phase of their operation (the “inversion phase”). To this end, they estimate the unmixing function (the inverse of the mixing function) during the first phase of their operation (the “adaptation phase”). Most previously reported BQSS systems first convert mixed states into classical-form data, which they then process with classical means; moreover, they estimate the unmixing function using statistical methods related to classical Independent Component Analysis. By contrast, the new BQSS systems proposed here use only quantum-form data and quantum processing in the inversion phase, and use classical-form data during the adaptation phase only. Moreover, their unmixing function estimation methods are essentially based on using unentangled source states during that phase, and mainly consist of disentangling the output quantum state of the separating system (for a few source states). Afterwards, the systems can also restore entangled source states. They yield major improvements over previous systems concerning the restored source parameters, the associated indeterminacies and approximations, the number of source states required for adaptation, and the numbers of source state preparations in the adaptation and inversion phases. Numerical tests confirm that they accurately restore quantum source states.

9.
We focus on the development of a Lyapunov-based economic model predictive control (LEMPC) method for nonlinear singularly perturbed systems in standard form, which arise naturally in the modeling of two-time-scale chemical processes. A composite control structure is proposed in which a “fast” Lyapunov-based model predictive controller (LMPC), using a quadratic cost function that penalizes the deviation of the fast states from their equilibrium slow manifold and the corresponding manipulated inputs, stabilizes the fast dynamics, while a two-mode “slow” LEMPC design is used on the slow subsystem to address economic considerations as well as desired closed-loop stability properties, utilizing an economic (typically non-quadratic) cost function in its formulation and possibly dictating time-varying process operation. Through a multirate measurement sampling scheme, fast sampling of the fast state variables is used in the fast LMPC while slow sampling of the slow state variables is used in the slow LEMPC. Appropriate stabilizability assumptions are made and suitable constraints are imposed on the proposed control scheme to guarantee closed-loop stability, and singular perturbation theory is used to analyze the closed-loop system. The proposed control method is demonstrated through a nonlinear chemical process example.

10.
Many continuous industrial processes operate in different steady states with different grades or products. The switching between two steady states is called a transition. A transition consists of a series of operation changes that should be carried out in the proper order, within certain magnitudes and time windows. Since faulty operation may lead to an increase in inferior products or even hazardous events, monitoring of the transition is desired. In this work, a transition identification and monitoring scheme is proposed based on slow feature analysis. Two monitoring statistics are proposed, representing the location of the trajectory and the speed of the transition. In addition, operating faults are generated based on the guidewords of hazard and operability (HAZOP) analysis. The effectiveness of the proposed method is validated using a numerical case and the mode 4-to-2 transition of the Tennessee–Eastman process, in which catastrophic failures exist. In addition to the missed detection rate and false alarm rate, two performance indexes known as detection time (DT) and rescue time (RT) are introduced. The advantages of the proposed method are benchmarked against stage-based sub-principal component analysis (sub-PCA) and global preserving statistics slow feature analysis (GSSFA).

11.
We propose an intrinsic developmental algorithm that is designed to allow a mobile robot to incrementally progress through levels of increasingly sophisticated behavior. We believe that the core ingredients for such a developmental algorithm are abstractions, anticipations, and self-motivations. We describe a multilevel, cascaded discovery and control architecture that includes these core ingredients. As a first step toward implementing the proposed architecture, we explore two novel mechanisms: a governor for automatically regulating the training of a neural network and a path-planning neural network driven by patterns of “mental states” that represent protogoals.

12.
Pre-congestion notification (PCN) gives an early warning of congestion by marking packets to protect the quality of service of inelastic flows. PCN defines two rates per link, the admissible rate (AR) and the supportable rate (SR), which divide the PCN traffic load into three states: “no pre-congestion,” “AR pre-congestion,” and “SR pre-congestion.” PCN admission control and flow termination control operate in accordance with these three states. However, only two PCN encoding states, unmarked and PCN-marked, can be used because PCN encoding must survive tunneling through currently used IPsec tunnels. We propose a marking algorithm that uses the two encoding states to distinguish the three states of PCN traffic load. We also propose new admission and flow termination controls based on the proposed marking algorithm, and evaluate their performance. Markings that require fewer PCN encoding states are preferable because the remaining encoding state can be used for a newly added PCN-based control in the future. Furthermore, distinguishing more states with fewer encoding states benefits not only PCN but also general marking techniques, because header fields are limited and thus valuable.
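One way to distinguish three load states with only two encoding states is to mark probabilistically between AR and SR. The following is a rough sketch under that assumption; the paper's actual marking algorithm may differ.

```python
import random

# Sketch: encode three load states with two packet states by marking a
# fraction of packets proportional to how far the load exceeds AR, and
# marking everything at or above SR. The egress then infers the load
# state from the observed marking fraction. Details are illustrative,
# not the paper's exact algorithm.

def mark_packet(load, ar, sr, rng=random.random):
    if load <= ar:
        return False                         # no pre-congestion: never mark
    if load >= sr:
        return True                          # SR pre-congestion: always mark
    return rng() < (load - ar) / (sr - ar)   # AR pre-congestion: partial marking

def infer_state(marked_fraction):
    if marked_fraction == 0.0:
        return "no pre-congestion"
    if marked_fraction < 1.0:
        return "AR pre-congestion"
    return "SR pre-congestion"
```

In practice the egress would use thresholds with some tolerance rather than exact 0 and 1, since marking fractions are measured over finite packet samples.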

13.
A Markov chain model is presented as a tool for analysis of damage caused to an inventory of items which are subject to possibly many movements prior to sale. The model is applied to a case study to describe the nature and extent of such damage. However, possible extensions to other more general inventory-marketing distribution problems are clear. The states “item sold” and “item scrapped” are modelled as absorbing states while the locations of the items “in rework” and “at market” are transient states. Standard Markov chain calculations are made which then describe the expected status of the inventory.
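The standard absorbing-chain calculation referred to here can be sketched as follows: with Q the transient-to-transient block and R the transient-to-absorbing block of the transition matrix, the absorption probabilities are B = (I - Q)^-1 R. The transition probabilities below are illustrative, not from the case study.

```python
# Absorbing Markov chain sketch: transient states ("in rework",
# "at market"), absorbing states ("sold", "scrapped").
# Transition probabilities are invented for illustration.

Q = [[0.1, 0.6],   # from "in rework"  -> (rework, market)
     [0.2, 0.3]]   # from "at market"  -> (rework, market)
R = [[0.1, 0.2],   # from "in rework"  -> (sold, scrapped)
     [0.4, 0.1]]   # from "at market"  -> (sold, scrapped)

def inverse2(m):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

I_minus_Q = [[1 - Q[0][0], -Q[0][1]], [-Q[1][0], 1 - Q[1][1]]]
N = inverse2(I_minus_Q)   # fundamental matrix: expected visits to transient states
B = matmul(N, R)          # B[i][j] = P(absorbed in state j | start in transient state i)
```

Each row of B sums to 1, since every item is eventually sold or scrapped; N itself gives the expected number of movements through each location, i.e. the "expected status of the inventory."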

14.
The Internet has become a critical tool for information and communication that has reached much of the developed world. It comprises numerous components, of which the World Wide Web (WWW) is the largest and fastest growing. Efficient functionality of the WWW depends on a multitude of web servers in operation throughout the world. A web server, in and of itself, is frequently inefficient. We propose several concepts, all of which involve the assimilation of computational intelligence into a web server, that will ultimately improve its performance. A web server would then become a “smart” server that learns during its operation and subsequently adapts its components and behavior to optimize many of its functions and modes of operation. Furthermore, it will be more secure, robust, and reliable via a hierarchical security system. A smart server will ultimately be more efficient, thus providing an enhanced experience for the multitude of people making use of the WWW every day. © 2007 Wiley Periodicals, Inc. Int J Int Syst 22: 1139–1154, 2007.

15.
We humans usually think in words; to represent our opinion about, e.g., the size of an object, it is sufficient to pick one of the few (say, five) words used to describe size (“tiny,” “small,” “medium,” etc.). Indicating which of five words we have chosen takes 3 bits. However, in modern computer representations of uncertainty, real numbers are used to represent this “fuzziness.” A real number takes 10 times more memory to store, and therefore processing a real number takes 10 times longer than it should. Therefore, for computers to reach the ability of a human brain, Zadeh proposed to represent and process uncertainty in the computer by storing and processing the very words that humans use, without translating them into real numbers (he called this idea granularity). If we try to define operations with words, we run into the following problem: e.g., if we define “tiny” + “tiny” as “tiny,” then we will have to make the counter-intuitive conclusion that the sum of any number of tiny objects is also tiny. If we define “tiny” + “tiny” as “small,” we may be overestimating the size. To overcome this problem, we suggest using nondeterministic (probabilistic) operations with words. For example, in the above case, “tiny” + “tiny” is, with some probability, equal to “tiny,” and with some other probability, equal to “small.” We also analyze the advantages and disadvantages of this approach. The main advantage is that we now have granularity and can thus speed up the processing of uncertainty. The main disadvantage is that in some cases, when defining symmetric associative operations for the set of words, we must give up either symmetry or associativity. Luckily, this necessity does not always arise: in some cases, we can define symmetric associative operations. © 1997 John Wiley & Sons, Inc.
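The probabilistic word-addition idea can be sketched as follows; the outcome table and probabilities are invented for illustration.

```python
import random

# Sketch of nondeterministic operations with words: "tiny" + "tiny" is
# "tiny" with some probability and "small" otherwise. The outcome
# distributions below are illustrative.

ADD = {
    ("tiny", "tiny"):  [("tiny", 0.7), ("small", 0.3)],
    ("tiny", "small"): [("small", 0.8), ("medium", 0.2)],
}

def add_words(a, b, rng=random.random):
    """Draw a word-valued sum from the outcome distribution for (a, b).
    The table is looked up symmetrically, so a + b == b + a in distribution."""
    outcomes = ADD.get((a, b)) or ADD[(b, a)]
    r, acc = rng(), 0.0
    for word, p in outcomes:
        acc += p
        if r < acc:
            return word
    return outcomes[-1][0]
```

Because the result is a distribution rather than a single word, repeated sums of tiny objects can eventually drift to "small" or beyond, avoiding the counter-intuitive "any number of tiny objects is tiny" conclusion.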

16.
In this work, we extend the idea of quantum Markov chains (Gudder in J Math Phys 49(7):072105 [3]) in order to propose quantum hidden Markov models (QHMMs). For this, we use the notions of transition operation matrices and vector states, which are extensions of classical stochastic matrices and probability distributions. Our main result is the Mealy QHMM formulation, together with proofs of the algorithms needed to apply this model: Forward for the general case and Viterbi for a restricted class of QHMMs. We show the relations of the proposed model to other quantum HMM propositions and present an example of application.
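For orientation, this is the classical forward recursion that the QHMM Forward algorithm generalizes (a classical chain with parameters invented for illustration; not the quantum formulation itself, where transition operation matrices replace the stochastic matrices below).

```python
# Classical HMM forward algorithm: computes P(observation sequence) by
# propagating alpha[t] = joint probability of the prefix and each hidden
# state. Parameters below are illustrative.

def forward(obs, init, trans, emit):
    """P(obs) under an HMM with initial distribution `init`,
    transition matrix `trans`, and emission matrix `emit`."""
    n = len(init)
    alpha = [init[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[s] * trans[s][t] for s in range(n)) * emit[t][o]
                 for t in range(n)]
    return sum(alpha)

init  = [0.6, 0.4]
trans = [[0.7, 0.3], [0.4, 0.6]]
emit  = [[0.9, 0.1], [0.2, 0.8]]   # emit[state][symbol]
p = forward([0, 1], init, trans, emit)
```

In the QHMM setting, the role of `alpha` is played by (sub-normalized) vector states and the row sums above become applications of transition operation matrices.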

17.
18.
We extend our previous work on the linguistic summarization of time series data, meant as the linguistic summarization of trends, i.e. consecutive parts of the time series that may be viewed as exhibiting uniform behavior under an assumed (degree of) granulation and are identified with straight line segments of a piecewise linear approximation of the time series. We characterize the trends by the dynamics of change, duration, and variability. A linguistic summary of a time series is then viewed as related to a linguistic quantifier driven aggregation of trends. We primarily employ for this purpose Zadeh's classic calculus of linguistically quantified propositions, which is presumably the most straightforward and intuitively appealing, using the classic minimum operation and mentioning other t‐norms. We also outline the use of the Sugeno and Choquet integrals proposed in our previous papers. We show an application to the absolute-performance-type analysis of time series data on daily quotations of an investment fund over an 8-year period, first presenting an analysis of the characteristic features of the quotations under various (degrees of) granulation, and then listing some of the more interesting and useful summaries obtained. We propose a convenient presentation of linguistic summaries focused on some characteristic feature, exemplified by what happens “almost always,” “very often,” “quite often,” “almost never,” etc. All these analyses are meant to provide means to support a human user in making decisions. © 2010 Wiley Periodicals, Inc.
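Zadeh's calculus of linguistically quantified propositions, used for summaries like "most trends are short," can be sketched as follows. The membership function for "most" and the trend data are illustrative.

```python
# Sketch of Zadeh's calculus for the proposition "most y's are P":
# truth = mu_most(mean of the membership degrees of P over the y's).
# The piecewise-linear membership function for "most" is illustrative.

def mu_most(x):
    """Membership of proportion x in the fuzzy quantifier 'most'."""
    return max(0.0, min(1.0, 2 * x - 0.6))

def truth_most_are(memberships):
    """Truth degree of 'most trends are P' given per-trend degrees of P."""
    return mu_most(sum(memberships) / len(memberships))

# degrees to which five trends are "short" (from some trend analysis):
short = [1.0, 0.8, 0.9, 0.2, 0.6]
t = truth_most_are(short)
```

The extended propositions of the form "most long trends are increasing" additionally combine two membership degrees per trend, classically with the minimum operation (or another t-norm), before aggregating.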

19.
In this work, we propose a conceptual distributed control framework for an electrical grid integrated with distributed renewable energy generation systems, in order to enable the development of the so-called “smart electrical grid.” First, we introduce the key elements and their interactions in the proposed control architecture and discuss the design of the distributed control systems, which are able to coordinate their actions to account for optimization considerations in the system operation. Subsequently, we focus on a specific wind/solar energy generation system connected to a reverse osmosis water desalination system and the electrical grid, and design two supervisory predictive controllers via model predictive control to operate the integrated system, taking into account short-term and long-term optimal maintenance and operation considerations, respectively. Simulations are carried out to illustrate the applicability and effectiveness of the proposed approach.

20.
The concept of games with incompetence has been introduced to better represent games where players may not be capable of executing the strategies they select. In particular, this paper introduces incompetence into bimatrix games and investigates the properties of such games. The results obtained describe both the general dependence of “extreme Nash equilibrium payoffs” on incompetence and special behaviour arising in particular cases. The dependence of the payoffs can be complex and include non-linearities and transition points. Transition points occur when kernels change and may result in the number of “extreme Nash equilibria” changing. Understanding these changes allows the determination of the benefits of regimes that seek to decrease a player's incompetence. While the games we consider are normally static, in our context there is a hidden dynamics resulting from the fact that players will strive to improve their equilibrium payoffs by changing their incompetence levels. This might require training, in the case of games like tennis, or the purchase of new equipment costing billions of dollars, in the case of military applications.
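The effect of incompetence on payoffs can be sketched with an execution-matrix formulation: row i of a player's matrix gives the probabilities of actually executing each strategy when strategy i is selected, so the row player's effective payoff matrix becomes Q1 A Q2^T. This is a hypothetical sketch consistent with the abstract's description; the matrices below are invented for illustration.

```python
# Sketch of incompetence in a bimatrix game: each player selects a
# strategy but executes it imperfectly via an execution matrix whose
# row i gives the probabilities of actually playing each strategy when
# strategy i was selected. All matrices are illustrative.

def effective_payoffs(A, Q1, Q2):
    """Row player's effective payoff matrix Q1 * A * Q2^T."""
    return [[sum(Q1[i][k] * A[k][l] * Q2[j][l]
                 for k in range(len(A)) for l in range(len(A[0])))
             for j in range(len(Q2))] for i in range(len(Q1))]

A  = [[3, 0], [5, 1]]          # row player's payoffs under perfect execution
Q1 = [[0.9, 0.1], [0.2, 0.8]]  # row player's incompetence
Q2 = [[1.0, 0.0], [0.0, 1.0]]  # column player fully competent (identity)
E = effective_payoffs(A, Q1, Q2)
```

Varying the entries of Q1 continuously and recomputing the equilibria of the effective game is what produces the non-linearities and transition points the paper studies.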
