Similar Documents
 20 similar documents found.
1.
In a seminal paper, Phan Minh Dung (Artif. Intell. 77(2), 321–357, 1995) developed the theory of abstract argumentation frameworks (AFs), which has remained a pivotal point of reference for research in AI and argumentation ever since. This paper assesses the merits of Dung’s theory from an epistemological point of view. It argues that, despite its prominence in AI, the theory of AFs is epistemologically flawed. More specifically, abstract AFs do not provide a normatively adequate model for the evaluation of rational, multi-proponent controversy. Different interpretations of Dung’s theory may be distinguished. Dung’s intended interpretation collides with basic principles of rational judgement suspension. The currently prevailing knowledge-base interpretation ignores relevant arguments when assessing proponent positions in a debate. It is finally suggested that abstract AFs are better understood as a paraconsistent logic rather than as a theory of real argumentation.
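The basic objects of Dung’s theory are easy to compute with. As a neutral illustration (not drawn from the paper, whose contribution is a conceptual critique), the following Python sketch represents an AF as a set of arguments with an attack relation and computes its grounded extension by iterating Dung’s characteristic function; the example framework is hypothetical.

```python
def grounded_extension(arguments, attacks):
    """Grounded extension of an abstract argumentation framework.

    arguments: iterable of argument labels
    attacks:   set of (attacker, attacked) pairs
    """
    attackers_of = {a: {x for (x, y) in attacks if y == a} for a in arguments}

    def defended_by(s):
        # Arguments all of whose attackers are themselves attacked by s.
        return {a for a in arguments
                if all(any((d, b) in attacks for d in s) for b in attackers_of[a])}

    extension = set()
    while True:
        nxt = defended_by(extension)
        if nxt == extension:
            return extension
        extension = nxt

# Hypothetical framework: a attacks b, b attacks c.
print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))  # {'a', 'c'}
```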

2.
Glushkov’s general approaches to the problem of artificial intelligence are considered. In particular, the history of research conducted in line with the Evidence Algorithm program initiated by V. M. Glushkov is described in detail. The results obtained within this program are analyzed.

3.
4.
Dijkstra (Commun ACM 17(11):643–644, 1974) introduced the notion of self-stabilizing algorithms and presented three such algorithms for the problem of mutual exclusion on a ring of n processors. The third algorithm is the most interesting of the three but is rather non-intuitive. Dijkstra (Distrib Comput 1:5–6, 1986) presented a proof of its correctness, but the question of its worst-case complexity, that is, an upper bound on the number of moves the algorithm makes until it stabilizes, remained open. In this paper we settle this question and prove an upper bound of $3\frac{13}{18} n^2 + O(n)$ for the complexity of this algorithm. We also show a lower bound of $1\frac{5}{6} n^2 - O(n)$ for the worst-case complexity. For computing the upper bound, we use two techniques: potential functions and amortized analysis. We also present a new three-state self-stabilizing algorithm for mutual exclusion and show a tight bound of $\frac{5}{6} n^2 + O(n)$ for its worst-case complexity. Beauquier and Debas (Proceedings of the Second Workshop on Self-Stabilizing Systems, pp 17.1–17.13, 1995) presented a similar three-state algorithm, with an upper bound of $5\frac{3}{4} n^2 + O(n)$ and a lower bound of $\frac{1}{8} n^2 - O(n)$ for its stabilization time. For this algorithm we prove an upper bound of $1\frac{1}{2} n^2 + O(n)$ and show a lower bound of $n^2 - O(n)$. As far as worst-case performance is concerned, the algorithm of Beauquier and Debas (1995) is better than the one of Dijkstra (1974), and our algorithm is better than both.
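The paper analyzes Dijkstra’s third (three-state) algorithm. As background intuition for what counting "moves until stabilization" means, the sketch below simulates Dijkstra’s simpler first algorithm, the K-state algorithm, on a ring under a randomly chosen central daemon and reports the largest number of moves observed over many random initial configurations; it is an illustration of the self-stabilization setting, not of the algorithms analyzed in the paper, and the ring size and K are arbitrary choices.

```python
import random

def privileged(x, K):
    """Indices of processes holding a privilege in Dijkstra's K-state algorithm."""
    n = len(x)
    privs = []
    if x[0] == x[n - 1]:                                    # the bottom machine
        privs.append(0)
    privs += [i for i in range(1, n) if x[i] != x[i - 1]]   # the other machines
    return privs

def moves_until_stable(n, K, rng):
    """Run from a random configuration under a random central daemon; return #moves."""
    x = [rng.randrange(K) for _ in range(n)]
    moves = 0
    while True:
        privs = privileged(x, K)
        if len(privs) == 1:                  # stabilized: exactly one privilege
            return moves
        i = rng.choice(privs)                # daemon picks one privileged process
        if i == 0:
            x[0] = (x[0] + 1) % K
        else:
            x[i] = x[i - 1]
        moves += 1

rng = random.Random(0)
n, K = 10, 10                                # K >= n guarantees stabilization
print(max(moves_until_stable(n, K, rng) for _ in range(1000)), "moves (worst of 1000 runs)")
```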

5.
Herman’s algorithm is a synchronous randomized protocol for achieving self-stabilization in a token ring consisting of N processes. The interaction of tokens makes the dynamics of the protocol very difficult to analyze. In this paper we study the distribution of the time to stabilization, assuming that there are three tokens in the initial configuration. We show, for arbitrary N and for an arbitrary timeout t, that the probability of stabilization within time t is minimized by choosing as the initial three-token configuration the configuration in which the tokens are placed equidistantly on the ring. Our result strengthens a corollary of a theorem of McIver and Morgan (Inf. Process. Lett. 94(2):79–84, 2005), which states that the expected stabilization time is minimized by the equidistant configuration.
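For readers who want to experiment with the claim, the following sketch estimates, by Monte Carlo simulation rather than the exact analysis of the paper, the probability that Herman’s protocol with three tokens stabilizes within a timeout t, for an equidistant and a clustered initial configuration; the ring size, timeout, and number of runs are illustrative choices.

```python
import random
from collections import Counter

def stabilization_time(positions, N, rng):
    """Herman's protocol: each token stays or moves clockwise with probability 1/2;
    tokens that land on the same process annihilate in pairs.
    Returns the number of synchronous steps until one token remains."""
    tokens = list(positions)
    t = 0
    while len(tokens) > 1:
        moved = [(p + rng.randint(0, 1)) % N for p in tokens]
        counts = Counter(moved)
        tokens = [p for p in moved if counts[p] % 2 == 1]   # pairs annihilate
        t += 1
    return t

def prob_stable_within(positions, N, timeout, runs, rng):
    return sum(stabilization_time(positions, N, rng) <= timeout for _ in range(runs)) / runs

rng = random.Random(1)
N, timeout, runs = 15, 40, 20000
equidistant = (0, 5, 10)        # tokens spread evenly on the ring
clustered = (0, 1, 2)           # tokens packed together
print("equidistant:", prob_stable_within(equidistant, N, timeout, runs, rng))
print("clustered:  ", prob_stable_within(clustered, N, timeout, runs, rng))
```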

6.
Neural Computing and Applications - Cognitive impairment must be diagnosed in Alzheimer’s disease as early as possible. Early diagnosis allows the person to receive effective treatment...

7.
In this paper we investigate the entanglement nature of quantum states generated by Grover’s search algorithm by means of algebraic geometry. More precisely, we establish a link between the entanglement of states generated by the algorithm and auxiliary algebraic varieties built from the set of separable states. This new perspective enables us to propose qualitative interpretations of earlier numerical results obtained by M. Rossi et al. We also illustrate our approach with a couple of examples investigated in detail.
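As a purely numerical companion to the entanglement question (separate from the algebraic-geometry analysis of the paper), the sketch below simulates Grover’s algorithm on a small register with NumPy and tracks a simple entanglement witness, the linear entropy of the reduced state of the first qubit; the register size and the marked item are arbitrary choices.

```python
import numpy as np

n = 4                          # number of qubits (illustrative)
N = 2 ** n
marked = 6                     # index of the marked item (illustrative)

state = np.full(N, 1 / np.sqrt(N))           # uniform superposition |s>

def grover_iteration(psi):
    psi = psi.copy()
    psi[marked] *= -1                         # oracle: phase flip on the marked item
    return 2 * psi.mean() - psi               # diffusion: inversion about the mean

def linear_entropy_first_qubit(psi):
    """1 - Tr(rho^2) for qubit 0 (the most significant bit of the index);
    it is 0 exactly when that qubit is unentangled from the rest."""
    m = psi.reshape(2, N // 2)                # split amplitudes: qubit 0 vs the rest
    rho = m @ m.conj().T                      # 2x2 reduced density matrix
    return 1 - np.real(np.trace(rho @ rho))

steps = int(np.floor(np.pi / 4 * np.sqrt(N)))
for k in range(1, steps + 1):
    state = grover_iteration(state)
    print(f"iteration {k}: P(marked) = {abs(state[marked])**2:.3f}, "
          f"linear entropy = {linear_entropy_first_qubit(state):.3f}")
```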

8.
In this paper we consider the problem of minimization of deterministic finite automata (DFA) with reference to Hopcroft’s algorithm. Hopcroft’s algorithm has several degrees of freedom, so there can exist different executions that lead to different sequences of refinements of the set of states up to the final partition. We find an infinite family of binary automata for which such a process is unique, whatever strategy is chosen. Some recent papers (cf. Berstel and Carton (2004) [3], Castiglione et al. (2008) [6] and Berstel et al. (2009) [1]) have been devoted to finding families of automata for which Hopcroft’s algorithm has its worst execution time. They are unary automata associated with circular words. However, automata minimization can also be achieved in linear time when the alphabet has only one letter (cf. Paige et al. (1985) [14]), but such a method does not seem to extend to larger alphabets. So, in this paper we address the tightness of Hopcroft’s algorithm when the alphabet contains more than one letter. In particular, we define an infinite family of binary automata representing the worst case of Hopcroft’s algorithm, for each execution. They are automata associated with particular trees, and we deepen the connection between the refinement process of Hopcroft’s algorithm and the combinatorial properties of such trees.
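For reference, a compact Python rendition of Hopcroft’s partition-refinement algorithm is given below. It follows the textbook formulation (a worklist of splitter blocks, keeping the smaller half after a split) rather than the particular execution strategies studied in the paper, and the example DFA is hypothetical.

```python
def hopcroft_minimize(states, alphabet, delta, finals):
    """Return the coarsest partition of `states` into equivalence classes.

    delta: dict mapping (state, symbol) -> state (a complete DFA is assumed).
    """
    finals = frozenset(finals)
    non_finals = frozenset(states) - finals
    partition = {b for b in (finals, non_finals) if b}
    worklist = {min(partition, key=len)} if len(partition) == 2 else set(partition)

    # Pre-compute inverse transitions: which states reach q on symbol c.
    inv = {(q, c): set() for q in states for c in alphabet}
    for (q, c), r in delta.items():
        inv[(r, c)].add(q)

    while worklist:
        splitter = worklist.pop()
        for c in alphabet:
            x = {q for s in splitter for q in inv[(s, c)]}   # states entering splitter on c
            for block in list(partition):
                inter, diff = block & x, block - x
                if inter and diff:
                    partition.remove(block)
                    partition |= {inter, diff}
                    if block in worklist:
                        worklist.remove(block)
                        worklist |= {inter, diff}
                    else:
                        worklist.add(min(inter, diff, key=len))  # keep the smaller half
    return partition

# Hypothetical 4-state DFA over {a, b}; states 1 and 2 are equivalent.
states = {0, 1, 2, 3}
delta = {(0, 'a'): 1, (0, 'b'): 2, (1, 'a'): 3, (1, 'b'): 3,
         (2, 'a'): 3, (2, 'b'): 3, (3, 'a'): 3, (3, 'b'): 3}
print(hopcroft_minimize(states, {'a', 'b'}, delta, finals={3}))
```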

9.
10.
Herman’s self-stabilisation algorithm provides a simple randomised solution to the problem of recovering from faults in an N-process token ring. However, a precise analysis of the algorithm’s maximum execution time proves to be surprisingly difficult. McIver and Morgan have conjectured that the worst-case behaviour results from a ring configuration of three evenly spaced tokens, giving an expected time of approximately $0.15N^2$. However, the tightest upper bound proved to date is $0.64N^2$. We apply probabilistic verification techniques, using the probabilistic model checker PRISM, to analyse the conjecture, showing it to be correct for all sizes of the ring that can be exhaustively analysed. We furthermore demonstrate that the worst-case execution time of the algorithm can be reduced by using a biased coin.

11.
Timely extraction of reliable land cover change information is increasingly needed at a wide continuum of scales. Few methods developed in previous studies have proved to be robust when noise, changes in atmospheric and illumination conditions, and other scene- and sensor-dependent variables are present in the multitemporal images. In this study, we developed a new method based on cross-correlogram spectral matching (CCSM) with the aim of identifying interannual land cover changes from time-series Normalized Difference Vegetation Index (NDVI) data. In addition, a new change index is proposed that integrates two parameters measured from the cross-correlogram: the root mean square (RMS) and $(1 - R_{\max})$, where $R_{\max}$ is the maximum correlation coefficient in a correlogram. Subsequently, a method was proposed to derive the optimal threshold for judging ‘change’ or ‘non-change’ from the acquired change index. A pilot study was carried out using SPOT VGT-S images acquired in 1998 and 2000 at Xianghai Park in Jilin Province. The results indicate that CCSM is superior to traditional Change Vector Analysis (CVA) when noise is present in the data. Because of an error associated with the ground-truthing data, a more comprehensive assessment of the developed method is still in progress using better ground-truthing data and images at a larger time interval. It is worth noting that this method can be applied not only to NDVI datasets but also to other index datasets reflecting surface conditions sampled at different time intervals. In addition, it can be applied to datasets from different satellites without the need to normalize sensor differences.
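The paper’s exact definitions of the correlogram-derived RMS and of the combined change index are not reproduced in this abstract, so the sketch below shows one plausible reading, stated as an assumption: correlate a pixel’s two NDVI profiles at a range of lags, take R_max as the peak correlation, take RMS as the root-mean-square difference between the two profiles, and combine the two terms into a hypothetical change index.

```python
import numpy as np

def cross_correlogram(x, y, max_lag):
    """Pearson correlation between two equal-length time series at lags -max_lag..max_lag."""
    lags = range(-max_lag, max_lag + 1)
    corr = []
    for lag in lags:
        if lag < 0:
            a, b = x[:lag], y[-lag:]
        elif lag > 0:
            a, b = x[lag:], y[:-lag]
        else:
            a, b = x, y
        corr.append(np.corrcoef(a, b)[0, 1])
    return np.array(list(lags)), np.array(corr)

def change_index(ndvi_year1, ndvi_year2, max_lag=3):
    """Hypothetical change index combining an RMS term and (1 - R_max).
    Note: the paper derives its RMS from the correlogram itself, which may differ."""
    _, corr = cross_correlogram(ndvi_year1, ndvi_year2, max_lag)
    r_max = np.nanmax(corr)
    rms = np.sqrt(np.mean((ndvi_year1 - ndvi_year2) ** 2))
    return np.hypot(rms, 1.0 - r_max)           # assumed combination, not the paper's

# Synthetic 10-day composite NDVI profiles for one pixel (36 values per year).
t = np.linspace(0, 2 * np.pi, 36)
rng = np.random.default_rng(0)
year1 = 0.4 + 0.3 * np.sin(t)
unchanged_year2 = year1 + 0.02 * rng.normal(size=36)
changed_year2 = 0.2 + 0.1 * np.cos(2 * t)       # different seasonal profile after change
print("unchanged pixel:", round(change_index(year1, unchanged_year2), 3))
print("changed pixel:  ", round(change_index(year1, changed_year2), 3))
```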

12.
13.
Dijkstra’s algorithm (DA) is one of the most useful and efficient graph-search algorithms, and it can be modified to solve many different problems. It is usually presented as a tool for finding a mapping which, for every vertex v, returns a shortest-length path to v from a fixed single source vertex. However, it is well known that DA also returns a correct optimal mapping when multiple sources are considered and for path-value functions more general than the standard path length. The use of DA in such a general setting can reduce many image processing operations to the computation of an optimum-path forest with a path-cost function defined in terms of local image attributes. In this paper, we describe the general properties of a path-value function defined on an arbitrary finite graph which, provably, ensure that Dijkstra’s algorithm indeed returns an optimal mapping. We also provide examples showing that the properties presented in a 2004 TPAMI paper on the image foresting transform, which were supposed to imply proper behavior of DA, are actually insufficient. Finally, we describe the properties of the path-value function of a graph that are provably necessary for the algorithm to return an optimal mapping.
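To make the setting concrete, here is a generic sketch of Dijkstra’s algorithm run with multiple sources and a pluggable path-value function, for example the maximum edge weight along a path, a choice common in image-foresting-transform applications. It illustrates the general setting only; the sufficient and necessary conditions, and the counterexamples, are developed in the paper itself.

```python
import heapq

def dijkstra_general(n, edges, sources, extend):
    """Optimum-path forest with a general path-value function.

    edges:   dict u -> list of (v, w) arcs
    sources: iterable of seed vertices (path value 0 at the seeds)
    extend:  function (cost_so_far, edge_weight) -> new path cost; the paper
             studies exactly which properties of such functions make DA correct.
    """
    INF = float("inf")
    cost = {v: INF for v in range(n)}
    pred = {v: None for v in range(n)}
    heap = []
    for s in sources:
        cost[s] = 0
        heapq.heappush(heap, (0, s))
    while heap:
        c, u = heapq.heappop(heap)
        if c > cost[u]:
            continue                      # stale heap entry
        for v, w in edges.get(u, []):
            new_c = extend(c, w)
            if new_c < cost[v]:
                cost[v], pred[v] = new_c, u
                heapq.heappush(heap, (new_c, v))
    return cost, pred

edges = {0: [(1, 4), (2, 1)], 2: [(1, 2), (3, 7)], 1: [(3, 3)]}
# Standard shortest paths (sum of weights) from source 0:
print(dijkstra_general(4, edges, [0], lambda c, w: c + w)[0])
# "Bottleneck" cost (max edge weight on the path), with sources 0 and 3:
print(dijkstra_general(4, edges, [0, 3], lambda c, w: max(c, w))[0])
```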

14.
In this paper, a crack identification approach is presented for detecting crack depth and location in beam-like structures. For this purpose, a new beam element with a single transverse edge crack, at an arbitrary position in the element and of any depth, is developed. The crack is not physically modeled within the element; instead, its effect on the local flexibility of the element is considered by modifying the element stiffness as a function of the crack’s depth and position. The development is based on a simplified model, in which each crack is substituted by a corresponding linear rotational spring connecting two adjacent elastic parts. The localized spring may be represented based on linear fracture mechanics theory. The components of the stiffness matrix for the cracked element are derived using the conjugate beam concept and Betti’s theorem, and are finally presented in closed-form expressions. The proposed beam element is efficiently employed for solving the forward problem (i.e., obtaining accurate natural frequencies of beam-like structures when the cracks’ characteristics are known). To validate the proposed element, results obtained with the new element are compared with two-dimensional (2D) finite element results as well as available experimental measurements. Moreover, knowing the natural frequencies, an inverse problem is established in which the cracks’ location and depth are identified. In the inverse approach, an optimization problem based on the new beam element and genetic algorithms (GAs) is solved to search for the solution. The proposed approach is verified through various examples on cracked beams with different damage scenarios. It is shown that the present algorithm is able to identify various crack configurations in a cracked beam.
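The inverse step can be illustrated independently of the proposed beam element. The sketch below uses a deliberately simplified surrogate forward model, a hypothetical stand-in for the cracked-beam element rather than the formulation derived in the paper, together with a minimal genetic algorithm that searches for the crack position and depth ratio reproducing a set of 'measured' natural frequencies.

```python
import math
import random

def surrogate_frequencies(pos, depth):
    """Hypothetical stand-in for the cracked-beam forward model: the first three
    natural frequencies of an intact beam, each reduced by a crack-dependent penalty."""
    base = [50.0, 200.0, 450.0]
    return [f * (1.0 - 0.3 * depth * math.sin(math.pi * (i + 1) * pos) ** 2)
            for i, f in enumerate(base)]

def fitness(ind, measured):
    # Negative squared error between modelled and 'measured' frequencies (maximized).
    return -sum((f - m) ** 2 for f, m in zip(surrogate_frequencies(*ind), measured))

def genetic_search(measured, pop_size=60, generations=150, rng=random.Random(2)):
    # Individuals are (relative crack position in [0, 1], depth ratio in [0, 1]).
    pop = [(rng.random(), rng.random() * 0.5) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(ind, measured), reverse=True)
        survivors = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            child = [(x + y) / 2 + rng.gauss(0, 0.02) for x, y in zip(a, b)]  # crossover + mutation
            children.append(tuple(min(max(g, 0.0), 1.0) for g in child))      # clamp to [0, 1]
        pop = survivors + children
    return max(pop, key=lambda ind: fitness(ind, measured))

true_pos, true_depth = 0.35, 0.25
measured = surrogate_frequencies(true_pos, true_depth)
# The toy model is symmetric about mid-span, so pos may be recovered as pos or 1 - pos.
print("identified (pos, depth):", genetic_search(measured))
```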

15.
We introduce a hierarchical approach for secure multicast in which the rekeying of groups of users is performed through a method based on Euclid’s algorithm for computing the GCD. We consider tree arrangements of users that decrease bandwidth requirements, as protocols of the same nature do, but we also show that the computational requirements are lower than in other similar approaches. We further introduce a distributed protocol organized in groups with group managers, which not only helps to decrease the size of rekeying messages with respect to a centralized approach, but also increases the security level concerning the authentication of users and distributed information.
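The abstract does not spell out the rekeying construction, so the sketch below shows only the arithmetic primitive the scheme is stated to build on: the extended Euclidean algorithm, which returns the GCD together with Bézout coefficients and hence modular inverses; the numbers in the example are illustrative only.

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and a*x + b*y == g."""
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y

def mod_inverse(a, m):
    g, x, _ = extended_gcd(a, m)
    if g != 1:
        raise ValueError("no inverse: a and m are not coprime")
    return x % m

# Illustrative numbers only: a member secret must be coprime with the modulus
# for an inverse (and hence any key-recovery step built on it) to exist.
g, x, y = extended_gcd(240, 46)
print(g, x, y, 240 * x + 46 * y)   # 2 -9 47 2
print(mod_inverse(17, 3120))        # 2753
```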

16.
We show that, under the matrix product state formalism, the states produced in Shor’s algorithm can be represented using \(O(\max (4lr^2, 2^{2l}))\) space, where l is the number of bits in the number to factorise and r is the order, i.e., the solution to the related order-finding problem. The reduction in space compared to an amplitude-formalism approach is significant, allowing simulations as large as 42 qubits to be run on a single processor with 32 GB RAM. This approach is readily adapted to a distributed memory environment, and we have simulated a 45-qubit case using 8 cores with 16 GB RAM in approximately 1 h.
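To get a feel for the quoted bound, the short calculation below evaluates max(4lr^2, 2^{2l}) for a few illustrative (l, r) pairs and converts it to memory assuming one complex double (16 bytes) per stored value; the pairs and the per-value size are assumptions, not the cases simulated in the paper.

```python
def mps_space_bound(l, r, bytes_per_value=16):
    """Evaluate the bound max(4*l*r^2, 2^(2l)) quoted in the abstract.

    Returns (number of stored values, approximate memory in GiB, assuming
    16 bytes per complex double; the per-value size is an assumption)."""
    values = max(4 * l * r * r, 2 ** (2 * l))
    return values, values * bytes_per_value / 2 ** 30

# Illustrative (l, r) pairs only; not the cases run in the paper.
for l, r in [(8, 100), (16, 10000), (21, 500000)]:
    values, gib = mps_space_bound(l, r)
    print(f"l={l:2d}, r={r:>7d}: {values:>16,d} values  ~{gib:10.2f} GiB")
```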

17.
In a ground-breaking paper that appeared in 1983, Ben-Or presented the first randomized algorithm to solve consensus in an asynchronous message-passing system where processes can fail by crashing. Although more efficient randomized algorithms were subsequently proposed, Ben-Or’s algorithm is still the simplest and most elegant one. For this reason, it is often taught in distributed computing courses and it appears in several textbooks. Even though Ben-Or’s algorithm is widely known and very simple, surprisingly a proof of correctness of the algorithm has not yet appeared: previously published proofs make simplifying assumptions. Specifically, they either assume that f < n/3 (where n is the total number of processes and f is the maximum number of processes that may crash) or that the adversary is weak, that is, it cannot see the process states or the content of the messages. In this paper, we present a correctness proof for Ben-Or’s randomized consensus algorithm for the case of f < n/2 process crashes and a strong adversary (i.e., one that can see the process states and message contents, and schedule the process steps and message receipts accordingly). To the best of our knowledge, this is the first full proof of this classical algorithm. We also demonstrate a counterintuitive problem that may occur if one uses the well-known abstraction of a “global coin” to modularize and speed up randomized consensus algorithms such as Ben-Or’s. Specifically, we show that, contrary to common belief, the use of a global coin can sometimes be deleterious rather than beneficial: instead of speeding up Ben-Or’s algorithm, the use of a global coin in this algorithm may actually prevent termination.
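To convey the two-phase structure of the protocol, the sketch below runs a deliberately simplified, lock-step simulation of Ben-Or’s binary consensus with a random (not adversarial) scheduler and no actual crashes: in each phase every process samples only n − f of the round’s messages. It is an illustration of the algorithm’s mechanics, not a substitute for the correctness argument in the paper.

```python
import random

def ben_or_round(values, f, rng):
    """One lock-step round of Ben-Or's binary consensus (simplified simulation).

    values: current 0/1 estimate of each of the n processes.
    Every process samples n - f of the round's messages at random (a random,
    non-adversarial scheduler, with no actual crashes).
    Returns (new_values, decisions), where decisions[i] is 0, 1 or None."""
    n = len(values)
    # Phase 1 (report): propose v for phase 2 only if more than n/2 of the
    # sampled reports carry the same value v; otherwise propose '?' (None).
    proposals = []
    for _ in range(n):
        seen = rng.sample(values, n - f)
        majority = [v for v in (0, 1) if seen.count(v) > n / 2]
        proposals.append(majority[0] if majority else None)
    # Phase 2 (proposal): decide v on >= f + 1 identical proposals, adopt v on
    # at least one proposal, otherwise flip a fair coin.
    new_values, decisions = [], []
    for _ in range(n):
        seen = rng.sample(proposals, n - f)
        decided = next((v for v in (0, 1) if seen.count(v) >= f + 1), None)
        if decided is not None:
            new_values.append(decided)
        elif any(p is not None for p in seen):
            new_values.append(next(p for p in seen if p is not None))
        else:
            new_values.append(rng.randint(0, 1))
        decisions.append(decided)
    return new_values, decisions

rng = random.Random(3)
n, f = 7, 3                                    # requires f < n/2
values = [rng.randint(0, 1) for _ in range(n)]
for rnd in range(1, 10000):
    values, decisions = ben_or_round(values, f, rng)
    decided = [d for d in decisions if d is not None]
    if decided:
        print(f"round {rnd}: first decisions on value {decided[0]}")
        break
```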

18.
Multimedia Tools and Applications - Infrared imaging frameworks have been broadly utilized in military and civil fields, for example, target recognition, fault diagnosis, fire...

19.
We explored the reliability of detecting a learner’s affect from conversational features extracted from interactions with AutoTutor, an intelligent tutoring system (ITS) that helps students learn by holding a conversation in natural language. Training data were collected in a learning session with AutoTutor, after which the affective states of the learner were rated by the learner, a peer, and two trained judges. Inter-rater reliability scores indicated that the classifications of the trained judges were more reliable than those of the novice judges. Seven data sets that temporally integrated the affective judgments with the dialogue features of each learner were constructed. The first four data sets corresponded to the judgments of the learner, a peer, and two trained judges, while the remaining three data sets combined the judgments of two or more raters. Multiple regression analyses confirmed the hypothesis that dialogue features could significantly predict the affective states of boredom, confusion, flow, and frustration. Machine learning experiments indicated that standard classifiers were moderately successful in discriminating the affective states of boredom, confusion, flow, frustration, and neutral, yielding a peak accuracy of 42% with neutral (chance = 20%) and 54% without neutral (chance = 25%). Individual detections of boredom, confusion, flow, and frustration, when contrasted with neutral affect, had maximum accuracies of 69, 68, 71, and 78%, respectively (chance = 50%). The classifiers that operated on the emotion judgments of the trained judges and on the combined models outperformed those based on the judgments of the novices (i.e., the self and peer). Follow-up classification analyses that assessed the degree to which machine-generated affect labels correlated with affect judgments provided by humans revealed that human-machine agreement was on par with the novice judges (self and peer) but quantitatively lower than with the trained judges. We discuss the prospects of extending AutoTutor into an affect-sensing ITS.

20.
A clinical expert system has been developed for the detection of Parkinson’s Disease (PD). The system extracts features from voice recordings and applies an advanced statistical approach for pattern recognition. The significance of the work lies in the development and use of a novel subject-based Bayesian approach to account for the dependent nature of the data in a replicated-measures design. The ideas underlying this approach are conceptually simple and easy to implement using Gibbs sampling. Available information can be included in the model through the prior distribution. In order to assess the performance of the proposed system, a voice-recording replication-based experiment was specifically conducted to discriminate healthy people from people suffering from PD. The experiment involved 80 subjects, half of them affected by PD. The proposed system is able to discriminate healthy people from people with PD acceptably well, despite the reduced number of subjects in the experiment.
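The paper’s subject-based Bayesian classifier is not reproduced in this abstract. As a minimal illustration of how Gibbs sampling handles replicated measurements per subject, the sketch below fits a normal random-intercept model, with a per-subject effect capturing within-subject dependence, using conjugate updates on synthetic data; the model, priors, and data are assumptions for illustration, not the authors’ formulation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic replicated data: m subjects, k voice-feature measurements each.
m, k = 20, 6
true_mu, true_tau2, true_sigma2 = 1.0, 0.5, 0.2
subject_effect = rng.normal(0.0, np.sqrt(true_tau2), m)
y = true_mu + subject_effect[:, None] + rng.normal(0.0, np.sqrt(true_sigma2), (m, k))

# Gibbs sampler for y_ij = mu + b_i + e_ij, with b_i ~ N(0, tau2), e_ij ~ N(0, sigma2),
# a flat prior on mu and inverse-gamma(1, 1) priors on sigma2 and tau2 (all assumed).
iters, burn = 4000, 1000
mu, sigma2, tau2 = 0.0, 1.0, 1.0
b = np.zeros(m)
draws = []
for it in range(iters):
    # b_i | rest: normal, with precision k/sigma2 + 1/tau2 (this is where the
    # within-subject dependence of the replicated recordings enters).
    prec = k / sigma2 + 1.0 / tau2
    b = rng.normal((y - mu).sum(axis=1) / sigma2 / prec, np.sqrt(1.0 / prec))
    # mu | rest: normal around the grand mean of y_ij - b_i.
    mu = rng.normal((y - b[:, None]).mean(), np.sqrt(sigma2 / (m * k)))
    # sigma2 | rest: inverse-gamma update from the within-subject residuals.
    resid = y - mu - b[:, None]
    sigma2 = 1.0 / rng.gamma(1.0 + m * k / 2.0, 1.0 / (1.0 + 0.5 * (resid ** 2).sum()))
    # tau2 | rest: inverse-gamma update from the subject effects.
    tau2 = 1.0 / rng.gamma(1.0 + m / 2.0, 1.0 / (1.0 + 0.5 * (b ** 2).sum()))
    if it >= burn:
        draws.append((float(mu), float(sigma2), float(tau2)))

print("posterior means: mu=%.2f  sigma2=%.2f  tau2=%.2f" % tuple(np.mean(draws, axis=0)))
```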

