Similar documents
20 similar documents were found (search time: 15 ms).
1.
2.
In a recent paper, Peng and Liu (Neural Comput Appl 20:543–547, 2011) investigated the pth moment stability of stochastic Grossberg–Hopfield neural networks with Markov volatilities by means of Mao et al. (Bernoulli 6:73–90, 2000, Theorem 4.1). We point out that Mao et al. (Bernoulli 6:73–90, 2000, Theorem 4.1) established pth moment exponential stability for a class of stochastic dynamical systems with constant delay, so that theorem does not apply to the case of variable time delays. It is also worth emphasizing that Peng and Liu (Neural Comput Appl 20:543–547, 2011) used Mao et al. (Bernoulli 6:73–90, 2000, Theorem 4.1) to establish pth moment exponential stability for Grossberg–Hopfield neural networks with variable delays; there is therefore a gap between Peng and Liu (Neural Comput Appl 20:543–547, 2011, Theorem 1) and Mao et al. (Bernoulli 6:73–90, 2000, Theorem 4.1). In this paper, we fill this gap. Moreover, a numerical example is provided to demonstrate the effectiveness and applicability of the theoretical results.
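For orientation, the following is a minimal sketch of the standard setting behind this discussion, written in generic notation rather than quoted from either cited paper: a stochastic Grossberg–Hopfield network with a variable delay τ(t), together with the usual definition of pth moment exponential stability.

```latex
% Generic stochastic Grossberg--Hopfield network with variable delay tau(t)
% (notation assumed for illustration, not quoted from the cited papers):
\[
  \mathrm{d}x(t) = \bigl[-Bx(t) + A\,g(x(t)) + C\,g(x(t-\tau(t)))\bigr]\,\mathrm{d}t
                 + \sigma\bigl(x(t),\,x(t-\tau(t)),\,t\bigr)\,\mathrm{d}W(t).
\]
% The trivial solution is pth moment exponentially stable if there exist
% constants K >= 1 and lambda > 0 such that, for every initial datum xi,
\[
  \mathbb{E}\,\lvert x(t;\xi)\rvert^{p} \;\le\; K\,\lVert \xi \rVert^{p}\, e^{-\lambda t},
  \qquad t \ge 0 .
\]
```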

3.
Samal and Henderson claim that any parallel algorithm for enforcing arc consistency must, in the worst case, take Ω(na) sequential steps, where n is the number of nodes and a is the number of labels per node. We argue that Samal and Henderson's argument makes assumptions about how processors are used, and we give a counterexample that enforces arc consistency in a constant number of steps using O(n^2 a^2 2^{na}) processors. It is possible that the lower bound holds for a polynomial number of processors; if such a lower bound were proven, it would answer an important open question in theoretical computer science concerning the relation between the complexity classes P and NC. The strongest existing lower bound for the arc consistency problem states that it cannot be solved in polylogarithmic time unless P = NC.
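To make the problem concrete, here is a small sequential arc-consistency sketch in Python (an AC-3-style fixpoint over a toy constraint network; the variable names and example data are ours, and this is the underlying problem, not the constant-step parallel construction discussed above):

```python
# Minimal sequential arc-consistency sketch (an AC-3-style fixpoint), for
# illustration only -- the problem being parallelized, not the parallel construction.
from collections import deque

def enforce_arc_consistency(domains, constraints):
    """domains: {var: set(labels)}; constraints: {(x, y): set of allowed (vx, vy) pairs}."""
    # Work queue of directed arcs (x, y) for every constraint, in both directions.
    queue = deque()
    for (x, y) in constraints:
        queue.append((x, y))
        queue.append((y, x))

    def allowed(x, vx, y, vy):
        if (x, y) in constraints:
            return (vx, vy) in constraints[(x, y)]
        return (vy, vx) in constraints[(y, x)]

    while queue:
        x, y = queue.popleft()
        # Remove labels of x that have no supporting label in y's domain.
        unsupported = {vx for vx in domains[x]
                       if not any(allowed(x, vx, y, vy) for vy in domains[y])}
        if unsupported:
            domains[x] -= unsupported
            # Re-examine arcs pointing at x, since supports may have vanished.
            for (u, v) in constraints:
                if v == x:
                    queue.append((u, x))
                if u == x:
                    queue.append((v, x))
    return domains

# Tiny example: X and Y constrained so that only the pair (1, 2) is allowed.
doms = {"X": {1, 2}, "Y": {2}}
cons = {("X", "Y"): {(1, 2)}}
print(enforce_arc_consistency(doms, cons))  # X's domain shrinks to {1}
```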

4.
I begin by tracing some of the confusions regarding levels and reduction to a failure to distinguish two different principles according to which theories can be viewed as hierarchically arranged: epistemic authority and ontological constitution. I then argue that the notion of levels relevant to the debate between symbolic and connectionist paradigms of mental activity answers to neither of these models, but is rather correlative to the hierarchy of functional decompositions of cognitive tasks characteristic of homuncular functionalism. Finally, I suggest that the incommensurability of the intentional and extensional vocabularies constitutes a strong prima facie reason to conclude that there is little likelihood of filling in the story of Bechtel's missing level in such a way as to bridge the gap between such homuncular functionalism and his own model of mechanistic explanation.

5.
In their joint paper entitled “The Replication of the Hard Problem of Consciousness in AI and BIO-AI” (Boltuc et al., Replication of the hard problem of consciousness in AI and Bio-AI: An early conceptual framework, 2008), Nicholas and Piotr Boltuc suggest that machines could be equipped with phenomenal consciousness, that is, subjective consciousness that satisfies Chalmers's hard problem (we abbreviate the hard problem of consciousness as “H-consciousness”). The claim is that if we knew the inner workings of phenomenal consciousness and could understand its precise operation, we could instantiate such consciousness in a machine. This claim, called the extra-strong AI thesis, is important because, if true, it would demystify the privileged-access problem of first-person consciousness and cast it as an empirical problem of science rather than a fundamental question of philosophy. A core assumption of the extra-strong AI thesis is that there is no logical argument precluding the implementation of H-consciousness in an organic or inorganic machine, provided we understand its algorithm. Another way of framing this conclusion is that there is nothing special about H-consciousness as compared to any other process: in the same way that we do not preclude a machine from implementing photosynthesis, we also do not preclude a machine from implementing H-consciousness. While one may be more difficult in practice, it is a problem of science and engineering, and no longer a philosophical question. I propose that Boltuc's conclusion, while plausible and convincing, comes at a very high price: the argument given for his conclusion does not exclude any conceivable process from machine implementation. In short, if we make some assumptions about the equivalence of a rough notion of algorithm and then tie this to human understanding, all logical preconditions vanish and the argument grants that any process can be implemented in a machine. The purpose of this paper is to comment on the argument for his conclusion and to offer additional properties of H-consciousness that can be used to make the conclusion falsifiable through scientific investigation rather than relying on the limits of human understanding.

6.
Microsystem Technologies - These commentaries show that the heat generation/absorption parameter (γ1) is dimensionless only if the parameter (m) = 1, which means constant surface...  相似文献   

7.
8.
Sharma et al. have investigated the performance of a two-layered fractional-order fuzzy logic controller (TL-FOFLC) for a 2-link rigid planar robotic manipulator with payload. In that work, the performance of the TL-FOFLC was compared with two-layered FLC (TL-FLC), single-layered FLC (SL-FLC) and conventional proportional-integral-derivative (PID) controllers for trajectory tracking, model uncertainties and disturbance rejection. In this comment, it is pointed out that the work omits several essential parameters, and therefore it is not possible for the reader to validate all the claimed results of Sharma et al. (2016). Six numerical values, namely three gains for each of the two PID controllers used, are found to be unreported, in addition to the six gains for each of the two SL-FLCs used. Since the performance of the PIDs and the SL-FLCs depends strongly on their tuned gains, it is concluded that the reported performance of these controllers cannot be validated.
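For context, the sketch below shows a generic discrete-time PID update (textbook form, with placeholder gain values that are not taken from Sharma et al.); it illustrates why the three gains of each PID controller must be reported before tracking results can be reproduced:

```python
# Minimal discrete-time PID controller sketch (textbook form).
# The gains below are placeholders, not values from the commented paper.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # The control signal depends directly on all three gains,
        # so omitting any of them makes the reported response unreproducible.
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical gains: changing any of them changes the resulting tracking error.
controller = PID(kp=10.0, ki=1.0, kd=0.5, dt=0.01)
u = controller.update(setpoint=1.0, measurement=0.2)
```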

9.
10.
The performances of a non-linear model-reference adaptive control system and of a classical linear control system are compared. The problem of a speed control loop for a DC drive system with a variable moment-of-inertia load is considered for the comparison exercise. The characteristics of the two control schemes are compared when (a) the plant input saturates, (b) step changes in the load moment of inertia occur, (c) parameter variations exceed the design assumptions and (d) external disturbance signals are applied to the output.
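As a rough illustration of what a model-reference adaptive scheme does, here is a minimal MIT-rule sketch for a first-order plant with an unknown gain; all numerical values are hypothetical and this is not the non-linear scheme evaluated in the paper:

```python
# Generic MIT-rule model-reference adaptive control sketch (first-order plant
# with an unknown gain). Illustrative only; all values are hypothetical.
a = 2.0                  # plant pole (chosen equal to the model pole here)
b = 0.5                  # plant gain, assumed unknown to the controller
am, bm = 2.0, 2.0        # reference model: dym/dt = -am*ym + bm*r
gamma = 5.0              # adaptation gain (hypothetical value)
dt, T = 0.001, 20.0

y = ym = theta = 0.0
for k in range(int(T / dt)):
    t = k * dt
    r = 1.0 if (t % 10.0) < 5.0 else -1.0    # square-wave speed reference
    u = theta * r                             # adjustable feedforward gain
    e = y - ym                                # model-following error
    theta += -gamma * e * ym * dt             # MIT rule: d(theta)/dt = -gamma*e*ym
    y += (-a * y + b * u) * dt                # Euler step of the plant
    ym += (-am * ym + bm * r) * dt            # Euler step of the reference model

print(f"adapted gain theta = {theta:.2f} (ideal feedforward gain bm/b = {bm/b:.2f})")
```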

11.
12.
In this short comment paper, it is proved that the unique theorem of [1] is not valid.

13.
This note extends the discussion of the above paper with respect to the finite time integration method of data preparation for continuous time system identification of linear systems.

14.

Formal concept analysis is a method of exploratory data analysis that aims at the extraction of natural clusters from object-attribute data tables. The clusters, called formal concepts, are naturally interpreted as human-perceived concepts in the traditional sense and can be partially ordered by a subconcept-superconcept hierarchy. The hierarchical structure of formal concepts (the so-called concept lattice) represents structured information obtained automatically from the input data table. The present paper focuses on the analysis of input data with a predefined hierarchy on attributes, thus extending the basic approach of formal concept analysis. The motivation for the present approach derives from the fact that, very often, people (consciously or unconsciously) attach varying importance to attributes, which is then reflected in the conceptual classification based on these attributes. We define the notion of a formal concept respecting the attribute hierarchy. Formal concepts which do not respect the hierarchy are considered not relevant. Elimination of the non-relevant concepts leads to a reduced set of extracted concepts, making the discovered structure of hidden concepts more comprehensible. We present basic formal results on our approach as well as illustrative examples.
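As an illustration of the basic machinery the paper builds on, the following brute-force sketch enumerates the formal concepts of a small object-attribute table; the toy data are invented and the proposed attribute-hierarchy filtering is not implemented here:

```python
# Brute-force enumeration of formal concepts from a small binary
# object-attribute table. The attribute-hierarchy filtering proposed in the
# paper is not implemented; the example data are invented.
from itertools import combinations

# Toy context: objects and the attributes they possess.
table = {
    "frog": {"lives_in_water", "lives_on_land"},
    "fish": {"lives_in_water", "has_fins"},
    "dog":  {"lives_on_land", "has_fur"},
}
attributes = set().union(*table.values())

def common_attributes(objs):
    """Intent: attributes shared by all objects in objs (all attributes if objs is empty)."""
    return set(attributes) if not objs else set.intersection(*(table[o] for o in objs))

def common_objects(attrs):
    """Extent: objects possessing every attribute in attrs."""
    return {o for o, a in table.items() if attrs <= a}

# A pair (extent, intent) is a formal concept iff each part determines the other.
concepts = set()
for r in range(len(table) + 1):
    for objs in combinations(table, r):
        intent = common_attributes(set(objs))
        extent = common_objects(intent)
        concepts.add((frozenset(extent), frozenset(intent)))

for extent, intent in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(extent), "<->", sorted(intent))
```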

15.
16.
17.
Design studies are an integral method of visualization research with hundreds of instances in the literature. Although taught as a theory, the practical implementation of design studies is often excluded from visualization pedagogy due to the lengthy time commitments associated with such studies. Recent research has addressed this challenge and developed an expedited design study framework, the Design Study “Lite” Methodology (DSLM), which can implement design studies with novice students within just 14 weeks. The framework was developed and evaluated over five semesters of in-person data visualization courses with 30 students or fewer and was implemented in conjunction with Service-Learning (S-L). With the growth and popularity of the data visualization field, and the teaching environment created by the COVID-19 pandemic, more academic institutions are offering visualization courses online. Therefore, in this paper, we strengthen and validate the epistemological foundations of the DSLM framework by testing its (1) adaptability to online learning environments and conditions and (2) scalability to larger classes with up to 57 students. We present two online implementations of the DSLM framework, with and without S-L, to test the adaptability and scalability of the framework. We further demonstrate that the framework can be applied effectively without the S-L component. We reflect on our experience with the online DSLM implementations and contribute a detailed retrospective analysis, using thematic analysis and grounded theory methods, to draw valuable recommendations and guidelines for future applications of the framework. This work verifies that DSLM can be used successfully in online classes to teach design study methodology. Finally, we contribute novel additions to the DSLM framework to further enhance it for teaching and learning design studies in the classroom. The preprint and supplementary materials for this paper can be found at https://osf.io/6bjx5/.

18.
Cloud computing is a powerful technology for performing massive-scale and complex computing. It eliminates the need to maintain expensive computing hardware, dedicated space, and software. Massive growth in the scale of data, or big data, generated through cloud computing has been observed. Addressing big data is a challenging and time-demanding task that requires a large computational infrastructure to ensure successful data processing and analysis. The rise of big data in cloud computing is reviewed in this study. The definition, characteristics, and classification of big data are introduced, along with some discussion of cloud computing. The relationship between big data and cloud computing, big data storage systems, and Hadoop technology are also discussed. Furthermore, research challenges are investigated, with a focus on scalability, availability, data integrity, data transformation, data quality, data heterogeneity, privacy, legal and regulatory issues, and governance. Lastly, open research issues that require substantial research efforts are summarized.

19.
20.
More often than not, a newcomer to computer science research, such as an undergraduate or graduate student, will naturally ask for introductory reading on the culture and philosophy of computer science. The book “Out of Their Minds: The Lives and Discoveries of 15 Great Computer Scientists” is a nice book for them.
