Similar Literature
20 similar documents found.
1.
Tenenberg, Roth and Socha (2016) documents interaction within a paired programming task. The analysis rests on a conceptualization the authors term “We-awareness.” “We-awareness”, in turn, builds on Tomasello’s notion of “shared intentionality” and, through it, upon Clark’s formulation of Common Ground (CG). In this commentary I review the features of CG. I attempt to show that neither Tomasello’s (2014) notion of “shared intentionality” nor Clark’s (1996) model of CG-shared develops an adequate treatment of the sequential emergence of subjective meaning. This is a critical problem for CG and for other conceptualizations that build upon it (e.g., “shared intentionality”, “We-awareness”), and it calls into question their usefulness for building an analytic apparatus for studying mutual awareness at the worksite. I suggest that Schütz’s (1953) model of “motive coordination” might serve as a better starting place.

2.
The semantics of progressive sentences presents a challenge to linguists and philosophers alike. According to a widely accepted view, the truth-conditions of progressive sentences rely essentially on a notion of inertia. Dowty (Word meaning and Montague grammar: the semantics of verbs and times in generative grammar and in Montague’s PTQ, D. Reidel Publishing Company, Dordrecht, 1979) suggested inertia worlds to implement this “inertia idea” in a formal semantic theory of the progressive. The main thesis of the paper is that the notion of inertia went through a subtle, but crucial change when worlds were replaced by events in Landman (Nat Lang Semant 1:1–32, 1992) and Portner (Language 74(4):760–787, 1998), and that this new, event-related concept of inertia results in a possibility-based theory of the progressive. An important case in point in the paper is a proof that, despite its surface structure, the theory presented in Portner (1998) does not implement the notion of inertia in Dowty (1979); rather, it belongs together with Dowty’s earlier, 1977 theory according to which the progressive is a possibility operator.  相似文献   

3.
The objective of this paper is to focus on one of the “building blocks” of additive manufacturing technologies, namely selective laser-processing of particle-functionalized materials. Following a series of work in Zohdi (Int J Numer Methods Eng 53:1511–1532, 2002; Philos Trans R Soc Math Phys Eng Sci 361(1806):1021–1043, 2003; Comput Methods Appl Mech Eng 193(6–8):679–699, 2004; Comput Methods Appl Mech Eng 196:3927–3950, 2007; Int J Numer Methods Eng 76:1250–1279, 2008; Comput Methods Appl Mech Eng 199:79–101, 2010; Arch Comput Methods Eng 1–17. doi: 10.1007/s11831-013-9092-6, 2013; Comput Mech Eng Sci 98(3):261–277, 2014; Comput Mech 54:171–191, 2014; J Manuf Sci Eng ASME doi: 10.1115/1.4029327, 2015; CIRP J Manuf Sci Technol 10:77–83, 2015; Comput Mech 56:613–630, 2015; Introduction to computational micromechanics. Springer, Berlin, 2008; Introduction to the modeling and simulation of particulate flows. SIAM (Society for Industrial and Applied Mathematics), Philadelphia, 2007; Electromagnetic properties of multiphase dielectrics: a primer on modeling, theory and computation. Springer, Berlin, 2012), a laser-penetration model, in conjunction with a Finite Difference Time Domain Method using an immersed microstructure method, is developed. Because optical, thermal and mechanical multifield coupling is present, a recursive, staggered, temporally-adaptive scheme is developed to resolve the internal microstructural fields. The time step adaptation allows the numerical scheme to iteratively resolve the changing physical fields by refining the time-steps during phases of the process when the system is undergoing large changes on a relatively small time-scale and can also enlarge the time-steps when the processes are relatively slow. The spatial discretization grids are uniform and dense enough to capture fine-scale changes in the fields. The microstructure is embedded into the spatial discretization and the regular grid allows one to generate a matrix-free iterative formulation which is amenable to rapid computation, with minimal memory requirements, making it ideal for laptop computation. Numerical examples are provided to illustrate the modeling and simulation approach, which by design, is straightforward to computationally implement, in order to be easily utilized by researchers in the field. More advanced conduction models, based on thermal-relaxation, which are a key feature of fast-pulsing laser technologies, are also discussed.  相似文献   
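As an illustration of the temporally adaptive, staggered solution strategy summarized above, the following is a minimal, hedged sketch: it is not Zohdi's actual optical-thermal-mechanical model, and the solver callables, tolerance, and step-control rule are placeholder assumptions.

```python
# Hedged sketch of a recursive, staggered, temporally adaptive scheme:
# each field solver is applied in turn, the coupled pass is repeated until a
# fixed point is reached, and the time step is refined or enlarged depending
# on how hard the fixed point was to reach. Placeholder names throughout.
import numpy as np

def staggered_step(state, dt, solvers, tol=1e-6, max_iters=20):
    """One coupled time step; returns (new_state, iterations, converged)."""
    guess = dict(state)
    for it in range(1, max_iters + 1):
        prev = dict(guess)
        for name, solve in solvers.items():   # e.g. optical -> thermal -> mechanical
            guess[name] = solve(guess, dt)
        err = max(np.linalg.norm(guess[k] - prev[k]) for k in guess)
        if err < tol:
            return guess, it, True
    return guess, max_iters, False

def adaptive_march(state, solvers, t_end, dt0, target_iters=5):
    """March to t_end, shrinking dt during rapid transients and enlarging it otherwise."""
    t, dt = 0.0, dt0
    while t < t_end:
        dt = min(dt, t_end - t)
        new_state, iters, ok = staggered_step(state, dt, solvers)
        if not ok:
            dt *= 0.5          # refine the step and retry during fast transients
            continue
        state, t = new_state, t + dt
        dt *= min(2.0, max(0.5, target_iters / iters))  # enlarge when dynamics are slow
    return state
```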

4.
Several philosophical issues in connection with computer simulations rely on the assumption that results of simulations are trustworthy. Examples of these include the debate on the experimental role of computer simulations (Parker in Synthese 169(3):483–496, 2009; Morrison in Philos Stud 143(1):33–57, 2009), the nature of computer data (Barberousse and Vorms, in: Durán, Arnold (eds) Computer simulations and the changing face of scientific experimentation, Cambridge Scholars Publishing, Barcelona, 2013; Humphreys, in: Durán, Arnold (eds) Computer simulations and the changing face of scientific experimentation, Cambridge Scholars Publishing, Barcelona, 2013), and the explanatory power of computer simulations (Krohs in Int Stud Philos Sci 22(3):277–292, 2008; Durán in Int Stud Philos Sci 31(1):27–45, 2017). The aim of this article is to show that these authors are right in assuming that results of computer simulations are to be trusted when computer simulations are reliable processes. After a short reconstruction of the problem of epistemic opacity, the article elaborates extensively on computational reliabilism, a specified form of process reliabilism with computer simulations located at the center. The article ends with a discussion of four sources for computational reliabilism, namely, verification and validation, robustness analysis for computer simulations, a history of (un)successful implementations, and the role of expert knowledge in simulations.  相似文献   

5.
A social group is a group of interconnected nodes interested in obtaining common content (Scott, in Social network analysis, 2012). Social groups are observed in many networks, for example cellular-network-assisted Device-to-Device networks (Fodor et al., in IEEE Commun Mag 50:170–177, 2012, Lei et al., in Wirel Commun 19:96–104, 2012) and hybrid Peer-to-Peer content distribution (Christos Gkantsidis and Miller, in 5th International Workshop on Peer-to-Peer Systems, 2006, Vakali and Pallis, in IEEE Internet Comput 7:68–74, 2003). In this paper, we consider a “Social Group” of networked nodes seeking a “universe” of data segments to maximize their individual utilities. Each node in the social group has a subset of the universe and access to an expensive link for downloading data. Nodes can also acquire the universe by exchanging copies of data segments among themselves, at low cost, using inter-node links. While exchanges over inter-node links ensure minimal or negligible cost, some nodes in the group try to exploit the system by indulging in collusion, identity fraud, etc. We term such nodes ‘non-reciprocating nodes’ and prohibit such behavior by proposing the “Give-and-Take” criterion, under which an exchange is allowed iff each participating node provides at least one segment that the other node does not already hold. While complying with this criterion, each node wants to maximize its utility, which depends on the set of segments available at that node. Link activation between a pair of nodes requires the mutual consent of the participating nodes. Each node tries to find a pairing partner by preferentially exploring nodes for link formation. Unpaired nodes download data segments using the expensive link with a pre-defined probability (defined as the segment aggressiveness probability). We present various linear-complexity decentralized algorithms based on the Stable Roommates Problem that nodes can use for choosing the best strategy from the available information. We also present a decentralized randomized algorithm that is asymptotically optimal in the number of nodes. We define the Price of Choice for benchmarking the performance of social groups consisting only of non-aggressive nodes (i.e. nodes not downloading data segments from the expensive link). We evaluate the performance of the various algorithms and characterize the behavioral regime that yields the best results for nodes and social groups while spending the least on the expensive link. The proposed algorithms are compared with the optimum, and we find that the Link For Sure algorithm performs nearly optimally.
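The “Give-and-Take” criterion described above admits a very small illustrative check; the sketch below (function and variable names are assumptions, not from the paper) tests whether two nodes may activate a link.

```python
# Minimal sketch of the Give-and-Take feasibility check: a link may be
# activated only if each node can give the other at least one segment the
# other does not yet hold. Segment sets are plain Python sets of segment ids.
def give_and_take_ok(segments_a: set, segments_b: set) -> bool:
    return bool(segments_a - segments_b) and bool(segments_b - segments_a)

# Example: {1, 2} vs {2, 3} -> True (segment 1 flows one way, segment 3 the other);
#          {1, 2} vs {1}    -> False (the second node has nothing new to offer).
```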

6.
Kurt Gödel’s Incompleteness theorem is well known in Mathematics/Logic/Philosophy circles. Gödel was able to find a way, for any given P (UTM) (read as “P of UTM”, for Program of a Universal Truth Machine), actually to write down a complicated polynomial that has a solution iff (= if and only if) G is true, where G stands for a Gödel-sentence. So, if G’s truth is a necessary condition for the truth of a given polynomial, then P (UTM) has to answer first that G is true in order to secure the truth of the said polynomial. But, interestingly, P (UTM) could never answer that G was true. This necessarily implies that there is at least one truth a P (UTM), however large it may be, cannot speak out. Daya Krishna and Karl Potter’s controversy regarding the construal of India’s Philosophies dates back to the time of Potter’s publication of “Presuppositions of India’s Philosophies” (1963, Englewood Cliffs: Prentice-Hall Inc.). In attacking many of India’s philosophies, Daya Krishna appears to have unwittingly touched a crucial point: how can there be knowledge of a ‘non-cognitive’ mokṣa? [‘Mokṣa’ is the final state of existence of an individual away from the Social Contract—see this author’s Indian Social Contract and its Dissolution (2008).] Mokṣa does not permit the knowledge of one’s own self in the ordinary way, with the threefold distinction, i.e., subject–knowledge–object or knower–knowledge–known. But what is important is to demonstrate whether such ‘knowledge’ of the non-cognitive mokṣa state can be logically shown, in a language, to be possible to attain, and that there is no contradiction involved in such a demonstration, because no one can possibly express the ‘experience-itself’ in language. Hence, if such ‘knowledge’ can be shown to be logically not impossible in language, then not only are Daya Krishna’s arguments against ‘non-cognitive mokṣa’ refuted, but it would also show the possibility of achieving ‘completeness’ in its truest sense, as opposed to Gödel’s ‘Incompleteness’. In such circumstances, man would himself become a Universal Truth Machine. This is because the final state of mokṣa is construed in Advaita as the state of complete knowledge. This possibility of ‘completeness’ is set in this paper against the backdrop of Śrī Śaṅkarācārya’s Advaitic (Non-dualistic) claim involved in the mahāvākyas (extra-ordinary propositions). (The mahāvākyas that Śaṅkara refers to are basically taken from different Upaniṣads. For example, “Aham Brahmāsmi” is from the Bṛhadāraṇyaka Upaniṣad, and “Tattvamasi” is from the Chāndogya Upaniṣad. Śrī Śaṅkarācārya has written extensively. His main works include his commentaries on the Brahma-Sūtras, on the major Upaniṣads, and on the Śrīmad Bhagavad Gītā, called their Bhāṣyas, respectively. Almost all these works are available in English translation published by Advaita Ashrama, 5 Dehi Entally Road, Calcutta, 700014.) On the other hand, the ‘Incompleteness’ of Gödel is due to the intervening G-sentence, which has an adverse self-referential element. Gödel’s incompleteness theorem in its mathematical form, with an elaborate introduction by R.W. Braithwaite, can be found in Meltzer (Kurt Gödel: on formally undecidable propositions of Principia Mathematica and related systems. Oliver & Boyd, Edinburgh, 1962). 
The present author believes, first, that semantic content cannot be substituted by any amount of arithmoquining (arithmoquining or arithmetization means, as Braithwaite says, “Gödel’s novel metamathematical method is that of attaching numbers to the signs, to the series of signs (formulae) and to the series of series of signs (“proof-schemata”) which occur in his formal system…Gödel invented what might be called co-ordinate metamathematics…”, Meltzer (1962, p. 7). In Antone (2006) it is said: “The problem is that he (Gödel) tries to replace an abstract version of the number (which can exist) with the concept of a real number version of that abstract notion. We can state the abstraction of what the number needs to be [the arithmoquining of a number cannot be a proof-pair and an arithmoquine], but that is a concept that cannot be turned into a specific number, because by definition no such number can exist.”), especially so where first-hand personal experience is called for. Therefore, what ultimately rules is the semanticity as in a first-hand experience. Similar points are voiced, albeit implicitly, in Antone (Who understands Gödel’s incompleteness theorem, 2006): “…it is so important to understand that Gödel’s theorem only is true with respect to formal systems—which is the exact opposite of the analogous UTM” (Antone (2006), webpage 2). And galatomic says in the same discussion chain that “saying” that it ((is)) only true for formal systems is more significant: “…We only know the world through “formal” categories of understanding… If the world as it is in itself has no incompleteness problem, which I am sure is true, it does not mean much, because that is not the world of time and space that we experience. So it is more significant that formal systems are incomplete than that the inexperiencable ‘World in Itself’ has no such problem.—galatomic” (Antone (2006), webpage 2). Nevertheless, galatomic certainly, though unwittingly, succeeds in highlighting the possibility of experiencing ‘completeness’. Second, even if any formal system, including the system of Advaita of Śaṅkara, is to be subsumed or interpreted under Gödel’s theorem or Tarski’s semantic unprovability theses, the ultimate appeal would lie with human involvement in realizing completeness, since any formal system is always ‘Incomplete’ by its very nature as ‘objectual’, and fails to comprehend the ‘subject’ within its fold.

7.
This research explores and evaluates the contribution that facial expressions might make to improved comprehension and acceptability of sign language avatars. Focusing specifically on Irish Sign Language (ISL), the responsiveness of the Deaf community (the uppercase “D” in the word “Deaf” indicates Deaf as a culture, as opposed to “deaf” as a medical condition) to sign language avatars is examined. The hypothesis is as follows: augmenting an existing avatar with the seven widely accepted universal emotions identified by Ekman (Basic emotions: handbook of cognition and emotion. Wiley, London, 2005) to achieve underlying facial expressions will make that avatar more human-like and improve usability and understandability for the ISL user. Using human evaluation methods (Huenerfauth et al. in Trans Access Comput (ACM) 1:1, 2008), an augmented set of avatar utterances is compared against a baseline set, focusing on two key areas: comprehension and naturalness of facial configuration. The approach to the evaluation, including the choice of ISL participants, the interview environment and the evaluation methodology, is then outlined. The evaluation results reveal that in a comprehension test there was little difference between the baseline avatars and those augmented with emotional facial expression. It was also found that the avatars lack various linguistic attributes.

8.
Recursive partitioning methods are among the most popular techniques in machine learning. The purpose of this paper is to investigate how to adapt this methodology to the bipartite ranking problem. Following in the footsteps of the TreeRank approach developed in Clémençon and Vayatis (Proceedings of the 2008 Conference on Algorithmic Learning Theory, 2008 and IEEE Trans. Inf. Theory 55(9):4316–4336, 2009), we present tree-structured algorithms designed for learning to rank instances based on classification data. The main contributions of the present work are the following: the practical implementation of the TreeRank algorithm, well-founded solutions to the crucial issues related to the splitting rule and the choice of the “right” size for the ranking tree. From the angle embraced in this paper, splitting is viewed as a cost-sensitive classification task with data-dependent cost. Hence, up to straightforward modifications, any classification algorithm may serve as a splitting rule. Also, we propose to implement a cost-complexity pruning method after the growing stage in order to produce a “right-sized” ranking sub-tree with large AUC. In particular, performance bounds are established for pruning schemes inspired by recent work on nonparametric model selection. Eventually, we propose indicators for variable importance and variable dependence, plus various simulation studies illustrating the potential of our method.  相似文献   
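A hedged sketch of the tree-growing idea is given below. It treats each split as a cost-sensitive classification problem by reweighting the locally observed positives and negatives and delegating the split to an off-the-shelf classifier; the stump-based splitter, depth cap, and interval scoring are simplifications for illustration, not the exact TreeRank procedure or its pruning stage.

```python
# Simplified TreeRank-style ranking tree: splitting as cost-sensitive
# classification with data-dependent class weights; left children collect the
# instances predicted "positive" and therefore rank higher.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def grow_ranking_tree(X, y, depth=0, max_depth=3):
    """X: (n, d) array, y: (n,) array of 0/1 labels. Returns a nested dict."""
    node = {"n_pos": int(y.sum()), "n_neg": int((1 - y).sum())}
    if depth == max_depth or y.min() == y.max():
        return node                                     # leaf
    # Data-dependent cost: balance the local positive and negative masses.
    w = np.where(y == 1, 0.5 / max(y.mean(), 1e-9), 0.5 / max(1 - y.mean(), 1e-9))
    splitter = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
    go_left = splitter.predict(X) == 1
    if go_left.all() or (~go_left).all():
        return node                                     # degenerate split -> leaf
    node["split"] = splitter
    node["left"] = grow_ranking_tree(X[go_left], y[go_left], depth + 1, max_depth)
    node["right"] = grow_ranking_tree(X[~go_left], y[~go_left], depth + 1, max_depth)
    return node

def ranking_score(node, x, lo=0.0, hi=1.0):
    """Score by the sub-interval of the leaf an instance falls into (higher = ranked first)."""
    if "split" not in node:
        return (lo + hi) / 2.0
    mid = (lo + hi) / 2.0
    if node["split"].predict(x.reshape(1, -1))[0] == 1:
        return ranking_score(node["left"], x, mid, hi)
    return ranking_score(node["right"], x, lo, mid)
```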

9.
XGC1 and M3D-\(C^1\) are two fusion plasma simulation codes being developed at Princeton Plasma Physics Laboratory. XGC1 uses the particle-in-cell method to simulate gyrokinetic neoclassical physics and turbulence (Chang et al. Phys Plasmas 16(5):056108, 2009; Ku et al. Nucl Fusion 49:115021, 2009; Adams et al. J Phys 180(1):012036, 2009). M3D-\(C^1\) solves the two-fluid resistive magnetohydrodynamic equations with \(C^1\) finite elements (Jardin J Comput Phys 200(1):133–152, 2004; Jardin et al. J Comput Phys 226(2):2146–2174, 2007; Ferraro and Jardin J Comput Phys 228(20):7742–7770, 2009; Jardin J Comput Phys 231(3):832–838, 2012; Jardin et al. Comput Sci Discov 5(1):014002, 2012; Ferraro et al. Sci Discov Adv Comput, 2012; Ferraro et al. International Sherwood Fusion Theory Conference, 2014). This paper presents the software tools and libraries that were combined to form the geometry and automatic meshing procedures for these codes. Specific consideration has been given to satisfying the mesh configuration and element shape quality constraints of XGC1 and M3D-\(C^1\).

10.
As a framework for simple but basic statistical inference problems, we introduce the generic Most Likely Solution problem: the task of finding a most likely solution (MLS for short) for a given problem instance under some given probability model. Although many MLS problems are NP-hard, we propose to study their average-case complexity under the assumed probability models. We present three examples of MLS problems and show that “message passing algorithms” (e.g., belief propagation) work reasonably well for these problems. Some of the technical results of this paper are from the author’s recent work (Watanabe and Yamamoto in Lecture Notes in Computer Science, vol. 4142, pp. 277–282, 2006, and Onsjö and Watanabe in Lecture Notes in Computer Science, vol. 4288, pp. 507–516, 2006).

11.
The chemical reaction network has been a model of interest to both theoretical and applied computer scientists, and there has been concern about its physical realism, which calls for study of the atomic property of chemical reaction networks. Informally, a chemical reaction network is “atomic” if each reaction may be interpreted as the rearrangement of indivisible units of matter. There are several reasonable definitions formalizing this idea. We investigate the computational complexity of deciding whether a given network is atomic according to each of these definitions. Primitive atomicity, which requires each reaction to preserve the total number of atoms, is shown to be equivalent to mass conservation. Since it is known that it can be decided in polynomial time whether a given chemical reaction network is mass-conserving (Mayr and Weihmann, in: International conference on applications and theory of Petri nets and concurrency, Springer, New York, 2014), the equivalence we show gives an efficient algorithm to decide primitive atomicity. Subset atomicity further requires that all atoms be species, so intuitively this type of network is endowed with a “better” property than primitive atomic (i.e. mass-conserving) ones in the sense that the atoms are not just abstract indivisible units, but actual participants in reactions. We show that deciding if a network is subset atomic is in \({\mathsf{NP}}\), and that “whether a network is subset atomic with respect to a given atom set” is strongly \({\mathsf{NP}}\)-\({\mathsf {complete}}\). Reachable atomicity, studied by Adleman et al. (On the mathematics of the law of mass action, Springer, Dordrecht, 2014.  https://doi.org/10.1007/978-94-017-9041-3_1) and Gopalkrishnan (2016), further requires that each species have a sequence of reactions splitting it into its constituent atoms. Using a combinatorial argument, we show that there is a polynomial-time algorithm to decide whether a given network is reachably atomic, improving upon the result of Adleman et al. that the problem is decidable. We show that the reachability problem for reachably atomic networks is \({\mathsf {PSPACE}}\)-\({\mathsf {complete}}\). Finally, we demonstrate equivalence relationships between our definitions and some cases of an existing definition of atomicity due to Gnacadja (J Math Chem 49(10):2137, 2011).
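Because primitive atomicity coincides with mass conservation, it can be decided as a linear feasibility problem. The sketch below is an assumed, illustrative formulation (not the Mayr-Weihmann algorithm): each species receives an unknown mass of at least one unit, and every reaction must conserve total mass.

```python
# Hedged sketch: deciding mass conservation (equivalently, primitive
# atomicity) via linear programming. Reactions are (reactants, products)
# lists of species indices, with multiplicity given by repetition.
import numpy as np
from scipy.optimize import linprog

def is_mass_conserving(n_species, reactions):
    S = np.zeros((len(reactions), n_species))          # net stoichiometry, one row per reaction
    for r, (reactants, products) in enumerate(reactions):
        for s in reactants:
            S[r, s] -= 1
        for s in products:
            S[r, s] += 1
    # Feasibility: S @ m == 0 with m >= 1 (strict positivity up to scaling).
    res = linprog(c=np.ones(n_species), A_eq=S, b_eq=np.zeros(len(reactions)),
                  bounds=[(1, None)] * n_species, method="highs")
    return res.success

# X + Y -> Z admits positive masses (e.g. 1, 1, 2); X -> X + X does not.
print(is_mass_conserving(3, [([0, 1], [2])]))   # True
print(is_mass_conserving(1, [([0], [0, 0])]))   # False
```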

12.
In climate-economic modelling, agent-based models are still an exception. Although numerous authors have discussed the usefulness of the approach, only a few models exist. The paper proposes an update to a multi-agent climate-economic model, namely the “battle of perspectives” (Janssen, 1996; Janssen and de Vries 1998). The approach of the paper is twofold. First, the reimplementation of the model follows the “model to model” concept. Supporters of the approach argue that replication is a useful way to check a model’s accuracy and robustness. Second, updating a model with current data and new scientific evidence is a robustness check in itself. The long-term validity and usefulness of a model depends on the variability of the data on which it is based, as well as on the model’s sensitivity to data changes. By offering this update, the paper contributes to the development of agent-based models in climate-economics. Acknowledging evolutionary processes in climate-policy represents a useful complement to intertemporal cost-benefit analyses, the latter of which derive optimal protection paths but are not able to explain why people do not follow them. Since the replication and update succeeded, the paper recommends using the model as a basis for further analysis.  相似文献   

13.
In this paper we study an economy with a high degree of financialization in which (non-financial) firms need loans from commercial banks to finance production, service debt, and make long-term investments. Along the business cycle, the economy follows a Minsky base cycle with firms traversing the various stages of financial fragility, i.e. hedge, speculative and Ponzi finance (cf., Minsky in The financial instability hypothesis: a restatement. Hyman P Minsky archive paper, vol 180, pp 541–552, 1978; Stabilizing an unstable economy. Yale University Press, 2nd edn 2008, McGraw-Hill, New York, 1986; The financial instability hypothesis. Economics working paper archive wp74. The Jerome Levy Economics Institute of Bard College, 1992). In the speculative financial stage, cash flows are insufficient to finance the repayment of principal but sufficient for paying interest, so banks are willing to roll over credits in order to prevent loan defaults. In the Ponzi financial position even interest payments cannot be met, but banks may still be willing to keep firms alive through “extend and pretend” loans, also known as zombie lending (Caballero et al. in Am Econ Rev 98(5):1943–1977, 2008). This lending behavior may cause credit bubbles with increasing leverage ratios. Empirical evidence suggests that recessions following such leveraging booms are more severe and can be associated with higher economic costs (Jordà et al. in J Money Credit Bank 45(s2):3–28, 2013; Schularick and Taylor in Am Econ Rev 102(2):1029–1061, 2012). We study macroprudential regulations aimed at: (i) the prevention and mitigation of credit bubbles, (ii) ensuring macro-financial stability, and (iii) limiting the ability of banks to create unsustainable debt bubbles. Our results show that limiting credit growth by using a non-risk-weighted capital ratio has slightly positive effects, while using loan eligibility criteria such as cutting off funding to all financially unsound firms (speculative and Ponzi) has strong positive effects.
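The Minsky classification used in the model can be illustrated with a small hedged sketch; the variable names and flow-based thresholds are assumptions for exposition, not the paper's calibration.

```python
# Illustrative classification of a firm's financial fragility stage.
def minsky_position(cash_flow, interest_due, principal_due):
    if cash_flow >= interest_due + principal_due:
        return "hedge"        # can service both interest and principal
    if cash_flow >= interest_due:
        return "speculative"  # interest covered; principal must be rolled over
    return "ponzi"            # not even interest covered ("extend and pretend" territory)

# Example: minsky_position(100, 30, 50) -> "hedge"
#          minsky_position(40, 30, 50)  -> "speculative"
#          minsky_position(10, 30, 50)  -> "ponzi"
```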

14.
We introduce a family of generalized prolate spheroidal wave functions (PSWFs) of order \(-1,\) and develop new spectral schemes for second-order boundary value problems. Our technique differs from the differentiation approach based on PSWFs of order zero in Kong and Rokhlin (Appl Comput Harmon Anal 33(2):226–260, 2012); in particular, our orthogonal basis can naturally include homogeneous boundary conditions without the re-orthogonalization of Kong and Rokhlin (2012). More notably, it leads to diagonal systems or direct “explicit” solutions to 1D Helmholtz problems in various situations. Using a rule optimally pairing the bandwidth parameter and the number of basis functions as in Kong and Rokhlin (2012), we demonstrate that the new method significantly outperforms the Legendre spectral method in approximating highly oscillatory solutions. We also conduct a rigorous error analysis of this new scheme. The idea and analysis can be extended to generalized PSWFs of negative integer order for higher-order boundary value and eigenvalue problems.  相似文献   

15.
Tempered fractional diffusion equations (TFDEs) involving tempered fractional derivatives on the whole space were first introduced in Sabzikar et al. (J Comput Phys 293:14–28, 2015), but only the finite-difference approximation to a truncated problem on a finite interval was proposed therein. In this paper, we rigorously show the well-posedness of the models in Sabzikar et al. (2015), and tackle them directly in infinite domains by using generalized Laguerre functions (GLFs) as basis functions. We define a family of GLFs and derive some useful formulas of tempered fractional integrals/derivatives. Moreover, we establish the related GLF-approximation results. In addition, we provide ample numerical evidences to demonstrate the efficiency and “tempered” effect of the underlying solutions of TFDEs.  相似文献   

16.
The Chinese stock market has a large proportion of retail investors, which makes it significantly different from the stock markets in the US and Europe. Momentum profits are known to exist in the latter markets, as shown by applying Jegadeesh and Titman’s (J Financ 48:65–91, 1993) model with 6-month formation and holding periods. However, there are only a few studies on momentum profits in China. Therefore, this study examines whether the Shanghai and Shenzhen stock markets produce momentum profits. We find that these two markets have significant contrarian, but not momentum, profits. We also create an “artificial momentum” portfolio and follow Bhattacharya et al. (Account Rev 78:641–678, 2003) to compute transparency indices. Our results show that the corporate transparencies of the winners (losers) in the artificial momentum portfolios are close to those in the commonly defined momentum portfolios. The averages of the decile transparencies are between 4.5 and 6.5, not only for the top 10% of winners but also for the bottom 10% of losers. Based on these results, we suggest that financial transparency is irrelevant to the inertia and reversal of stock prices in the Shanghai and Shenzhen stock markets.
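A schematic sketch of the Jegadeesh-Titman style test is shown below for orientation; it uses equal-weighted decile portfolios and non-overlapping holding periods, which is a simplification of the study's actual design (function and parameter names are assumptions).

```python
# Hedged sketch of a 6-month formation / 6-month holding momentum strategy:
# a positive average spread indicates momentum, a negative one contrarian profits.
import pandas as pd

def momentum_spread(prices: pd.DataFrame, formation=6, holding=6, decile=0.1):
    """prices: monthly prices (rows = month-ends, columns = stocks)."""
    spreads = []
    for t in range(formation, len(prices) - holding, holding):
        past = prices.iloc[t] / prices.iloc[t - formation] - 1      # formation-period returns
        ranked = past.dropna().sort_values()
        n = max(int(len(ranked) * decile), 1)
        losers, winners = ranked.index[:n], ranked.index[-n:]
        future = prices.iloc[t + holding] / prices.iloc[t] - 1      # holding-period returns
        spreads.append(future[winners].mean() - future[losers].mean())
    return pd.Series(spreads)
```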

17.
It has been just over 100 years since the birth of Alan Turing and more than 65 years since he published in Mind his seminal paper, Computing Machinery and Intelligence (Turing in Computing machinery and intelligence. Oxford University Press, Oxford, 1950). In the Mind paper, Turing asked a number of questions, including whether computers could ever be said to have the power of “thinking” (“I propose to consider the question, Can computers think?” ...Alan Turing, Computing Machinery and Intelligence, Mind, 1950). Turing also set up a number of criteria—including his imitation game—under which a human could judge whether a computer could be said to be “intelligent”. Turing’s paper, as well as his important mathematical and computational insights of the 1930s and 1940s led to his popular acclaim as the “Father of Artificial Intelligence”. In the years since his paper was published, however, no computational system has fully satisfied Turing’s challenge. In this paper we focus on a different question, ignored in, but inspired by Turing’s work: How might the Artificial Intelligence practitioner implement “intelligence” on a computational device? Over the past 60 years, although the AI community has not produced a general-purpose computational intelligence, it has constructed a large number of important artifacts, as well as taken several philosophical stances able to shed light on the nature and implementation of intelligence. This paper contends that the construction of any human artifact includes an implicit epistemic stance. In AI this stance is found in commitments to particular knowledge representations and search strategies that lead to a product’s successes as well as its limitations. Finally, we suggest that computational and human intelligence are two different natural kinds, in the philosophical sense, and elaborate on this point in the conclusion.  相似文献   

18.
The intuitionistic fuzzy set is capable of handling uncertainty together with the counterpart falsities that exist in nature. A proximity measure is a convenient way to demonstrate the impractical significance of membership values in the intuitionistic fuzzy set. However, the related works of Pappis (Fuzzy Sets Syst 39(1):111–115, 1991), Hong and Hwang (Fuzzy Sets Syst 66(3):383–386, 1994), Virant (2000) and Cai (IEEE Trans Fuzzy Syst 9(5):738–750, 2001) did not model the measure in the context of the intuitionistic fuzzy set, but in Zadeh’s fuzzy set instead. In this paper, we examine this problem and propose new notions of δ-equalities for the intuitionistic fuzzy set and δ-equalities for intuitionistic fuzzy relations. Two fuzzy sets are said to be δ-equal if they are equal to an extent of δ. The applications of δ-equalities are important to fuzzy statistics and fuzzy reasoning. Several characteristics of δ-equalities that were not discussed in previous works are also investigated. We apply δ-equalities to medical diagnosis to investigate a patient’s diseases from symptoms. The idea is to use δ-equalities for intuitionistic fuzzy relations to find groups of intuitionistic fuzzified sets with certain equality or similarity degrees and then to combine them. Numerical examples are given to illustrate the validity of the proposed algorithm. Further, we conduct experiments on real medical datasets to check the efficiency and applicability to real-world problems. The results obtained are also better in comparison with 10 existing diagnosis methods, namely De et al. (Fuzzy Sets Syst 117:209–213, 2001), Samuel and Balamurugan (Appl Math Sci 6(35):1741–1746, 2012), Szmidt and Kacprzyk (2004), Zhang et al. (Procedia Eng 29:4336–4342, 2012), Hung and Yang (Pattern Recogn Lett 25:1603–1611, 2004), Wang and Xin (Pattern Recogn Lett 26:2063–2069, 2005), Vlachos and Sergiadis (Pattern Recogn Lett 28(2):197–206, 2007), Zhang and Jiang (Inf Sci 178(6):4184–4191, 2008), Maheshwari and Srivastava (J Appl Anal Comput 6(3):772–789, 2016) and the Support Vector Machine (SVM).
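One natural way to compute a degree of equality is sketched below. It follows Cai's fuzzy-set convention (two sets are δ-equal when the supremum distance between membership values is at most 1 − δ), extended to intuitionistic fuzzy sets by also comparing non-membership values; the paper's precise definition may differ, so this is an illustrative assumption.

```python
# Hedged sketch: largest δ for which two intuitionistic fuzzy sets are δ-equal.
def delta_equality(ifs_a, ifs_b):
    """ifs_a, ifs_b: dicts mapping each element of a common universe to a
    (membership, non_membership) pair. Returns a value in [0, 1]."""
    worst = 0.0
    for x in ifs_a:
        mu_a, nu_a = ifs_a[x]
        mu_b, nu_b = ifs_b[x]
        worst = max(worst, abs(mu_a - mu_b), abs(nu_a - nu_b))
    return 1.0 - worst

# Example (symptom universe): identical sets give δ = 1, very different ones approach δ = 0.
a = {"fever": (0.8, 0.1), "cough": (0.4, 0.5)}
b = {"fever": (0.7, 0.2), "cough": (0.5, 0.4)}
print(delta_equality(a, b))   # 0.9
```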

19.
What does it take to implement a computer? Answers to this question have often focused on what it takes for a physical system to implement an abstract machine. As Joslin (Minds Mach 16:29–41, 2006) observes, this approach neglects cases of software implementation—cases where one machine implements another by running a program. These cases, Joslin argues, highlight serious problems for mapping accounts of computer implementation—accounts that require a mapping between elements of a physical system and elements of an abstract machine. The source of these problems is the complexity introduced by common design features of ordinary computers, features that would be relevant to any real-world software implementation (e.g., virtual memory). While Joslin is focused on contemporary views, his discussion also suggests a counterexample to recent mapping accounts which hold that genuine implementation requires simple mappings (Millhouse in Br J Philos Sci, 2017.  https://doi.org/10.1093/bjps/axx046; Wallace in The emergent multiverse, Oxford University Press, Oxford, 2014). In this paper, I begin by clarifying the nature of software implementation and disentangling it from closely related phenomena like emulation and simulation. Next, I argue that Joslin overstates the degree of complexity involved in his target cases and that these cases may actually give us reasons to favor simplicity-based criteria over relevant alternatives. Finally, I propose a novel problem for simplicity-based criteria and suggest a tentative solution.  相似文献   

20.
The aim of Content-based Image Retrieval (CBIR) is to find a set of images that best match the query based on visual features. Most existing CBIR systems find similar images using low-level features, while Text-based Image Retrieval (TBIR) systems find images with relevant tags regardless of the contents of the images. Generally, people are more interested in images with similarity in both contours and high-level concepts. Therefore, we propose a new strategy called Iterative Search to meet this requirement. It mines knowledge from the images similar to the original query in order to compensate for the information missing in the feature extraction process. To evaluate the performance of the Iterative Search approach, we apply this method to four different CBIR systems (HOF Zhou et al. in ACM international conference on multimedia, 2012; Zhou and Zhang in Neural information processing—international conference, ICONIP 2011, Shanghai, 2011, HOG Dalal and Triggs in IEEE computer society conference on computer vision pattern recognition, 2005, GIST Oliva and Torralba in Int J Comput Vision 42:145–175, 2001 and CNN Krizhevsky et al. in Adv Neural Inf Process Syst 25:2012, 2012) in our experiments. The results show that Iterative Search improves the performance of the original CBIR features by about \(20\%\) on both the Oxford Buildings dataset and the Object Sketches dataset. Meanwhile, it is not restricted to any particular visual features.
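The Iterative Search strategy can be pictured with the following minimal sketch: retrieve with the original query feature, then re-query with a feature enriched by the top results. The averaging-based expansion, feature normalization and parameter names are assumptions for illustration; the paper's mining step may be more elaborate.

```python
# Hedged sketch of Iterative Search on top of any fixed CBIR feature.
import numpy as np

def retrieve(query_vec, db, top_k=10):
    """db: (n_images, d) matrix of L2-normalized features; returns indices by cosine similarity."""
    sims = db @ query_vec / (np.linalg.norm(query_vec) + 1e-12)
    return np.argsort(-sims)[:top_k]

def iterative_search(query_vec, db, rounds=2, top_k=10, expand=5):
    q = query_vec.astype(float).copy()
    for _ in range(rounds):
        top = retrieve(q, db, top_k)
        q = q + db[top[:expand]].mean(axis=0)   # mine knowledge from the nearest neighbours
        q /= np.linalg.norm(q) + 1e-12
    return retrieve(q, db, top_k)
```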
