Similar Documents
20 similar documents found (search time: 31 ms)
1.
2.
This paper proposes the topology design and kinematic optimization of a cyclical 5-degree-of-freedom (DoF) parallel manipulator with a properly constrained limb. First, a class of cyclical 5-DoF parallel manipulators with a properly constrained limb is proposed by analyzing the DoF of the constrained limb within the workspace. Taking a cyclical 5-DoF parallel manipulator with the topology 4-UPS&1-RPS as an example, its motion mapping model is formulated. By treating the reciprocal product of a wrench and a twist as the generalized virtual power, local and global kinematic performance indices are defined. Then, based on the actuated and constrained singularity analysis of the 4-UPS&1-RPS parallel manipulator over the position and orientation workspace, a singularity-free topology design of the manipulator is carried out, and its reachable and prescribed workspaces are obtained. Finally, by maximizing the global kinematic performance index subject to a set of appropriate constraints, the kinematic optimal design of the 4-UPS&1-RPS parallel manipulator is performed using the genetic algorithm from the MATLAB optimization toolbox.
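For illustration, the following is a minimal sketch (not the authors' code) of the final step: genetic-algorithm maximization of a global performance index over bounded design parameters, as a Python stand-in for the MATLAB `ga` call. The objective `global_performance_index` is a hypothetical placeholder for the wrench/twist virtual-power index described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def global_performance_index(x):
    # Hypothetical placeholder: in the paper this would be the global
    # kinematic index computed from the generalized virtual power over
    # the prescribed workspace.
    return -np.sum((x - 0.5) ** 2)

def ga_maximize(f, bounds, pop=40, gens=200, mut=0.05):
    lo, hi = bounds[:, 0], bounds[:, 1]
    X = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        fit = np.array([f(x) for x in X])
        parents = X[np.argsort(fit)[::-1][: pop // 2]]   # keep the fittest half
        i, j = rng.integers(0, len(parents), (2, pop - len(parents)))
        mask = rng.random((pop - len(parents), X.shape[1])) < 0.5
        kids = np.where(mask, parents[i], parents[j])     # uniform crossover
        kids = kids + rng.normal(0.0, mut, kids.shape) * (hi - lo)  # mutation
        X = np.clip(np.vstack([parents, kids]), lo, hi)   # respect bounds
    return X[np.argmax([f(x) for x in X])]

# Four hypothetical geometric design parameters, each bounded to [0.2, 1.0] m.
best = ga_maximize(global_performance_index, np.tile([0.2, 1.0], (4, 1)))
print("optimized design parameters:", best)
```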

3.
David Kahn, Cryptologia, 2013, 37(1): 26–31
This paper was presented for discussion at the Baltimore-Washington German History Seminar, 8 November 1980, Towson State University, Towson, Maryland. It is a slightly revised version of a portion of “Codebreaking in World Wars I and II: The Major Successes and Failures, Their Causes and Their Effects,” The Historical Journal, 23 (September 1980), 617–639, which lists all sources. The technical reasons given below owe a great deal to the kind help of Dr. Cipher A. Deavours.

4.
Two algorithms for solving the piecewise linear least-squares approximation problem of plane curves are presented. The first is for the case when the L2 residual (error) norm in any segment is not to exceed a pre-assigned value. The second algorithm is for the case when the number of segments is given and a (balanced) L2 residual norm solution is required. The given curve is first digitized, and either algorithm is then applied to the discrete points. For each segment, we obtain the upper triangular matrix R in the QR factorization of the (augmented) coefficient matrix of the resulting system of linear equations. The least-squares solutions are calculated in terms of the R (and Q) matrices. The algorithms then work iteratively, updating the least-squares solutions for the segments by updating the R matrices, so that the calculation requires as little computational effort as possible. Numerical results and comments are given. This, in a way, is a tutorial paper.
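A minimal sketch of the first algorithm's idea, under simplifying assumptions (straight-line segments, points already ordered along the curve): extend each segment until its L2 residual norm would exceed the pre-assigned tolerance, then start a new one. The paper updates R incrementally; for brevity this sketch re-factorizes each window.

```python
import numpy as np

def piecewise_fit(x, y, tol):
    segments, start = [], 0
    while start < len(x) - 1:
        end, coef = start + 2, None           # a line needs at least 2 points
        while end <= len(x):
            A = np.column_stack([x[start:end], np.ones(end - start)])
            Q, R = np.linalg.qr(A)            # QR of the coefficient matrix
            c = np.linalg.solve(R, Q.T @ y[start:end])
            r = np.linalg.norm(A @ c - y[start:end])
            if r > tol and coef is not None:
                break                         # residual exceeded: close segment
            coef, end = c, end + 1
        segments.append((start, end - 1, coef))  # half-open index range
        start = end - 1
    return segments

x = np.linspace(0, 4, 50)
y = np.abs(x - 2.0)                           # a V-shaped test curve
for s, e, c in piecewise_fit(x, y, tol=0.05):
    print(f"points [{s},{e}): slope={c[0]:.3f}, intercept={c[1]:.3f}")
```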

5.
A freely available data processor for the Basic ERS & ENVISAT (A)ATSR and MERIS Toolbox (BEAM) was developed to retrieve atmospheric and oceanic properties above and of Case-2 waters from Medium Resolution Imaging Spectrometer (MERIS) Level 1b data. The processor was designed specifically for European coastal waters and uses MERIS Level 1b Top-Of-Atmosphere (TOA) radiances to retrieve atmospherically corrected remote sensing reflectances at the Bottom-Of-Atmosphere (BOA), spectral aerosol optical thicknesses (AOT) and the concentrations of three water constituents, namely chlorophyll-a (CHL), total suspended matter (TSM) and the absorption of yellow substance at 443 nm (YEL). The retrieval is based on four separate artificial neural networks trained on the results of extensive radiative transfer (RT) simulations taking various atmospheric and oceanic conditions into account. The accuracy of the retrievals was assessed by comparison with concurrent in situ ground measurements and has been published in full detail elsewhere. For the remote sensing reflectance product a mean absolute percentage error (MAPE) of 18% was derived within the spectral range 412.5–708.75 nm, while the accuracy of the AOT at 550 nm in terms of MAPE was calculated to be 40%. The accuracies for CHL, TSM and YEL were derived from match-up analysis, with MAPEs of 50%, 60% and 71%, respectively.
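For reference, a short sketch of the MAPE metric used for all the accuracy figures above; the match-up values are hypothetical, not the paper's data.

```python
import numpy as np

def mape(retrieved, in_situ):
    retrieved = np.asarray(retrieved, dtype=float)
    in_situ = np.asarray(in_situ, dtype=float)
    # Mean absolute percentage error, relative to the ground measurement.
    return 100.0 * np.mean(np.abs((retrieved - in_situ) / in_situ))

# Hypothetical chlorophyll-a match-ups (mg m^-3): satellite vs. in situ.
print(f"CHL MAPE: {mape([1.2, 3.9, 0.4], [0.9, 2.1, 0.5]):.0f}%")  # ~46%
```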

6.
Most existing work on data stream classification assumes that the streaming data are precise and definite. This assumption, however, does not always hold in practice, since data uncertainty is ubiquitous in data stream applications owing to imprecise measurement, missing values, privacy protection, etc. The goal of this paper is to learn accurate decision tree models from uncertain data streams for classification analysis. Building on very fast decision tree (VFDT) algorithms, we propose an algorithm for constructing an uncertain VFDT tree with classifiers at the tree leaves (uVFDTc). The uVFDTc algorithm exploits uncertain information effectively and efficiently in both the learning and the classification phases. In the learning phase, it uses the Hoeffding bound to learn from uncertain data streams quickly and yield reasonable decision trees. In the classification phase, it uses uncertain naive Bayes (UNB) classifiers at the tree leaves to improve classification performance. Experimental results on both synthetic and real-life datasets demonstrate the strong ability of uVFDTc to classify uncertain data streams. The use of UNB at the tree leaves improves the performance of uVFDTc, especially its any-time property, the benefit it draws from uncertain information, and its robustness against uncertainty.
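A minimal sketch of the Hoeffding-bound split test at the heart of VFDT-style learners such as uVFDTc; the gain values below are illustrative, not from the paper.

```python
import math

def hoeffding_bound(R, delta, n):
    """With probability 1 - delta, the observed mean of a variable with
    range R lies within this epsilon of the true mean after n samples."""
    return math.sqrt(R * R * math.log(1.0 / delta) / (2.0 * n))

# Split decision: split when the gap between the best and second-best
# attribute's observed information gain exceeds epsilon.
best_gain, second_gain, n = 0.32, 0.25, 500
eps = hoeffding_bound(R=1.0, delta=1e-6, n=n)  # R = log2(2) for 2-class gain
if best_gain - second_gain > eps:
    print(f"split now (gap {best_gain - second_gain:.3f} > eps {eps:.3f})")
else:
    print(f"keep streaming examples (eps still {eps:.3f})")
```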

7.
In this paper, the structural stiffness of a new 3-axis asymmetric planar parallel manipulator with a 2PRR–PPR kinematic chain is analyzed. The manipulator is proposed as a tool holder for a 5-axis hybrid computer numerical control (CNC) machine. First, the structure of the robot is introduced and the inverse kinematics solution is presented. Second, the stiffness matrix of the robot is determined using a continuous method based on Castigliano’s theorem and the calculation of the strain energy of the robot’s components. This method removes the need for commonly used simplifying assumptions and therefore yields good accuracy. For this purpose, the force and strain energy of each segment of the robot are analyzed. Finally, to verify the analytical results, commercial FEM software is used to simulate the physical structure of the manipulator. A numerical example is presented which confirms the correctness of the analytical formulations.
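A toy sketch (not the paper's manipulator model) of the principle the continuous method builds on: Castigliano's second theorem, where the deflection at a load equals the partial derivative of total strain energy U with respect to that load. Here U is the strain energy of two hypothetical axially loaded segments, differentiated numerically; assembling compliance entries this way and inverting yields a stiffness matrix.

```python
segments = [  # (length m, Young's modulus Pa, cross-section m^2), hypothetical
    (0.30, 210e9, 4e-4),
    (0.20, 210e9, 2e-4),
]

def strain_energy(F):
    # U = sum of F^2 L / (2 E A) over segments carrying the same axial force F.
    return sum(F * F * L / (2.0 * E * A) for L, E, A in segments)

F, h = 1000.0, 1e-3
deflection = (strain_energy(F + h) - strain_energy(F - h)) / (2 * h)  # dU/dF
stiffness = F / deflection  # scalar stiffness along the load direction
print(f"deflection: {deflection * 1e6:.2f} um, stiffness: {stiffness / 1e6:.1f} N/um")
```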

8.
Book Reviews     
As the results of man-engineered experiments with social design, social “revolution”, socialist “architectures”, and other feats of “social engineering” are crumbling down, they are causing large-scale human suffering through their failures. There is a renewed awareness that the self-organizing and spontaneous properties of complex social systems are much too powerful (and much too vulnerable at the same time) to respond or be exposed to the endless, reductionistic “tinkering” of policy “makers”, “scientists” of the artificial, and “engineers of human souls”. (Geoffrey Vickers, in his Human Systems are Different, ends up expressing a similar awareness: “It [his book] does not offer a blueprint for the design of human societies. I hope it will weaken the contemporary urge to regard such activities as akin to engineering rather than (at most) to gardening.”) Mankind is again ready to learn how to “trigger”, “catalyze”, “sustain” and “lead-manage” a spontaneous process of social self-organization; it is becoming less inclined to design another “central super-controller”, “information-processing command system”, or “World Brain”.

The purpose of this paper is to show: (1) that theories dealing with “spontaneous social orders” have deep historical roots, and (2) that the systems sciences are in a good position (better than economics, engineering or sociology) to build upon these roots and expand these theories into useful, practical methodologies. For example, modern theories of autopoiesis and of order through fluctuations, especially their rich, computer-based simulation experiments, provide a good and solid point of departure.


9.
Tenenberg, Roth and Socha (2016) document interaction within a paired-programming task. Their analysis rests on a conceptualization the authors term “We-awareness”. “We-awareness”, in turn, builds on Tomasello’s notion of “shared intentionality” and, through it, on Clark’s formulation of Common Ground (CG). In this commentary I review the features of CG. I attempt to show that neither Tomasello’s (2014) notion of “shared intentionality” nor Clark’s (1996) model of CG-shared develops an adequate treatment of the sequential emergence of subjective meaning. This is a critical problem for CG and for other conceptualizations that build upon it (e.g., “shared intentionality”, “We-awareness”), and it calls into question their usefulness for building an analytic apparatus for studying mutual awareness at the worksite. I suggest that Schütz’s (1953) model of “motive coordination” might serve as a better starting place.

10.
11.

The wide use of Notes in business, science, education, news, etc. renders Notes attractive steganographic carriers and allows the communicating parties to establish a covert channel capable of transmitting messages in an unsuspicious way. The presented Notes-based Steganography Methodology (Notestega) takes advantage of recent advances in automatic notetaking techniques to generate a text cover. Notestega neither exploits noise (errors) to embed a message nor produces detectable noise. Instead, it pursues the variations among both human notes and the outputs of automatic notetaking techniques to conceal data. This is accomplished in three steps. First, Notestega generates a number of legitimate, distinct notes of the same input. Second, based on a predetermined protocol, it picks a particular note, for example note number 1, 2, or 5. Third, Notestega substitutes some of the text in the selected note with text taken from the unpicked notes. Such text substitution is done carefully to avoid introducing a suspicious pattern while embedding a message. Unlike machine translation or automatic summarization, automatic notetaking can embed elements not directly related to its output, including linguistic elements (e.g., sentences, words, or abbreviations) and nonlinguistic elements (e.g., lines, stars, arrows, or symbols), and thus the generated note cover (text cover) has ample room for concealing data. The presented implementation and steganalysis validation of Notestega demonstrate distinct capabilities of achieving the steganographic goal, adequate room for concealing data, and a bitrate of roughly 7.777%, superior to contemporary text steganography approaches.
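A toy sketch of the three steps under strong simplifying assumptions: the "notes" are hand-made variants here (a real system would generate them with automatic notetaking), the shared protocol picks a note by index, and each substitution site hides one bit by choosing between the picked note's phrase and a phrase from an unpicked note.

```python
# Step 1: several legitimate notes of the same input (hypothetical examples).
notes = [
    ["mtg 3pm", "budget: cut 10%", "action: email team"],
    ["meeting at 3", "reduce budget 10%", "todo: notify team"],
]

def embed(bits, pick=0):
    # Step 2: the shared protocol picks one note (index `pick`).
    # Step 3: each bit chooses between the picked note's phrase (0) and the
    # corresponding phrase from an unpicked note (1).
    other = notes[1 - pick]
    return [other[i] if bit else phrase
            for i, (phrase, bit) in enumerate(zip(notes[pick], bits))]

def extract(cover, pick=0):
    return [int(p != notes[pick][i]) for i, p in enumerate(cover)]

cover = embed([1, 0, 1])
print(cover)                      # still reads as an ordinary note
assert extract(cover) == [1, 0, 1]
```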

12.
13.
This article presents nearly 10 years' worth of System Usability Scale (SUS) data collected on numerous products in all phases of the development lifecycle. The SUS, developed by Brooke (1996), reflected a strong need in the usability community for a tool that could quickly and easily collect a user's subjective rating of a product's usability. The data in this study indicate that the SUS fulfills that need. Results from the analysis of this large number of SUS scores show that the SUS is a highly robust and versatile tool for usability professionals. The article presents these results and discusses their implications, describes nontraditional uses of the SUS, explains a proposed modification to the SUS that provides an adjective rating correlating with a given score, and details what constitutes an acceptable SUS score.
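For readers unfamiliar with it, a sketch of the standard SUS scoring rule that the article's dataset rests on: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is scaled by 2.5 to give a 0-100 score.

```python
def sus_score(responses):
    # Ten 1-5 Likert responses, item 1 first.
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    odd = sum(r - 1 for r in responses[0::2])    # items 1, 3, 5, 7, 9
    even = sum(5 - r for r in responses[1::2])   # items 2, 4, 6, 8, 10
    return 2.5 * (odd + even)

print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 2]))  # 80.0
```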

14.
15.
Clark’s query evaluation procedure for computing negative information in deductive databases using a “negation as failure” inference rule requires a safe computation rule, which may select negative literals only if they are ground. This is a very restrictive condition, which weakens the usefulness of negation as failure in a query evaluation procedure. This paper studies the definition and properties of the “not” predicate defined in most Prolog systems, which do not enforce the above-mentioned condition of a safe computation rule. We show that negation in clauses and the “not” predicate of Prolog are not the same; in fact, a Prolog program may not be in clause form. An extended query evaluation procedure with an extended safe computation rule is proposed to evaluate queries that involve the “not” predicate. The soundness and completeness of this extended query evaluation procedure with respect to a class of logic programs are proved. Such an extended query evaluation procedure can be implemented in a Prolog system by a preprocessor for executing range-restricted programs, requiring no modification to the interpreter/compiler of an existing Prolog system. We compare the proposed procedure with the extended program proposed by Lloyd and Topor and with the negation constructs in NU-Prolog. The use of the “not” predicate for integrity constraint checking in deductive databases is also presented.
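A toy sketch of the safety condition at issue, in Python rather than Prolog: under a safe computation rule, a negation-as-failure query is only sound to evaluate once its argument is ground (variable-free). This is an illustration of the ground check, not a Prolog implementation.

```python
facts = {("parent", "tom", "bob"), ("parent", "bob", "ann")}

def is_ground(goal):
    # Convention for this toy: arguments starting with an uppercase letter
    # are variables, so a ground goal contains none of them.
    return not any(isinstance(t, str) and t[:1].isupper() for t in goal[1:])

def naf(goal):
    """Negation as failure: succeed iff the *ground* goal is not provable."""
    if not is_ground(goal):
        raise ValueError(f"floundering: {goal} selected while non-ground")
    return goal not in facts

print(naf(("parent", "ann", "tom")))   # True: unprovable, so "not" succeeds
try:
    naf(("parent", "X", "tom"))        # a safe rule must postpone this literal
except ValueError as err:
    print(err)
```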

16.
While Moore’s law states that the number of transistors approximately doubles every 2 years, powering these transistors simultaneously is only possible as long as Dennard scaling continues. Unfortunately, voltage scaling has slowed down in recent years, and microprocessor designers have hit what is known as the “utilization wall” or the “dark silicon” effect. Vectorization, parallelization, specialization and heterogeneity are the key approaches to dealing with this utilization wall. However, how software developers can maximize the energy efficiency of these architectures remains an open question. This paper presents an energy evaluation of parallelization using both physical and logical cores (i.e., SMT/Hyper-Threading), vectorization (SSE, Advanced Vector Extensions and NEON) and dynamic core reconfiguration (Intel® Turbo Boost Technology, TBT). The evaluation spans microprocessors for the embedded, laptop, desktop and server markets, since there is a convergence among them towards energy efficiency. The analyzed processors include Intel’s Core™ i5 and i7 families and ARM®’s Cortex™ A9 and A15. Results show that software developers should prioritize vectorization over thread parallelism when possible, as it yields better energy efficiency, especially on the Intel platforms. Application scalability can drop drastically when vectorization and threading are used simultaneously, since vectorization increases pressure on the memory subsystem. Intel’s TBT further improves energy efficiency by an additional 10–20% depending on the number of active threads.
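A small sketch of the prioritize-vectorization point: the same reduction written as an interpreted loop and as a vectorized (SIMD-backed) NumPy call. Wall-clock time is used here as a rough proxy; the paper measured energy directly on specific hardware.

```python
import time
import numpy as np

a = np.random.rand(10_000_000).astype(np.float32)

t0 = time.perf_counter()
s_loop = 0.0
for v in a[:1_000_000]:          # only 1/10 of the data, and still far slower
    s_loop += float(v) * float(v)
t_loop = time.perf_counter() - t0

t0 = time.perf_counter()
s_vec = float(np.dot(a, a))      # vectorized: SSE/AVX/NEON via the BLAS backend
t_vec = time.perf_counter() - t0

print(f"loop (10% of data): {t_loop:.3f}s, vectorized (all data): {t_vec:.3f}s")
```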

17.
The Informational Nature of Personal Identity (total citations: 1; self-citations: 0; citations by others: 1)
In this paper, I present an informational approach to the nature of personal identity. In “Plato and the problem of the chariot”, I use Plato’s famous metaphor of the chariot to introduce a specific problem regarding the nature of the self as an informational multiagent system: what keeps the self together as a whole and coherent unity? In “Egology and its two branches” and “Egology as synchronic individualisation”, I outline two branches of the theory of the self: one concerning the individualisation of the self as an entity, the other concerning the identification of such an entity. I argue that both presuppose an informational approach, defend the view that the individualisation of the self is logically prior to its identification, and suggest that such individualisation can be provided in informational terms. Hence, in “A reconciling hypothesis: the three membranes model”, I offer an informational individualisation of the self, based on a tripartite model, which can help to solve the problem of the chariot. Once this model of the self is outlined, in “ICTs as technologies of the self”, I use it to show how ICTs may be interpreted as technologies of the self. In “The logic of realisation”, I introduce the concept of “realisation” (Aristotle’s anagnorisis) and support the rather Spinozian view according to which, from the perspective of informational structural realism, selves are the final stage in the development of informational structures. The final section, “Conclusion: from the egology to the ecology of the self”, briefly concludes the article with a reference to the purposeful shaping of the self, in a shift from egology to ecology.

18.
The use of Genetic Algorithms (GAs) in economic simulation models was introduced by Holland (Holland, 1988) and justified by pointing out the similarities between economics, game theory, control theory and evolutionary genetics: “It is traditional in economics to carry out this characterization by attaching a utility to the various states of the environment. The role of utility in economics is quite similar to the role of payoff in game theory, the error function in control theory, fitness in evolutionary genetics, and so on.” (Holland, 1988, p. 120). Unfortunately, such a justification is not sufficient in the long run, as similarities between different terms do not necessarily justify the transfer of methods and mechanisms from one domain to the other. The justification problem might be regarded as aggravated by the fact that some researchers, among others (Paul, 1993) and (Boehme, 1993), have been using Genetic Algorithms in a straightforward manner within economic simulations, whereas Holland’s original use of GAs is restricted to rule generation within Classifier Systems. What is therefore required is a justification based on economic theory as a foundation for such simulations.

19.
Baddeley's (1986) working memory model suggests that spatial imagery information and verbal information can be held concurrently in different subsystems. This research proposes a method to present textual information with network relationships in a “graphics + voice” format, especially for small screens. It was hypothesized that this dual-modal presentation would result in superior comprehension performance and higher acceptance than a purely textual display. An experiment was carried out to test this hypothesis with analytical problems from the Graduate Record Examination. Thirty individuals participated. The results indicate that users' performance and acceptance were improved significantly by the “graphics + voice” presentation. The article concludes with a discussion of the implications and limitations of the findings for future research in multimodal interface design.

20.
A social group is a group of interconnected nodes interested in obtaining common content (Scott, in Social Network Analysis, 2012). Social groups are observed in many networks, for example cellular-network-assisted device-to-device networks (Fodor et al., in IEEE Commun Mag 50:170–177, 2012; Lei et al., in Wirel Commun 19:96–104, 2012) and hybrid peer-to-peer content distribution (Christos Gkantsidis and Miller, in 5th International Workshop on Peer-to-Peer Systems, 2006; Vakali and Pallis, in IEEE Internet Comput 7:68–74, 2003). In this paper, we consider a “social group” of networked nodes seeking a “universe” of data segments to maximize their individual utilities. Each node in the social group has a subset of the universe and access to an expensive link for downloading data. Nodes can also acquire the universe by exchanging copies of data segments among themselves, at low cost, using inter-node links. While exchanges over inter-node links ensure minimal or negligible cost, some nodes in the group try to exploit the system through collusion, identity fraud, etc. We term such nodes ‘non-reciprocating nodes’ and prohibit such behavior by proposing the “Give-and-Take” (GT) criterion, under which an exchange is allowed iff each participating node provides at least one segment that the other node lacks. While complying with this criterion, each node wants to maximize its utility, which depends on the set of segments available at the node. Link activation between a pair of nodes requires the mutual consent of the participating nodes. Each node tries to find a pairing partner by preferentially exploring nodes for link formation. Unpaired nodes download data segments over the expensive link with a pre-defined probability (the segment aggressiveness probability). We present various linear-complexity decentralized algorithms, based on the Stable Roommates Problem, that nodes can use to choose the best strategy given the available information. We also present a decentralized randomized algorithm that is asymptotically optimal in the number of nodes. We define the Price of Choice for benchmarking the performance of social groups consisting only of non-aggressive nodes (i.e., nodes not downloading data segments over the expensive link). We evaluate the performance of the various algorithms and characterize the behavioral regime that yields the best results for nodes and social groups while spending the least on the expensive link. The proposed algorithms are compared with the optimal; we find that the Link For Sure algorithm performs nearly optimally.
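A minimal sketch of the GT criterion as stated above: two nodes may activate a link iff each can provide at least one segment the other lacks. Segment sets are plain Python sets; the assumption that paired nodes swap everything they hold is a simplification for illustration.

```python
def gt_allowed(a, b):
    # GT criterion: each side must offer at least one segment the other lacks.
    return bool(a - b) and bool(b - a)

def exchange(a, b):
    # Simplifying assumption: paired nodes exchange copies of all segments.
    if gt_allowed(a, b):
        return a | b, b | a
    return a, b                          # no mutual benefit: link not activated

n1, n2, n3 = {1, 2, 3}, {3, 4}, {1, 2, 3, 4}
print(gt_allowed(n1, n2))                # True: n1 offers 1 or 2, n2 offers 4
print(gt_allowed(n1, n3))                # False: n3 gains nothing from n1
print(exchange(n1, n2))                  # ({1, 2, 3, 4}, {1, 2, 3, 4})
```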
