Similar Documents
20 similar documents found.
1.
Applications of bicriteria linear programming to classification and selection are discussed. The use of the bicriteria classification and selection model is illustrated with genotype selection in a corn-breeding problem and feature selection in a pattern-recognition problem.
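As a rough illustration of how such a bicriteria model can be explored in practice, the sketch below scalarizes two linear objectives with a sweep of trade-off weights and solves each weighted problem with SciPy's linprog. The objective vectors and the single constraint are made-up toy data, not the paper's genotype or feature-selection model.

```python
# A minimal sketch of weighted-sum scalarization for a bicriteria LP,
# in the spirit of the classification/selection models discussed above.
# The objective vectors c1, c2 and the constraint are made-up toy data.
import numpy as np
from scipy.optimize import linprog

c1 = np.array([1.0, 2.0])        # first criterion (e.g., misclassification cost)
c2 = np.array([3.0, 1.0])        # second criterion (e.g., selection cost)
A_ub = np.array([[-1.0, -1.0]])  # encodes x1 + x2 >= 1
b_ub = np.array([-1.0])

# Sweep the trade-off weight to trace out efficient (Pareto) solutions.
for w in (0.0, 0.25, 0.5, 0.75, 1.0):
    c = w * c1 + (1.0 - w) * c2
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
    print(f"w={w:.2f}  x={res.x}  f1={c1 @ res.x:.2f}  f2={c2 @ res.x:.2f}")
```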

2.
The conventional high-level programming languages used in physics, most notably FORTRAN, combine a high degree of insecurity with a lack of tools. With more modern languages, such as Pascal or Ada, a toolset is quite often supplied for the development and analysis of programs. Most useful are a traceback and symbolic dump facility, cross-reference generators, either at the level of individual variables or at the level of procedures and functions, and a source-code (symbolic) debugger for the dynamic analysis of errors. A “profiler” that generates statement-execution frequencies serves in program optimization.

3.
Object-oriented programming has become a widely used, important programming paradigm that is supported in many different languages. C++ has become the most widely used object-oriented language, and many C++ programmers are unfamiliar with the different approaches taken by other languages in the paradigm. This paper is intended as an introduction to a broad range of ideas in object-oriented programming. Specifically, we introduce four modern programming languages that support object-oriented programming (Oberon-2, Modula-3, Sather and Self) and show how a simple application is coded in each. While each of these languages provides support for inheritance, dynamic dispatch, code reuse, and information hiding, they do so in very different ways and with varying levels of efficiency and simplicity. The use of a simple example, based on a common programming problem, facilitates our comparison. We have coded the application in all of these languages, including C++, and we compare the compile times, object code sizes, and run times of the available implementations. Implementations of all the languages compared and all of the programs we measure are available on the Internet. Ultimately, our goal is to encourage and help programmers to understand and explore a variety of object-oriented programming languages.
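The abstract does not reproduce the paper's example program, but the features it compares can be illustrated in a few lines. The hypothetical Shape/Circle/Square sketch below shows inheritance, dynamic dispatch, code reuse, and (conventional) information hiding in Python.

```python
# A minimal Python sketch of the object-oriented features the paper compares:
# inheritance, dynamic dispatch, code reuse, and information hiding.
# The Shape/Circle/Square example is illustrative, not the paper's benchmark.
import math

class Shape:
    def __init__(self, name):
        self._name = name          # leading underscore: conventional hiding

    def describe(self):            # reused unchanged by every subclass
        return f"{self._name}: area={self.area():.2f}"

    def area(self):
        raise NotImplementedError

class Circle(Shape):
    def __init__(self, r):
        super().__init__("circle")
        self._r = r

    def area(self):
        return math.pi * self._r ** 2

class Square(Shape):
    def __init__(self, s):
        super().__init__("square")
        self._s = s

    def area(self):
        return self._s ** 2

# Dynamic dispatch: the call site is the same, the method is chosen at run time.
for shape in (Circle(1.0), Square(2.0)):
    print(shape.describe())
```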

4.
This research enables computer-literate engineers to model problems in software while minimizing the code they need to write. Software development is difficult for many engineers, as they may lack the time, experience, or access to software development tools necessary to model their problems. Using a combination of modelling via formulae (equations) and visualisation of the way these formulae interact, it is possible to construct modelling software without writing code. This technique of user-driven modelling/programming (UDM/P) could be applied to any problem that requires linked equations to be represented and tracked, and their results calculated. End-user programming could be tackled by many researchers co-operating to create specific solutions to different kinds of end-user programming problems. A stepped, ontology-based translation process assists with progress towards a generic solution; this is first applied to engineering modelling.
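A minimal sketch of the linked-equations idea, assuming a hypothetical FormulaModel API invented for illustration: formulae are declared as data rather than code, dependencies are followed at evaluation time, and a change to one input propagates to every dependent result.

```python
# A minimal sketch of user-driven modelling: formulae are declared as data,
# dependencies between them are tracked, and results are recomputed on demand.
# The model and variable names are hypothetical illustrations.
class FormulaModel:
    def __init__(self):
        self.values = {}     # name -> constant input
        self.formulas = {}   # name -> function of the model

    def set(self, name, value):
        self.values[name] = value

    def define(self, name, func):
        self.formulas[name] = func

    def get(self, name):
        # Inputs win; otherwise evaluate the linked formula recursively.
        if name in self.values:
            return self.values[name]
        return self.formulas[name](self.get)

model = FormulaModel()
model.set("length", 4.0)
model.set("width", 3.0)
model.define("area", lambda get: get("length") * get("width"))
model.define("cost", lambda get: 12.5 * get("area"))  # cost per unit area
print(model.get("cost"))  # 150.0 -- changing "width" would propagate here
```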

5.
This paper analyzes the differences between numeric and symbolic approaches to inductive inference. It shows the importance of existing structures in the acquisition of further knowledge, including statistical confirmation. We present a new way of looking at Hempel's paradox, in which both existing structures and statistical confirmation play a role in reducing the harm it does to learning. We point out some of the most important structures, and we illustrate how uncertainty blurs but does not destroy them. We conclude that neither pure symbolic nor pure statistical learning is realistic; the integration of the two points of view is the key to future progress, but it is far from trivial. Our system KBG is a first-order logic conceptual clustering system; it builds knowledge structures out of unrelated examples. We describe the choices made in KBG to build these structures, using both numeric and symbolic types of knowledge. Our argument gives us firm grounds to contradict Carnap's view that induction is nothing but uncertain deduction, and to propose a refinement of Popper's purely deductive view of the growth of science. In our view, the progressive organization of knowledge plays an essential role in the growth of new (inductive) scientific theories that will be confirmed later, quite in the Popperian way.

6.
We study the effect of adding a rule to a rule-based heuristic classification expert system, in particular a rule that causes an unforeseen interaction with rules already in the rule set. We show that such an interaction can occur between sets of rules even when no interaction is present between any pair of rules contained in those sets. A method is presented that identifies interactions between sets of rules, and an analysis is given which relates these interactions to rule-based programming practices that help maintain the integrity of the knowledge base. We argue that the method is practical, given some reasonable assumptions on the knowledge base.
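As a simplified, hypothetical illustration (not the paper's detection method), the sketch below forward-chains a rule set before and after a rule is added; any difference in the derived conclusions signals an interaction with the existing rules.

```python
# A simplified, hypothetical sketch of detecting whether adding a rule to a
# forward-chaining rule set changes the derived conclusions -- an interaction.
# This illustrates the general idea only, not the method of the paper.
def forward_chain(rules, facts):
    """Apply rules (premises, conclusion) until no new fact is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

base_rules = [({"a"}, "b"), ({"b"}, "c")]
new_rule = ({"c"}, "d")

before = forward_chain(base_rules, {"a"})
after = forward_chain(base_rules + [new_rule], {"a"})
print("new conclusions introduced by the added rule:", after - before)  # {'d'}
```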

7.
The expression ‘the culture of the artificial’ results from the confusion between nature and culture: when nature mingles with culture to produce the ‘artificial’, science becomes ‘the science of the artificial’. Artificial intelligence can thus be defined as the ultimate expression of the crisis affecting the very foundation of the system of legitimacy in Western society, i.e. Reason, and more precisely Scientific Reason. The discussion focuses on the emergence of the culture of the artificial and the radical forms of pragmatism, sophism and marketing from a French philosophical perspective. The paper suggests that in the postmodern age of ‘the crisis of the systems of legitimacy’, the question of the social acceptability of any action, especially actions arising from the application of AI, cannot be avoided.

8.
Traditional High-Performance Computing (HPC) based big-data applications are usually constrained by having to move large amounts of data to compute facilities for real-time processing. Modern HPC systems, represented by High-Throughput Computing (HTC) and Many-Task Computing (MTC) platforms, instead aim to achieve the long-held goal of moving compute to data. This kind of data-aware scheduling, typified by Hadoop MapReduce, has been successfully implemented in the Map phase, where each Map task is sent to the compute node holding the corresponding input data chunk. However, Hadoop MapReduce limits itself to a one-map-to-one-reduce framework, which makes complex logic such as pipelines or workflows difficult to express. It also lacks built-in support and optimization for input datasets shared among multiple applications and/or jobs. Performance can be improved significantly when knowledge of the shared and frequently accessed data is taken into scheduling decisions.

To enhance workflow management in modern HPC systems, this paper presents CloudFlow, a Hadoop MapReduce based programming model for cloud workflow applications. CloudFlow is built on top of MapReduce and is designed to be not only data-aware but also shared-data-aware. It identifies the most frequently shared data, at both the task and job level, and replicates them to each compute node for data locality. It also supports user-defined multiple Map and Reduce functions, allowing users to orchestrate the required data-flow logic. We prove the correctness of the whole scheduling framework through theoretical analysis. Furthermore, experimental evaluation shows that execution runtime speedup exceeds 4X compared to a traditional MapReduce implementation, with a manageable time overhead.
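A hypothetical sketch of the shared-data-aware idea described above: chunk access frequency is counted across jobs, hot chunks are replicated to every node, and map tasks are then placed where their input already resides. The names, threshold, and data are illustrative, not CloudFlow's actual interfaces.

```python
# A hypothetical sketch of the shared-data-aware idea behind CloudFlow:
# chunks requested by many jobs are replicated to every node, and each map
# task is then scheduled onto a node that already holds its input chunk.
from collections import Counter

nodes = {"n1": {"c1"}, "n2": {"c2"}, "n3": {"c3"}}   # node -> local chunks
jobs = [["c1", "c2"], ["c1", "c3"], ["c1"]]          # job -> input chunks

# Identify frequently shared chunks across jobs; replicate them everywhere.
freq = Counter(chunk for job in jobs for chunk in job)
HOT_THRESHOLD = 2
for chunk, count in freq.items():
    if count >= HOT_THRESHOLD:
        for local in nodes.values():
            local.add(chunk)      # "c1" becomes locally available on all nodes

def place(chunk):
    """Prefer a node that already stores the chunk (data locality)."""
    for name, local in nodes.items():
        if chunk in local:
            return name
    return min(nodes)             # fall back: remote read from some node

for j, job in enumerate(jobs):
    print(f"job {j}:", {chunk: place(chunk) for chunk in job})
```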

9.
Current literature on organizational learning tends to be theoretically fragmented, drawing on analogies to individual learning theory or simply using organizational learning as an umbrella concept for many different kinds of organizational change or adaptation. This paper introduces a framework for the analysis of organizations as knowledge systems (Holzner & Marx, 1979) composed of a collection of knowledge processes: constructing, organizing, storing, distributing, and applying. The knowledge system framework draws heavily on the sociology of knowledge and emphasizes the social nature of each of these constitutive processes. The paper uses the framework to analyze the case of a small engineering consulting company that implemented a new information system to automate one of its core business activities: energy audits of commercial buildings. Traditional approaches to organizational learning have emphasized the ways in which information systems can lower the costs and increase capacity for search, storage, and retrieval of information. The knowledge system framework suggests a deeper level of influence, whereby information systems can also affect the objects of knowledge and the criteria for knowledge construction.

10.
A component interaction behavior monitor based on dynamic AOP
In an open, dynamic network environment, distributed software exhibits large scale, loose coupling, and complex behavior. To monitor its interaction behavior effectively, a monitor model based on dynamic AOP is proposed, which allows the monitor to be integrated into the target system in a more flexible, loosely coupled, and transparent way. Using a dynamic weaving mechanism, monitors can be added or removed while the target system is running, improving the dynamism of monitoring. On this basis, the monitor was implemented and applied to a distributed e-commerce application…
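A toy analogue of dynamic weaving in Python (the paper targets distributed component systems; the class and method names here are hypothetical): an advice function is woven around a method while the program runs and can be removed again without a restart.

```python
# A toy analogue of the paper's dynamic-AOP monitor: an "advice" function is
# woven around a target method at run time and can be removed again without
# restarting the system. Class and method names are hypothetical.
import functools

class OrderService:
    def checkout(self, amount):
        return f"charged {amount}"

def weave(cls, method_name, advice):
    """Dynamically wrap cls.method_name with a monitoring advice."""
    original = getattr(cls, method_name)

    @functools.wraps(original)
    def woven(self, *args, **kwargs):
        advice(method_name, args, kwargs)   # before-advice: observe the call
        return original(self, *args, **kwargs)

    woven.__original__ = original
    setattr(cls, method_name, woven)

def unweave(cls, method_name):
    """Remove the monitor, restoring the original method."""
    setattr(cls, method_name, getattr(cls, method_name).__original__)

log = []
weave(OrderService, "checkout", lambda name, a, kw: log.append((name, a)))
OrderService().checkout(42)   # monitored while the system keeps running
unweave(OrderService, "checkout")
OrderService().checkout(7)    # no longer monitored
print(log)                    # [('checkout', (42,))]
```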

11.
Documents are the products of deliberate social and cultural activities of people and are included in the subject matter of many scientific disciplines. The formation of the concept of a “document” as an interdisciplinary category is one of the problems that the epistemology of documents deals with. The historical method of epistemology applies to the evolution of document communication; to represent the stages of its development, the concept of an episteme is used. Common interpretations of documents are considered which have interdisciplinary significance and favor the understanding of the document as a category of science as a whole. It is shown that the development of a general definition of the document cannot be considered a problem of any particular documentation theory; it is a problem for a unifying theory (metatheory), called documentology in contemporary investigations.

12.
BSPlib: The BSP programming library
BSPlib is a small communications library for bulk synchronous parallel (BSP) programming which consists of only 20 basic operations. This paper presents the full definition of BSPlib in C, motivates the design of its basic operations, and gives examples of their use. The library enables programming in two distinct styles: direct remote memory access (DRMA) using put or get operations, and bulk synchronous message passing (BSMP). Currently, implementations of BSPlib exist for a variety of modern architectures, including massively parallel computers with distributed memory, shared memory multiprocessors, and networks of workstations. BSPlib has been used in several scientific and industrial applications; this paper briefly describes applications in benchmarking, Fast Fourier Transforms (FFTs), sorting, and molecular dynamics.
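BSPlib itself is defined in C; as a language-neutral illustration of the superstep structure it supports (local computation, buffered communication, barrier synchronization), the sketch below mimics put/sync semantics with Python threads. The put and barrier names echo BSPlib's DRMA style but are not its actual API.

```python
# A sketch of the BSP superstep structure: local compute, buffered "put"
# communication, then a barrier after which communication is delivered.
# This mimics the style of BSPlib's put/bsp_sync, not the library itself.
import threading

NPROCS = 4
barrier = threading.Barrier(NPROCS)
inbox = [[] for _ in range(NPROCS)]      # per-process delivery buffers

def put(dest_pid, value):
    inbox[dest_pid].append(value)        # buffered until the next sync

def worker(pid, results):
    put((pid + 1) % NPROCS, pid * pid)   # superstep 1: compute, communicate
    barrier.wait()                       # sync: messages are now delivered
    results[pid] = sum(inbox[pid])       # superstep 2: use received data

results = [None] * NPROCS
threads = [threading.Thread(target=worker, args=(p, results)) for p in range(NPROCS)]
for t in threads: t.start()
for t in threads: t.join()
print(results)   # each pid holds the square sent by its left neighbour
```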

13.
In this paper, the foundations for setting up a knowledge industry are laid. First, it is established that this industry constitutes the only way of making use of the huge amounts of knowledge produced as a result of the introduction of the Science-Technology binomial in postindustrial society. Then, the elements which will lead to such an industry are defined, that is, the resources and means. Under the ‘Means’ section, special emphasis is placed on the processes involved, in other words, inference methods and commonsense reasoning. Finally, it is concluded that the establishment of this industry, called mindfacturing because of the raw material that it processes and uses, is not only possible but desirable, provided that the precautions outlined in the epilogue are taken.

14.
Linear programming problems represent the most thoroughly analyzed and widely solved class of parameter optimization problems. In Part II, we shall restrict our attention to this general class of problems. The characteristics of the admissible region are investigated and established. The Kuhn-Tucker conditions developed in Part I are applied to establish necessary and sufficient conditions that must be satisfied at a minimum. Included in the discussion is a consideration of dual linear programming problems. Then, we direct our attention to the question of determining the solution of specific problems. A general algorithm known as the Simplex Method is described and applied to several examples.
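A small worked example in the spirit of the text: a toy primal LP and its dual, solved with SciPy's linprog, showing that the two optima coincide (strong duality). The numbers are illustrative only.

```python
# A small worked LP and its dual. The numbers are toy data.
#   minimize    2*x1 + 3*x2
#   subject to  x1 + x2 >= 4,  x1 >= 0,  x2 >= 0
import numpy as np
from scipy.optimize import linprog

c = np.array([2.0, 3.0])
A_ub = np.array([[-1.0, -1.0]])   # -(x1 + x2) <= -4  encodes  x1 + x2 >= 4
b_ub = np.array([-4.0])

primal = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)

# Dual:  maximize 4*y  subject to  y <= 2, y <= 3, y >= 0
# (linprog minimizes, so negate the objective.)
dual = linprog([-4.0], A_ub=[[1.0], [1.0]], b_ub=[2.0, 3.0], bounds=[(0, None)])

print("primal optimum:", primal.fun)    # 8.0 at x = (4, 0)
print("dual optimum:  ", -dual.fun)     # 8.0 -- strong duality holds
```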

15.
Many discrete optimization problems can be formulated either as integer linear programming (ILP) problems or as constraint satisfaction problems. Although ILP methods appear to be more powerful, constraint programming can sometimes solve these problems more quickly. This paper describes a problem in which the difference in performance between the two approaches was particularly marked, since no solution could be found using ILP.

The problem arose in the context of organizing a progressive party at a yachting rally. Some yachts were to be designated hosts; the crews of the remaining yachts would then visit the hosts for six successive half-hour periods. A guest crew could not revisit the same host, and two guest crews could not meet more than once. Additional constraints were imposed by the capacities of the host yachts and the crew sizes of the guests.

Integer linear programming formulations that included all the constraints resulted in very large models, and despite several different strategies, all attempts to find a solution failed. Constraint programming, tried instead, solved the problem very quickly with a little manual assistance. Reasons for the success of constraint programming on this problem are identified and discussed.
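A heavily scaled-down sketch of the problem as a CSP solved by plain backtracking, with toy sizes (three hosts, three guest crews, two periods) rather than the rally's actual instance. It encodes the no-revisit, meet-at-most-once, and capacity constraints described above.

```python
# A toy progressive party instance as a CSP solved by plain backtracking:
# assign each guest crew a host per period, never revisiting a host, never
# exceeding capacity, and never letting two crews meet twice.
from itertools import product

HOSTS, GUESTS, PERIODS = ["h1", "h2", "h3"], ["g1", "g2", "g3"], 2
CAPACITY = {"h1": 2, "h2": 1, "h3": 1}   # crews a host can take at once

def ok(assign):
    # assign maps (guest, period) -> host for the slots fixed so far
    for g in GUESTS:                      # no guest revisits a host
        hosts = [assign[g, t] for t in range(PERIODS) if (g, t) in assign]
        if len(hosts) != len(set(hosts)):
            return False
    met = set()
    for t in range(PERIODS):              # capacity and meet-at-most-once
        for h in HOSTS:
            crew = [g for g in GUESTS if assign.get((g, t)) == h]
            if len(crew) > CAPACITY[h]:
                return False
            for pair in product(crew, crew):
                if pair[0] < pair[1]:
                    if pair in met:
                        return False
                    met.add(pair)
    return True

def solve(assign, slots):
    if not slots:
        return assign
    slot, rest = slots[0], slots[1:]
    for h in HOSTS:
        assign[slot] = h
        if ok(assign) and (sol := solve(assign, rest)):
            return sol
        del assign[slot]
    return None

slots = [(g, t) for t in range(PERIODS) for g in GUESTS]
print(solve({}, slots))
```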

16.
17.
The separation principle: A programming paradigm
In the development of any technology, there is always a tendency to lose sight of the basic problems that stimulated its introduction in the first place. Technologies in software construction are no exception. One of the essential parts of software construction is the underlying programming paradigm. In the last few decades, different software construction paradigms have evolved - for example, structured programming in the 1970s and object-oriented programming in the 1980s. Although OOP dominates current software construction, it introduces several problems that raise questions about its understandability and efficiency. W. Cave (1995) was the first to propose and develop the separation principle as a programming paradigm. One major contribution is that it moves most complex issues related to data sharing, parameter passing, and scope into a programming model in which simple graphics can represent access to data. The separation principle can simplify or even eliminate these issues. By expressing access permissions in simple, canonical drawings, the separation principle makes it easy to comprehend the full range of relationships between data and instructions, and thus the code's structure. A simple way to show connectivity, a key property affecting program understandability, is by separating data from instructions. Two of Prediction Systems' flagship products, the visual software environment (VSE) and the general simulation system (GSS), embody this idea. This paper discusses the separation principle in the context of conventional languages such as C and C++.
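A hypothetical Python sketch of the core idea (not how VSE or GSS actually work): data lives apart from instructions, and each routine receives only the access permissions it declares, making the data/instruction relationships visible at the call site.

```python
# A hypothetical sketch of the separation principle's core idea: data lives
# apart from instructions, and each routine declares which data it may touch.
class DataStore:
    def __init__(self, **fields):
        self._fields = dict(fields)

    def view(self, readable=(), writable=()):
        """Hand a routine an object exposing only its declared permissions."""
        store = self
        class View:
            def read(self, name):
                assert name in readable or name in writable, f"no read: {name}"
                return store._fields[name]
            def write(self, name, value):
                assert name in writable, f"no write access to {name}"
                store._fields[name] = value
        return View()

store = DataStore(balance=100, rate=0.05)

def accrue_interest(data):               # permissions visible at the call site
    data.write("balance", data.read("balance") * (1 + data.read("rate")))

accrue_interest(store.view(readable=("rate",), writable=("balance",)))
print(store._fields["balance"])          # 105.0
# accrue_interest(store.view(readable=("rate",)))  # would fail: no write
```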

18.
Ronald F. Brender, Software, 2002, 32(10): 955-981
The BLISS programming language was invented by William A. Wulf and others at Carnegie-Mellon University in 1969, originally for the DEC PDP-10. BLISS-10 caught the interest of Ronald F. Brender of DEC (Digital Equipment Corporation). After several years of collaboration, including the creation of BLISS-11 for the PDP-11, BLISS was adopted as DEC's implementation language for use on its new line of VAX computers in 1975. DEC developed a completely new generation of BLISSes for the VAX, PDP-10 and PDP-11, which became widely used at DEC during the 1970s and 1980s. With the creation of the Alpha architecture in the early 1990s, BLISS was extended again, in both 32- and 64-bit flavors. BLISS support for the Intel IA-32 architecture was introduced in 1995, and IA-64 support is now in progress. BLISS has a number of unusual characteristics: it is typeless, requires use of an explicit contents-of operator (written as a period or ‘dot’), takes an algorithmic approach to data structure definition, has no goto, is an expression language, and has an unusually rich compile-time language. This paper reviews the evolution and use of BLISS over its three-decade lifetime. Emphasis is on how the language evolved to facilitate portable programming while retaining its initially highly machine-specific character. Finally, the success of its characteristics is assessed. Copyright © 2002 John Wiley & Sons, Ltd.

19.
Several experiments on the effects of pair versus solo programming have been reported in the literature. We present a meta-analysis of these studies. The analysis shows a small significant positive overall effect of pair programming on quality, a medium significant positive overall effect on duration, and a medium significant negative overall effect on effort. However, between-study variance is significant, and there are signs of publication bias among published studies on pair programming. A more detailed examination of the evidence suggests that pair programming is faster than solo programming when programming task complexity is low and yields code solutions of higher quality when task complexity is high. The higher quality for complex tasks comes at a price of considerably greater effort, while the reduced completion time for the simpler tasks comes at a price of noticeably lower quality. We conclude that greater attention should be given to moderating factors on the effects of pair programming.
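For readers unfamiliar with how such overall effects are pooled, the sketch below implements a standard DerSimonian-Laird random-effects combination; the per-study effect sizes and variances are made-up placeholders, not the meta-analysis's data.

```python
# A sketch of pooling study effects with a DerSimonian-Laird random-effects
# model. The effect sizes and variances below are made-up placeholders.
import math

effects = [0.30, 0.10, 0.45, -0.05]     # per-study standardized effects
variances = [0.02, 0.03, 0.05, 0.04]    # their sampling variances

# Fixed-effect weights and Cochran's Q heterogeneity statistic.
w = [1 / v for v in variances]
mean_fe = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
q = sum(wi * (e - mean_fe) ** 2 for wi, e in zip(w, effects))

# Between-study variance tau^2 (DerSimonian-Laird estimator).
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(effects) - 1)) / c)

# Random-effects pooled estimate and its standard error.
w_re = [1 / (v + tau2) for v in variances]
mean_re = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))
print(f"pooled effect = {mean_re:.3f} +/- {1.96 * se_re:.3f} (95% CI)")
```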

20.
I want increased confidence in my programs. I want my own and other people's programs to be more readable. I want a new discipline of programming that augments my thought processes. Therefore, I create and explore a new discipline of programming in my BabyUML laboratory. I select, simplify and twist UML and other languages to demonstrate how they help bridge the gap between me as a programmer and the objects running in my computer. The focus is on the run-time objects: their structure, their interaction, and their individual behaviors. Trygve Reenskaug is professor emeritus of informatics at the University of Oslo. He has 40 years of experience in software engineering research and the development of industrial-strength software products. He has extensive teaching and speaking experience, including keynotes, talks and tutorials. His firsts include the Autokon system for computer-aided design of ships, with an end-user programming language, structured programming, and a database-oriented architecture, from 1960; object-oriented applications and role (collaboration) modeling from 1973; Model-View-Controller, the world's first reusable object-oriented framework, from 1979; and the OOram role modeling method and tool from 1983. Trygve was a member of the UML Core Team and a contributor to UML 1.4. The goal of his current research is to create a new, high-level discipline of programming that lets us reclaim the mastery of software.
