Similar documents
20 similar documents found; search time: 343 ms
1.
Most of the problems with existing computer musicological systems have to do with the lack of capability for capturing the important notion of musical structure. In the new language SML (A Structured Musical Language), this aspect is given foremost attention as a technique for encoding a musical score in a clear and vivid form. Musical structures are patterned after the control structures of Pascal, together with instrument representations modeled on the idea of a Pascal record type. A complete example of the SML encoding of a Schumann song is included. Ronald E. Prather is the Caruth Distinguished Professor of Computer Science at Trinity University, San Antonio, Texas 78284. Stephen Elliott is a doctoral candidate in Computer Science at the University of Colorado, Boulder, Colorado 80309.

2.
In this paper the problems of suboptimal H∞ controller order reduction and strictly positive real (SPR) controller order reduction via coprime factorization are studied. Sufficient conditions ensuring that the reduced-order controllers remain suboptimal H∞ controllers and SPR, respectively, are given. The conditions presented may be considered as frequency-weighted model reduction problems. We generalize the result of the C?(θ) approach in Goddard (Ph.D. Thesis, Trinity College, Cambridge, 1995) for controller order reduction within an H∞ framework, and the relationship between our results and some other existing results is established. Copyright © 2003 John Wiley & Sons, Ltd.

3.
4.
This article introduces two fast algorithms for connected-component labeling of binary images, a particular case of coloring. The first, Selkow DT, is pixel-based and combines Selkow’s algorithm with the decision-tree optimization technique. The second, called Light Speed Labeling, is a segment-based, line-relative labeling algorithm designed especially for commodity RISC architectures. An extensive benchmark on both structured and unstructured images substantiates that these two algorithms, as designed, run faster than Wu’s algorithm, claimed to be the world’s fastest in 2007. Both also show greater data independence and hence runtime predictability.
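The common baseline that such algorithms accelerate is classic two-pass labeling with an equivalence (union-find) table. As a point of reference only (this is not the paper's Selkow DT or Light Speed Labeling code), a minimal two-pass labeler with 4-connectivity can be sketched as:

```python
def label(image):
    """Label 4-connected components of a binary image (list of 0/1 rows)."""
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    parent = [0]  # union-find table; index 0 is the background

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    next_label = 1
    # First pass: assign provisional labels and record equivalences.
    for y in range(h):
        for x in range(w):
            if not image[y][x]:
                continue
            up = labels[y - 1][x] if y else 0
            left = labels[y][x - 1] if x else 0
            if up and left:
                a, b = find(up), find(left)
                labels[y][x] = min(a, b)
                parent[max(a, b)] = min(a, b)  # merge the two classes
            elif up or left:
                labels[y][x] = up or left
            else:
                parent.append(next_label)  # brand-new component
                labels[y][x] = next_label
                next_label += 1
    # Second pass: replace provisional labels by class representatives.
    for y in range(h):
        for x in range(w):
            if labels[y][x]:
                labels[y][x] = find(labels[y][x])
    return labels
```

Both algorithms in the article are best understood as restructurings of these two passes to reduce memory accesses and branches per pixel.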

5.
Kiem-Phong Vo, Software, 2000, 30(2): 107-128
Over the past few years, my colleagues and I have written a number of software libraries for fundamental computing tasks, including I/O, memory allocation, container data types and sorting. These libraries have proved to be good software building blocks, and are used widely by programmers around the world. This success is due in part to a library architecture that employs two main interface mechanisms: disciplines to define resource requirements; and methods to parameterize resource management. Libraries built this way are called discipline and method libraries. Copyright © 2000 John Wiley & Sons, Ltd.

6.
This paper proposes an architecture for the back-end of a federated national datastore for use by academic research communities, developed by the e-INIS (Irish National e-InfraStructure) project, and describes in detail one member of the federation, the regional datastore at Trinity College Dublin. It builds upon existing infrastructure and services, including Grid-Ireland, the National Grid Initiative and EGEE, Europe’s leading Grid infrastructure. It assumes users are in distinct research communities and that their data access patterns can be described via two properties, denoted as mutability and frequency-of-access. The architecture is for a back-end: individual academic communities are best qualified to define their own front-end services and user interfaces. The proposal is designed to facilitate front-end development by placing minimal restrictions on how the front-end is implemented and on the internal community security policies. The proposal also seeks to ensure that the communities are insulated from the back-end and from each other in order to ensure quality of service and to decouple their front-end implementation from site-specific back-end implementations.

7.
The aim of the rapid world modeling project is to implement a system to visualize the topography of the entire world on consumer‐level hardware. This presents a significant problem in terms of both storage requirements and rendering speed. This paper presents the ‘Tiled Quad Tree’, a technique and format for the storage of digital terrain models, to work as part of an integrated system for the visualization of global terrain data. We show how this format efficiently stores and compresses elevation data, in a way that allows the data to be read very rapidly from hard disk or similar storage medium, to facilitate real‐time rendering. The results of compressing several distinct data sets are presented.
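Hierarchical tile schemes of this kind typically address each tile by (level, x, y) and derive an interleaved base-4 quadtree key, so that tiles near each other in space get nearby storage keys. A minimal sketch of such addressing follows; the names and conventions are illustrative, not the paper's ‘Tiled Quad Tree’ format itself:

```python
def quadkey(level, x, y):
    """Interleave the bits of x and y (most significant first) into a
    base-4 quadtree key of the given length."""
    key = []
    for i in range(level - 1, -1, -1):
        digit = ((x >> i) & 1) | (((y >> i) & 1) << 1)
        key.append(str(digit))
    return "".join(key)

def children(level, x, y):
    """The four child tiles of (level, x, y) at level + 1."""
    return [(level + 1, 2 * x + dx, 2 * y + dy)
            for dy in (0, 1) for dx in (0, 1)]
```

A parent's key is always a prefix of its children's keys, which is what makes prefix-ordered storage of tiles efficient to traverse during level-of-detail rendering.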

8.
This paper extends Reiter’s closed world assumption to cases where the assumption is applied in a precedence order between predicates. The extended assumptions are: the partial closed world assumption, the hierarchical closed world assumption and the stepwise closed world assumption. The paper also defines an extension of Horn formulas and shows several consistency results about the theory obtained from the extended Horn formulas by applying the proposed assumptions. In particular, the paper shows that both the hierarchical closed world assumption and the stepwise closed world assumption characterize the perfect model of stratified programs.
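Reiter's original closed world assumption, which the paper extends, can be sketched for ground Horn clauses: any atom not derivable from the knowledge base is assumed false. The toy knowledge base below is hypothetical and does not model the paper's partial/hierarchical/stepwise predicate orderings:

```python
def closure(facts, rules):
    """Forward-chain ground Horn rules given as (body_atoms, head) pairs."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived

def cwa_holds(atom, facts, rules):
    """Under the closed world assumption, 'not atom' holds iff
    atom is not derivable from the knowledge base."""
    return atom not in closure(facts, rules)

# Hypothetical example knowledge base:
kb_facts = {"bird(tweety)"}
kb_rules = [({"bird(tweety)"}, "flies(tweety)")]
```

Here `flies(tweety)` is derivable, so its negation is not assumed, while `penguin(tweety)` is underivable and hence assumed false; the paper's extensions refine exactly this step by closing predicates in a precedence order rather than all at once.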

9.
Multi-field inputs are techniques driven by multiple short-range RFID-enabled artifacts such as RFID tags and RFID tag readers. The technology is useful for designers because it enables the construction of advanced interaction through the physical world. To take advantage of such opportunities, it is important to understand the technology in terms of what interactions it might offer designers. I address this issue by unwrapping and exposing elements that can be used to conceptualize multi-field interactions. This is done by way of a design-driven inquiry in which design and research methods are used to investigate short-range RFID technology. My approach is informed by activity theory, which I use to analyze RFID technology from a design perspective. The study presents multi-field relations as a conceptual framework that can be used to describe and generate multi-field inputs. Four types of multi-field relations are discussed: one-way, two-way, sequence and multiple relations. These are described and analyzed in the context of a set of multi-field input examples. The multi-field relations expose elements that can be used to construct interactions. This is important for interaction designers, since new interactions present designers with opportunities for making entirely new types of interfaces that can lead to interesting and surprising experiences.

10.
From ‘virtual worlds’ to ‘artificial realities’, from ‘cyberspace’ to ‘multisensory synthetic environments’, there is no lack of colourful expressions to describe one of the most recent and most promising developments of computer graphics. Indeed, this is a radically new tool for representing the world, capable of permanently changing our way of looking at things and the way we work, as well as the familiar concept of a show. What is the definition of a ‘virtual environment’? It is an artificial space, visualized through techniques of synthetic imagery, and in which we can ‘physically’ move about. This impression of ‘physical movement’ is produced by the concurrence of two sensory stimuli, one based on fully stereoscopic vision and the other on the so-called ‘proprioceptive’ sensation of muscular correlation between real bodily movements and apparent changes in the artificial space in which we are ‘immersed’.

11.
Distributed Coordination and Workflow on the World Wide Web
This paper describes WebFlow, an environment that supports distributed coordination services on the World Wide Web. WebFlow leverages the HTTP Web transport protocol and consists of a number of tools for the development of applications that require the coordination of multiple, distributed servers. Typical applications of WebFlow include distributed document workspaces, inter/intra-enterprise workflow, and electronic commerce. In this paper we describe the general WebFlow architecture for distributed coordination, and then focus on the environment for distributed workflow.

12.
Service oriented computing (SOC) has brought a simplification in the way distributed applications can be built. Mainstream approaches, however, have failed to support dynamic, self-managed compositions that would empower even non-technical users to build their own orchestrations. Indeed, because of the changeable world in which they are embedded, service compositions must be able to adapt to changes that may happen at run-time. Unfortunately, mainstream SOC languages, like BPEL and BPMN, make it quite hard to develop such self-adapting orchestrations. We claim that this is mostly due to the imperative programming paradigm on which they are based. To overcome this limitation we propose a radically different, strongly declarative approach to model service orchestration, which is easier to use and results in more flexible and self-adapting orchestrations. An ad-hoc engine, leveraging well-known planning techniques, interprets such models to support dynamic service orchestration at run-time.

13.
By linking Knowledge Engineering to Kantian philosophy, this paper attempts to elaborate a potential theoretical foundation for understanding the nature of expertise and the processes of modeling. The established way of modeling is criticized for presuming that the relation between the real objective world and AI software models is a “mapping” (abstraction, copy, representation, etc.) relation. The paper argues that this aspect of the foundations of modeling in Knowledge Engineering could be improved by employing the standpoint presented in Kant's Critique of Pure Reason. Two hypotheses and ten principles of a constructivist modeling paradigm based on Kant's work are proposed. The term “constructivist” here refers to the hypothesis that a model cannot “correspond to” reality but merely be “viable in” (i.e., “fit into”) reality. © John Wiley & Sons, Inc.

14.
The purpose of a knowledge system S is to represent the world W faithfully. If S turns out to be inconsistent, containing contradictory data, its present state can be viewed as a result of information pollution with some wrong data. However, we may reasonably assume that most of the system content still reflects the world truthfully, and therefore it would be a great loss to allow a small contradiction to depreciate or even destroy a large amount of correct knowledge. So, despite the pollution, S must contain a meaningful subset, and so it is reasonable to assume (as adopted by many researchers) that the semantics of a logic system is determined by that of its maximally consistent subsets, mc-subsets. The information contained in S allows deriving certain conclusions regarding the truth of a formula F in W. In this sense we say that S contains a certain amount of semantic information and provides an evidence of F. A close relationship is revealed between the evidence, the quantity of semantic information of the system, and the set of models of its mc-subsets. Based on these notions, we introduce the semantics of weighted mc-subsets as a way of reasoning in inconsistent systems. To show that this semantics indeed enables reconciling contradictions and deriving plausible beliefs about any statement, including ambiguous ones, we apply it successfully to a series of justifying examples, such as chain proofs, rules with exceptions, and paradoxes.
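For small propositional clause sets, the mc-subsets this semantics builds on can be enumerated by brute force. The sketch below is illustrative only (it checks consistency by truth table and keeps the inclusion-maximal consistent subsets; it does not implement the paper's weighting):

```python
from itertools import combinations, product

def satisfiable(clauses, variables):
    """Brute-force satisfiability of clauses given as sets of literals
    like 'p' (positive) and '~p' (negative)."""
    for assignment in product([False, True], repeat=len(variables)):
        val = dict(zip(variables, assignment))
        # A literal '~p' is true iff val['p'] is False, and vice versa.
        if all(any(val[l.lstrip('~')] != l.startswith('~') for l in c)
               for c in clauses):
            return True
    return False

def mc_subsets(clauses, variables):
    """Inclusion-maximal consistent subsets of an inconsistent clause set."""
    consistent = [frozenset(s)
                  for r in range(len(clauses), -1, -1)
                  for s in combinations(clauses, r)
                  if satisfiable(s, variables)]
    return [s for s in consistent
            if not any(s < t for t in consistent)]
```

For example, the polluted system {p, ~p, q} yields the two mc-subsets {p, q} and {~p, q}; q holds in both, so a belief in q survives the contradiction about p, which is the intuition the abstract describes.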

15.
The effect of aperture shape on an image, known in photography as ‘bokeh’, is an important characteristic of depth of field in real‐world cameras. However, most real‐time depth of field techniques produce Gaussian bokeh rather than the circular or polygonal bokeh that is almost universal in real‐world cameras. ‘Scattering’ (i.e. point‐splatting) techniques provide a flexible way to model any aperture shape, but tend to have prohibitively slow performance, and require geometry‐shaders or significant engine changes to implement. This paper shows that simple post‐process ‘gathering’ depth of field shaders can be easily extended to simulate certain bokeh effects. Specifically we show that it is possible to efficiently model the bokeh effects of square, hexagonal and octagonal apertures using a novel separable filtering approach. Performance data from a video game engine test demonstrates that our shaders attain much better frame rates than a naive non‐separable approach.
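The separability idea is easiest to see for a square aperture: a box filter of radius r factors into a horizontal 1-D pass followed by a vertical 1-D pass, reducing per-pixel gather cost from O(r²) taps to O(r). A minimal CPU sketch of this factorization follows; it is not the paper's GPU shader, and it applies one global radius rather than per-pixel circle-of-confusion radii:

```python
def box_blur_1d(row, r):
    """Average each sample over a clamped window of radius r."""
    n = len(row)
    out = []
    for i in range(n):
        lo, hi = max(0, i - r), min(n - 1, i + r)
        out.append(sum(row[lo:hi + 1]) / (hi - lo + 1))
    return out

def square_bokeh(image, r):
    """Separable square-aperture blur: horizontal pass, then vertical."""
    h_pass = [box_blur_1d(row, r) for row in image]
    cols = list(zip(*h_pass))                  # transpose rows/columns
    v_pass = [box_blur_1d(list(c), r) for c in cols]
    return [list(row) for row in zip(*v_pass)]  # transpose back
```

A single bright pixel blurred this way spreads into a uniform square, i.e. a square bokeh shape; hexagonal and octagonal apertures require additional skewed passes, which the abstract's approach composes in the same spirit.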

16.

Mass loss from glaciers and ice caps represents the largest terrestrial component of current sea level rise. However, our understanding of how the processes governing mass loss will respond to climate warming remains incomplete. This study explores the relationship between surface elevation changes (dh/dt), glacier velocity changes (du/dt), and bedrock topography at the Trinity-Wykeham Glacier system (TWG), Canadian High Arctic, using a range of satellite and airborne datasets. We use measurements of dh/dt from ICESat (2003–2009) and CryoSat-2 (2010–2016) repeat observations to show that rates of surface lowering increased from 4 m yr⁻¹ to 6 m yr⁻¹ across the lowermost 10 km of the TWG. We show that surface flow rates at both Trinity Glacier and Wykeham Glacier doubled over 16 years, during which time the ice front retreated 4.45 km. The combination of thinning, acceleration and retreat of the TWG suggests that a dynamic thinning mechanism is responsible for the observed changes, and we suggest that both glaciers have transitioned from fully grounded to partially floating. Furthermore, by comparing the separate glacier troughs we suggest that the dynamic changes are modulated by both lateral friction from the valley sides and the complex geometry of the bed. In addition, the presence of bedrock ridges induces crevassing on the surface and provides a direct link for surface meltwater to reach the bed. We observe supraglacial lakes that drain at the end of summer, concurrent with a reduction in glacier velocity, suggesting that hydrological connections between the surface and the bed significantly impact ice flow. The bedrock topography thus has a primary influence on the nature of the changes in ice dynamics observed over the last decade.

17.
Already in 1994 the term Projective Virtual Reality was coined, and a first implementation was used to control a complex multirobot system in Germany over the Internet from California. Building on this foundation, the general aim of the development of virtual reality technology for automation applications at the Institute of Robotics Research (IRF) today is to provide the framework for Projective Virtual Reality for a broad range of applications. The general idea of Projective Virtual Reality is to allow users to “project” actions carried out in the virtual world into the real world by means of robots or other means of automation. The framework is based on a task‐oriented approach which builds on the “task deduction” capabilities of a newly developed virtual reality system and a task planning component. The advantage of this approach is that robots which work at great distances from the control station can be controlled as easily and intuitively as robots that work right next to the control station. Robot control technology now provides the user in the virtual world with a “prolonged arm” into the physical environment, thus paving the way for intuitive control of complex systems over the Internet, and in general for a new quality of user‐friendly man-machine interfaces for automation applications. Lately, this work has been enhanced by a new structure that allows one to distribute the virtual reality application over multiple computers on a network. With this new feature, it is now possible for multiple users to share the same virtual room, although they may physically be thousands of miles apart. They only need an Internet connection to share this new experience. More recently, the network distribution techniques have been further developed to not just allow users to cooperate over networked PCs, but also to be able to set up a panorama projection or a CAVE running on a networked cluster of PCs. This approach cuts down the costs of such a high‐end visualization environment drastically and allows for a new range of applications. © 2005 Wiley Periodicals, Inc.

18.

Man-in-the-Middle (MitM), one of the best-known attacks in the world of computer security, is among the greatest concerns for professionals in the field. The main goal of MitM is to compromise the confidentiality, integrity and availability of data flowing between source and destination. However, most of its many variants involve difficulties that make it not always practicable. The present paper models and describes a new method of attack, named Browser-in-the-Middle (BitM), which, despite the similarities with MitM in the way it controls the data flow between a client and the service it accesses, bypasses some of MitM’s typical shortcomings. It can be started by phishing techniques and in some cases coupled with the well-known Man-in-the-Browser (MitB) attack. It will be seen how BitM expands the range of possible attacker actions while making them easier to implement. Among its features, the absence of any need to install malware on the victim’s machine and the total control it gives the attacker are to be emphasized.


19.
Denis L. Baggi, AI & Society, 2000, 14(3-4): 348-378
In its forty years of existence, Artificial Intelligence has suffered both from the exaggerated claims of those who saw it as the definitive solution of an ancestral dream, that of constructing an intelligent machine, and from its detractors, who described it as the latest fad worthy of quacks. Yet AI is still alive, well and blossoming, and has left a legacy of tools and applications almost unequalled by any other field, probably because, as the heir of Renaissance thought, it represents a possible bridge between the humanities and the natural sciences, philosophy and neurophysiology, psychology and integrated circuits, including systems that today are taken for granted, such as the computer interface with mouse pointer and windows. This writing describes a few results of AI that have modified the scientific world, as well as the way a layman sees computers: the technology of programming languages, such as LISP (witness the unique excellence of the academic departments that have contributed to them); the computing workstations, of which our modern PC is but a vulgarised descendant; the applications to the educational field, e.g., the realisation of some ideas of genetic epistemology; interdisciplinary philosophy, such as Hofstadter's associations between the arts and mathematics; and the use of AI techniques in music and musicology. All this has led to a generalisation of AI towards Negrotti's overall Theory of the Artificial, which encompasses further specialisations such as artificial reality, artificial life, and applications of neural networks, among others.

20.
This article is related to the research effort of constructing an intelligent agent, i.e., a computer system that is able to sense its environment (world), reason utilizing its internal knowledge, and execute actions upon the world (act). The specific part of this effort presented in this article is reinforcement learning, i.e., the process of acquiring new knowledge based upon evaluative feedback, called reinforcement, received by the agent through interactions with the world. This article has two objectives: (1) to give a compact overview of reinforcement learning, and (2) to show that the evolution of the reinforcement learning paradigm has been driven by the need for more efficient learning through the addition of more structure to the learning agent. Therefore, the main ideas of reinforcement learning are introduced, and structural solutions to reinforcement learning are reviewed. Several architectural enhancements of the RL paradigm are discussed. These include incorporation of state information in the learning process, architectural solutions to learning with delayed reinforcement, dealing with structurally changing worlds through utilization of multiple models of the world, and focusing the attention of the learning agent through active perception. The paper closes with an overview of directions for applications and for future research in this area. © 1993 John Wiley & Sons, Inc.
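The canonical modern instance of learning from delayed reinforcement is tabular Q-learning, where reward received only at the goal propagates backwards through the value table. A minimal sketch on a hypothetical chain world (the environment and parameters here are illustrative, not from the article):

```python
import random

def q_learning(n_states=5, episodes=2000, alpha=0.5, gamma=0.9, eps=0.1):
    """Chain world: states 0..n-1, actions 0 (left) / 1 (right).
    The agent starts at state 0 and receives reward 1 only on
    reaching the rightmost (terminal) state."""
    q = [[0.0, 0.0] for _ in range(n_states)]
    rng = random.Random(0)
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection (ties broken toward 'right')
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Temporal-difference update: bootstrap on the next state's value,
            # which is how the delayed reward at the goal propagates back.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

After training, the learned values prefer moving right in every non-terminal state, even though only the final transition is ever rewarded, illustrating the delayed-reinforcement problem the survey discusses.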


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司), 京ICP备09084417号