Similar Documents
1.
In this, the first part of a two-part work, a general model of spatial organization is introduced. Following a brief synopsis of some of Spinoza's and Leibniz's views regarding natural structure, an extension of the Spinozian model is presented in which the attribute of spatial extension is portrayed as a relational system that implicitly underlies the differentiation of sensible space into “modifications” (“natural systems”) and their subdifferentiation into “modes.” On the basis of this model, all instances of modal differentiation are understood to take place in a manner explained by this relational structure, the existence (but not the specific characteristics) of which is initially assumed. The nature of the structure is then deduced according to a “most-probable-state” kind of logic; next, it is demonstrated via simulation that the resulting aspatial model of internal relations has a corresponding spatial interpretation (and therefore, in theory, that sensible space structures can be supported by the particular rational ordering posed). The matter of how to apply the model to the study of real-world systems is taken up last; discussion focuses on related aspects of the treatment of equilibrium and nonequilibrium systems and the recognition and measurement of modal structures.

2.
3.
4.
Pattern recognition based on modelling each separate class by a separate principal components (PC) model is discussed. These PC models are shown to be able to approximate any continuous variation within a single class. Hence, methods based on PC models will, provided that the data are sufficient, recognize any pattern that exists in a given set of objects. In addition, fitting the objects in each class by a separate PC model provides, in a simple way, information about such matters as the relevance of single variables, “outliers” among the objects and “distances” between different classes. An application to Fisher's classic Iris data is used as an illustration.
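The per-class PC modelling described above (the approach usually known as SIMCA) can be sketched as follows. This is a minimal illustration on synthetic two-class data rather than Fisher's Iris set, and the helper names are ours: each class gets its own PC model, and a new object is assigned to the class whose model reconstructs it with the smallest residual.

```python
import numpy as np

def fit_pc_model(X, n_components):
    # Centre the class data and keep its leading principal directions.
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def residual_distance(x, model):
    # Distance from x to a class's PC hyperplane (reconstruction error):
    # a small residual means the class model explains the object well.
    mean, V = model
    centered = x - mean
    return np.linalg.norm(centered - centered @ V.T @ V)

rng = np.random.default_rng(0)
class_a = rng.normal(loc=0.0, scale=1.0, size=(50, 4))   # synthetic class 1
class_b = rng.normal(loc=5.0, scale=1.0, size=(50, 4))   # synthetic class 2
models = {"a": fit_pc_model(class_a, 2), "b": fit_pc_model(class_b, 2)}

# Classify a new object by the class model with the smallest residual.
x_new = np.array([4.8, 5.1, 5.0, 4.9])
label = min(models, key=lambda k: residual_distance(x_new, models[k]))
```

The residual distances also provide the “distance between classes” and outlier diagnostics the abstract mentions: an object far from every class model is a candidate outlier.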

5.
In the built environment, places such as retail outlets and public sites are embedded in the spatial context formed by neighboring places. We define the sets of these symbiotic places in the proximity of a focal place as the place's “place niche”, which conceptually represents the features of the local environment. While current literature has focused on pairwise spatial colocation patterns, we represent the niche as an integrated feature for each type of place, and quantify the niches' variation across cities. Here, with point of interest (POI) data as an approximation of places in cities, we propose representation learning models to explore place niche patterns. The models generate two main outputs: first, distributed representations for place niche by POI category (e.g. Restaurant, Museum, Park) in a latent vector space, where close vectors represent similar niches; and second, conditional probabilities of POI appearance of each place type in the proximity of a focal POI. With a case study using Yelp data in four U.S. cities, we reveal spatial context patterns and find that some POI categories have more unique surroundings than others. We also demonstrate that niche patterns are strong indicators of the function of POI categories in Phoenix and Las Vegas, but not in Pittsburgh and Cleveland. Moreover, we find that niche patterns of more commercialized categories tend to have less regional variation than others, and the city-level niche-pattern changes for POI categories are generally similar only between certain city pairs. By exploring patterns for place niche, we not only produce geographical knowledge for business location choice and urban policymaking, but also demonstrate the potential and limitations of using spatial context patterns for GIScience tasks such as information retrieval and place recommendation.
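The second model output described above, conditional probabilities of POI appearance near a focal POI, can be approximated directly from point data by counting co-occurrences within a radius. This is a hedged sketch on invented toy coordinates, not the authors' representation-learning model:

```python
from collections import defaultdict
from math import hypot

# Toy POIs as (category, x, y); real inputs would be Yelp points.
pois = [
    ("Restaurant", 0.0, 0.0), ("Bar", 0.1, 0.1), ("Museum", 5.0, 5.0),
    ("Restaurant", 0.2, 0.0), ("Park", 5.1, 5.2), ("Bar", 0.0, 0.3),
]

def niche_probabilities(pois, radius=1.0):
    # P(neighbor category | focal category): for each focal POI, count the
    # categories within `radius`, then normalize per focal category.
    counts = defaultdict(lambda: defaultdict(int))
    for i, (cat_i, xi, yi) in enumerate(pois):
        for j, (cat_j, xj, yj) in enumerate(pois):
            if i != j and hypot(xi - xj, yi - yj) <= radius:
                counts[cat_i][cat_j] += 1
    return {
        focal: {nb: n / sum(nbrs.values()) for nb, n in nbrs.items()}
        for focal, nbrs in counts.items()
    }

probs = niche_probabilities(pois)
```

The resulting dictionaries are crude empirical niches; the paper's latent vector representations generalize this by embedding the surroundings of each category in a shared space.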

6.
7.
This paper reviews and extends a technique to detect weak coupling (one-way coupling or complete decoupling) among elements of a dynamic system model, and to partition and reduce models in which weak coupling is found. The ability to partition a model increases the potential for physical-domain model reduction and allows parallel simulation of smaller individual submodels, which can reduce computation time. Negligible constraint-equation terms are identified and eliminated in a bond graph by converting inactive power bonds to modulated sources. If separate bond graphs result, between which all modulating signals move from a “driving” subgraph to a “driven” one, then one-way coupling exists in the model and it can be separated into driving and driven partitions; information flow between the subgraphs is one-way. In this paper the algorithm is extended to models in which two-way information flow from modulating signals precludes complete partitioning. It is shown for several classes of modulating signals that, under certain conditions, the signal is “weak” and can therefore be eliminated. Removal of weak signals allows partitioning of the longitudinal and pitch dynamics of a medium-duty truck model. The intensity of dynamic coupling and the potential for model reduction are shown to depend on the magnitude of system parameters and the severity of inputs such as road roughness.
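The partitioning test has a simple state-space analogue: if the block of the system matrix through which the “driven” states feed back into the “driving” states is negligible, the driving block can be simulated alone and its outputs fed forward. A toy numeric sketch (the matrix, state split, and threshold are invented for illustration, not taken from the truck model):

```python
import numpy as np

# Hypothetical 4-state linearized model: states 0-1 drive states 2-3, and
# the only feedback term, A[1, 2], is negligibly small ("weak").
A = np.array([
    [-2.0,  1.0,  1e-6,  0.0],
    [ 0.5, -3.0,  0.0,   0.0],
    [ 4.0,  0.0, -1.0,   2.0],
    [ 0.0,  3.0,  0.5,  -2.0],
])

def is_one_way_coupled(A, driving, driven, tol=1e-4):
    # One-way coupling holds when the driven states do not feed back into
    # the driving states above the tolerance; the driving block can then
    # be integrated on its own, like the "driving" subgraph in the paper.
    feedback = A[np.ix_(driving, driven)]
    return bool(np.all(np.abs(feedback) < tol))

one_way = is_one_way_coupled(A, driving=[0, 1], driven=[2, 3])
```

The bond-graph method works at the level of power bonds and modulated sources rather than an assembled matrix, but the weak-term-elimination logic is the same.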

8.
The non-symmetric similarity relation-based rough set model (NS-RSM) is a mathematical tool for analyzing imprecise and uncertain information in incomplete information systems with “?” values. NS-RSM relies on a non-symmetric similarity relation to group equivalent objects and generate knowledge granules that are then used to approximate the target set. However, NS-RSM yields an unpromising approximation space on inconsistent data sets with many boundary objects, because objects in the same similarity class are not necessarily similar to each other and may belong to different target classes. To enhance NS-RSM, we introduce the maximal limited similarity-based rough set model (MLS-RSM), which describes the maximal collections of indistinguishable objects within similarity classes that are in limited tolerance to each other. This allows the approximation space to be computed accurately. Furthermore, approximation accuracy comparisons have been conducted between NS-RSM and MLS-RSM. The results demonstrate that MLS-RSM outperforms NS-RSM and approximates the target set more efficiently.
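The limited tolerance relation on “?”-valued data can be sketched as follows. This shows one common formalization (objects tolerate each other when they agree on all attributes known for both and share at least one known attribute); the exact relation and the maximal-collection construction of MLS-RSM follow the paper, and the object names here are invented:

```python
def known(obj):
    # Indices of attributes whose value is not the missing marker "?".
    return {i for i, v in enumerate(obj) if v != "?"}

def limited_tolerance(x, y):
    # x and y must agree on every attribute known for both, and share at
    # least one known attribute (fully unknown objects tolerate only
    # other fully unknown objects).
    shared = known(x) & known(y)
    if not shared:
        return not known(x) and not known(y)
    return all(x[i] == y[i] for i in shared)

objs = {"o1": ("1", "2", "?"), "o2": ("1", "?", "3"), "o3": ("2", "2", "3")}
classes = {k: {m for m in objs if limited_tolerance(objs[k], objs[m])}
           for k in objs}
```

Here o1 and o2 tolerate each other (they agree on the one attribute known for both), while o3 differs from both on a shared known attribute and so forms its own class.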

9.
This paper presents a methodology to extract information from a set of r labeled counts with constant sum N, called an “Organ Pipe Diagram” (OPD). A histogram is a particular OPD with an order relation on its count labels. As an application of this methodology, if a global image of radiometries is scanned by a window, a “module” value and a “state” can be defined for the local histogram. The corresponding maps of modules and states yield useful local spatial information closely related to texture. The methodology is interesting in itself, as it shows how a geometry can be built up from a set of labeled counts whose space of definition is shown to be a simplex.

10.
A formalization of some simulation modeling concepts is described, which is used as a conceptual framework among simulationists and in the educational process. The notion of a dynamic system is hierarchically constructed from attributes and classes: the domains of the classes are formed by elements with “similar” attributes, and a set of classes is a system if it satisfies certain axioms. A simulation model is composed of two systems and four mappings, also satisfying axioms. This conception reflects the capability of simulation languages to describe not only the dynamics of systems but also their structure. The theory is a framework for any sort of simulation model (discrete, continuous, combined, hybrid, analogue, galvanic, aerodynamic, etc.) and allows a classification of the set of all simulation languages according to their semantics.

11.
An important aspect of developing “intelligent” CAD systems concerns methodologies that are reliable both in handling the relevant information and in modelling the design processes themselves. The intrinsically dynamic nature of the latter makes it particularly difficult to choose the most suitable methodology. This paper describes a prototype design system capable of generating and dynamically modifying models of design processes and product data, which can be considered Engineering Knowledge Data Bases. The system, implemented in LPA-Prolog++, is based on a hybrid approach that combines the object-oriented and frame paradigms. Frames are mainly devoted to implementing the computational model required by the components involved in the design process, while classes are used to ensure the inheritance of properties when objects are instantiated or specialized. Information in the Knowledge Data Base is structured using two types of frames: dataFrames, which support the static computational model of the low-level components; and linkFrames, which allow the low-level components to be collected into more complex ones without constraining the activation sequence of the pointed-to frames. In addition, history slots were added to the system in order to associate with the data and process models, at an atomic level, all information related to the rationale of the choices made. The functionality of the system is presented by means of relevant test cases.

12.
Numerical simulations of air quality models produce different outputs for different choices of grid size. An important task is therefore to understand the characteristics of model outcomes as a function of grid size, in order to assess the model's fitness for a specific design objective. This type of assessment differs somewhat from traditional operational-performance and diagnostic model evaluation, where the objective is to assess errors in numerical air quality models and to use concentration measurements from monitors as the basis for guiding model improvement and for assessing the models' ability to predict and retrospectively map air quality. However, observations used as “truth” to assess model performance have properties of their own, determined by the data collection protocols, siting, and spatial density of deployment. In the data assimilation community, the term “model error” denotes the difference between model output given perfect inputs and the “truth” (Kalnay, 2003). In this paper, we are concerned with one aspect of this “model error”: the discrepancy due to discretization of space by the choice of grid size in the model. To understand this discrepancy, outputs from the Community Multiscale Air Quality model (CMAQ) at two resolutions are studied. The lower-resolution run is carried out so that its initial and boundary conditions are as similar as possible to those of the higher-resolution run, minimizing this source of discrepancy and allowing us to isolate the discrepancy due to discretization. Differences are analyzed from a statistical perspective by comparing the marginal distributions of the two outputs and considering the spatial variation of the differences. Results indicate sharp increases in the spatial variation of the differences over the first few hours of running the model, followed by small increases thereafter. The spatial variation of the differences depends on the spatial structure of the original processes, which we show varies with the time of day. We also show that the spatial variation on sub-regions depends on whether the sub-region is in a rural or an urban area.
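The mechanics of the comparison can be sketched numerically: regrid the coarse output onto the fine grid, form the hourly discrepancy field, and track its spatial variance. The data here are random stand-ins, not CMAQ output, and the 2x2 block-mean coarsening is our simplification:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented stand-ins for one pollutant field at two resolutions: a fine
# hourly field, and a coarse run approximated here by 2x2 block means.
fine = rng.normal(size=(24, 32, 32))
coarse = fine.reshape(24, 16, 2, 16, 2).mean(axis=(2, 4))

# Put the coarse field back on the fine grid and form the discrepancy field.
coarse_on_fine = np.kron(coarse, np.ones((2, 2)))
diff = fine - coarse_on_fine

# Spatial variance of the differences, one value per hour.
spatial_var = diff.var(axis=(1, 2))
```

Restricting `diff` to sub-region masks before taking the variance gives the rural-versus-urban comparison described in the abstract.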

13.
《Computers & Geosciences》2006,32(8):1052-1068
The most crucial and difficult task in landslide hazard analysis is estimating the conditional probability of occurrence of future landslides in a study area within a specific time period, given specific geomorphic and topographic features. This task can be addressed with a mathematical model that estimates the required conditional probability in two stages: “relative hazard mapping” and “empirical probability estimation.” The first stage divides the study area into a number of “prediction” classes according to their relative likelihood of occurrence of future landslides, based on the geomorphic and topographic data. Each prediction class represents a relative level of hazard with respect to other prediction classes. The number of classes depends on the quantity and quality of input data. Several quantitative models have been developed and tested for use in this stage; the objective is to delineate typical settings in which future landslides are likely to occur. In this stage, problems related to different degrees of resolution in the input data layers are resolved. The second stage is to empirically estimate the conditional probability of landslide occurrence in each prediction class by a cross-validation technique. The basic strategy is to divide past occurrences of landslides into two groups, a “modeling group” and a “validation group”. The first mapping stage is repeated, but the prediction is limited to only those landslide occurrences in the modeling group that are used to construct a new set of prediction classes. The new set of prediction classes is compared to the distribution of landslide occurrences in the validation group. Statistics from the comparison provide a quantitative measure of the conditional probability of occurrence of future landslides.
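The second-stage estimate reduces to a simple count once the prediction classes exist: for each class, divide the number of validation-group landslides falling in it by the number of cells it contains. A minimal sketch with invented cell data (the class labels and landslide cells are toy values, not from the paper):

```python
from collections import Counter

# Toy inputs: each map cell's prediction class from the modeling-group run,
# and the cells where validation-group landslides occurred.
cell_class = {"c1": 3, "c2": 3, "c3": 2, "c4": 2, "c5": 2, "c6": 1}
validation_slides = {"c1", "c3"}

def empirical_probability(cell_class, validation_slides):
    # P(landslide | class) = validation landslides in class / cells in class.
    cells_per_class = Counter(cell_class.values())
    slides_per_class = Counter(cell_class[c] for c in validation_slides)
    return {k: slides_per_class[k] / n for k, n in cells_per_class.items()}

probs = empirical_probability(cell_class, validation_slides)
```

A higher-numbered (higher-hazard) class should receive a higher empirical probability; if it does not, the first-stage relative ranking is suspect.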

14.
Mixed pixels are an important problem in the identification and proportion estimation of agriculturally important crops in Landsat satellite scenes. The spectral response of a mixed pixel is influenced by more than one ground cover type, which decreases the separability of component crop classes and hence degrades the performance of classification procedures. An algorithm called CASCADE is described, based on spatial information and consideration of a linear mixing model. The CASCADE procedure provides a means of allocating a pixel to the one of the surrounding, more “homogeneous” regions that it most closely resembles. Processing all pixels in an image with CASCADE before classification significantly increases the separability between crop classes as well as the precision of the crop proportion estimates.
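The allocation step, assigning a mixed pixel to the neighbouring homogeneous region it most resembles spectrally, can be sketched as a nearest-mean rule. The region names, mean signatures, and pixel values are invented; the real CASCADE procedure also exploits the linear mixing model and the spatial arrangement of neighbours:

```python
import numpy as np

# Mean spectral signatures (four bands) of surrounding homogeneous regions.
region_means = {
    "wheat": np.array([42.0, 55.0, 90.0, 120.0]),
    "corn":  np.array([38.0, 60.0, 70.0, 140.0]),
}

def allocate(pixel, region_means):
    # Assign the mixed pixel to the region whose mean signature it most
    # closely resembles in Euclidean spectral distance.
    return min(region_means,
               key=lambda r: np.linalg.norm(pixel - region_means[r]))

mixed_pixel = np.array([41.0, 56.0, 86.0, 124.0])
label = allocate(mixed_pixel, region_means)
```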

15.
Business process modeling is an essential task in business process management. Process models that are comprehensively understood by business stakeholders allow organizations to profit from this field. In this work, we report what is being investigated under the topic “visualization of business process models”, since visualization is known to improve the perception and comprehension of structures and patterns in datasets. We performed a systematic literature review through which we selected and analyzed 46 papers from two points of view. First, we observed the similarities between the papers regarding their main scope. From this observation we classified the papers into six categories: “Augmentation of existing elements”, “Creation of new elements”, “Exploration of the 3D space”, “Information visualization”, “Visual feedback concerning problems detected in process models” and “Perspectives”. The less explored categories, which could represent challenges for further research, are “Visual feedback” and “Information visualization”. Second, we analyzed the papers using a well-known visualization analysis framework, which gave us a high-level view of the proposals in the literature and showed that few authors explore user-interaction features in their work. We also found that exactly half of the papers base their proposals on BPMN and present results from evaluation or validation. Since BPMN is an ISO standard and many tools are based on it, more research should aim to improve the knowledge around this topic. We expect our results to inspire further work that advances the field of business process model visualization, so that the advantages of information visualization support the tasks of business process modeling and management.

16.
In this paper the performance of the linear, exponential and combined models for describing the temperature dependence of the excess Gibbs energy of solutions in the framework of the Redlich–Kister model is discussed. The models are not compared to existing Calphad-optimized databases; rather, they are tested against the 209 binary solid and liquid metallic alloys for which reliable experimental data on the heat of mixing and Gibbs energy of mixing exist in the handbook of Predel. It was found that the linear model often leads to high-temperature artifacts (artificial inverted miscibility gaps), with the excess Gibbs energy approaching infinity at high temperatures, which seems unreasonable. It was also found that although both the exponential and combined models can in principle lead to a low-temperature artifact (liquid re-stabilization), in real systems this probably does not take place, at least for the “normal” systems (a system is “normal” if the heat of mixing, excess entropy of mixing and excess Gibbs energy of mixing have the same sign at the temperature of measurement; 86% of all systems are found to be “normal”). The problem with the exponential model is that it is unable to describe the “exceptional” systems (14% of all systems). It is shown that the combined model is able to describe these “exceptional” systems as well. An algorithm is worked out to ensure that the combined model does not run into any high- or low-temperature artifact, even when it is used to describe the “exceptional” systems. It is concluded that the T-dependence of the interaction energies of all solution phases described by Redlich–Kister polynomials should be described by the combined model. In this way an improved databank of excess Gibbs energies of solution phases can be gradually built without introducing any artifacts.
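The quantity under discussion is the Redlich–Kister expansion of the excess Gibbs energy of a binary solution. The sketch below evaluates it with the *linear* interaction-parameter form a_k + b_k·T, the very T-dependence the review shows can misbehave at high temperature; the combined model would substitute a different L_k(T). Parameter values here are invented for illustration:

```python
def excess_gibbs(x1, T, params):
    # Redlich–Kister expansion for a binary solution:
    #   G_E = x1 * x2 * sum_k L_k(T) * (x1 - x2)**k
    # with each interaction parameter in the simple linear form
    # L_k(T) = a_k + b_k * T (toy values; units J/mol, K).
    x2 = 1.0 - x1
    return x1 * x2 * sum((a + b * T) * (x1 - x2) ** k
                         for k, (a, b) in enumerate(params))

# One sub-regular term: L_0 = -10000 + 2*T.
g = excess_gibbs(0.5, 1000.0, [(-10000.0, 2.0)])
```

With b_k > 0 the linear L_k grows without bound as T increases, which is exactly the route to the artificial inverted miscibility gaps the paper describes.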

17.
Previous work in natural language generation has exploited discourse focus to guide the selection of propositional content and the generation of referring expressions (e.g., pronominalization, definite noun phrase generation). However, there are many other sources of contextual information which can be used to constrain linguistic realization. The realization of certain classes of temporal and spatial referents, for example, can be guided by more detailed models of time and space. Therefore, this article first identifies a number of contextual coordinates or points of reference (known as indexicals in linguistics). Next, we formalize three of these contextual coordinates – topic, time, and space – in a computational model. In doing so, the article describes the use of a Reichenbachian temporal model which is exploited to guide the realization of verb tense and aspect as well as the realization of temporal referents (e.g., temporal connectives and adverbials such as "meanwhile" and "ten minutes later"). The article then describes a spatial model which is used to guide the linguistic realization of spatial referents (e.g., "here," "there") and spatial adverbials (e.g., "two miles away"). We discuss the relation between these temporal and spatial models and illustrate their use in generating text from two different application systems. We find that the combined use of topical, temporal, and spatial contextual coordinates enhances the fluency, connectivity, and conciseness of the resulting text.
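A Reichenbachian temporal model orders three time points: the event (E), the reference point (R), and the speech time (S), and the ordering determines tense. A minimal sketch of that mapping follows; the simplified label set and function name are ours, not the article's, and tenses such as the posterior past ("would") are omitted:

```python
def reichenbach_tense(E, R, S):
    # Reichenbach's account: the relation of R to S picks past/present/
    # future; the relation of E to R picks simple vs. perfect.
    if R < S:
        return "past perfect" if E < R else "simple past"
    if R == S:
        return "present perfect" if E < R else "simple present"
    return "future perfect" if E < R else "simple future"
```

A generator can derive connectives from the same points: two events sharing a reference interval suggest "meanwhile", while a known gap between them licenses "ten minutes later".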

18.
Visual question answering (VQA) has attracted many researchers in recent years and could potentially be applied to remote consultation for COVID-19. Attention mechanisms provide an effective way of using visual and question information selectively in VQA. The attention methods of existing VQA models generally focus on the spatial dimension; that is, attention is modeled as spatial probabilities that re-weight the image region or word token features. However, feature-wise attention cannot be ignored, as image and question representations are organized in both spatial and feature-wise modes. Taking the question “What is the color of the woman's hair” as an example, identifying the hair-color attribute feature is as important as focusing on the hair region. In this paper, we propose a novel neural network module named “multimodal feature-wise attention module” (MulFA) to model feature-wise attention. Extensive experiments show that MulFA is capable of filtering representations for feature refinement and leads to improved performance. By introducing MulFA modules, we construct an effective union feature-wise and spatial co-attention network (UFSCAN) model for VQA. Our evaluation on two large-scale VQA datasets, VQA 1.0 and VQA 2.0, shows that UFSCAN achieves performance competitive with state-of-the-art models.
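The contrast between spatial and feature-wise attention can be illustrated with a generic channel-gating sketch: instead of weighting image positions, pool over positions and produce a per-channel gate. This is not the MulFA architecture (whose details are in the paper), only the feature-wise idea, with invented shapes and a random weight matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_wise_attention(features, W):
    # Spatial attention re-weights positions; feature-wise attention
    # instead gates channels: pool over positions, map through a learned
    # weight matrix, squash to (0, 1), scale every channel everywhere.
    pooled = features.mean(axis=0)                 # (channels,)
    gate = 1.0 / (1.0 + np.exp(-(W @ pooled)))     # sigmoid, (channels,)
    return features * gate                         # broadcast over positions

features = rng.normal(size=(49, 8))   # e.g. 7x7 image regions, 8 channels
W = rng.normal(size=(8, 8)) * 0.1     # stand-in for learned parameters
attended = feature_wise_attention(features, W)
```

In the hair-color example, such a gate can amplify color-carrying channels across all regions, complementing a spatial map that highlights the hair region.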

19.
Directional relations and frames of reference
As an intermediate category between metric and topology, directional relations are as varied as “right of”, “before”, “between”, “in front of”, “back”, “north of”, “east of”, and so on. Directional relations are ambiguous if taken alone, without the contextual information described by frames of reference. In this paper, we identify a unifying framework for directional relations and frames of reference, which shows how a directional relation with its associated frame of reference can be mapped to a projective relation of the 5-intersection model. We discuss how this knowledge can be integrated into spatial query languages.

20.
A symmetric Turing machine is one such that the “yields” relation between configurations is symmetric. The space complexity classes for such machines are found to be intermediate between the corresponding deterministic and nondeterministic space complexity classes. Certain natural problems are shown to be complete for symmetric space complexity classes, and the relationship of symmetry to determinism and nondeterminism is investigated.

