Similar Articles
20 similar articles found (search time: 31 ms)
1.
I discuss the attitude of Jewish law sources from the 2nd–5th centuries to the imprecision of measurement. I review a problem that the Talmud refers to, somewhat obscurely, as "impossible reduction". This problem arises when a legal rule specifies an object by referring to a maximized (or minimized) measurement function, e.g., when a rule applies to the largest part of a divided whole, or to the first incidence that occurs. A question that often arises is whether there might be hypothetical situations involving more than one maximal (or minimal) value of the relevant measurement and, given such situations, what the pertinent legal rule is. Presumptions of simultaneous occurrences or equally measured values are also a source of embarrassment to modern legal systems, in situations exemplified in the paper, where the law determines a preference based on measured values. I contend that the Talmudic sources discussing the problem of impossible reduction were guided by primitive insights compatible with a fuzzy-logic presentation of the inevitable uncertainty involved in measurement. I maintain that fuzzy models of data are compatible with a positivistic epistemology, which refuses to assume any precision in the extra-conscious world that may not be captured by observation and measurement. I therefore propose this view as the preferred interpretation of the Talmudic notion of impossible reduction. Attributing a fuzzy world view to the Talmudic authorities is meant not only to increase our understanding of the Talmud but, in so doing, also to demonstrate that fuzzy notions are entrenched in our practical reasoning. If Talmudic sages did indeed conceive the results of measurements in terms of fuzzy numbers, then equality between the results of measurements had to be more complicated than crisp equations. The problem of impossible reduction could lie in fuzzy sets with an empty core or whose membership functions were only partly congruent. "Reduction is impossible" may thus be reconstructed as "there is no core to the intersection of two measures". I describe Dirichlet maps for fuzzy measurements of distance as a rough partition of the universe, where for any region A there may be a non-empty boundary region (the upper approximation of A minus its lower approximation) where the problem of impossible reduction applies. This model may easily be combined with a probabilistic extension. The possibility of adopting practical decision standards based on α-cuts (and therefore applying interval analysis to fuzzy equations) is discussed in this context. I propose to characterize the uncertainty that was presumably capped by the old sages as U-uncertainty, defined, for a non-empty fuzzy set A on the set of real numbers, whose α-cuts are intervals of real numbers, as U(A) = (1/h(A)) ∫₀^{h(A)} log₂[1 + μ(αA)] dα, where h(A) is the largest membership value obtained by any element of A and μ(αA) is the measure of the α-cut of A defined by the Lebesgue integral of its characteristic function.
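A minimal numerical sketch of this U-uncertainty (ours, not the paper's), assuming a triangular fuzzy number as the measurement model; for a triangular number on [a, c] with peak b, the α-cut is an interval of length (c − a)(1 − α) and h(A) = 1:

```python
import numpy as np

def u_uncertainty(alpha_cut_measure, height=1.0, n=10001):
    """U(A) = (1/h(A)) * integral over [0, h(A)] of log2[1 + mu(alpha-cut)] d(alpha)."""
    alphas = np.linspace(0.0, height, n)
    f = np.log2(1.0 + alpha_cut_measure(alphas))
    return np.sum((f[1:] + f[:-1]) * np.diff(alphas)) / (2.0 * height)

# Triangular fuzzy number on [a, c] with peak at b: the alpha-cut is the
# interval [a + alpha*(b - a), c - alpha*(c - b)], of length (c - a)*(1 - alpha).
a, b, c = 0.0, 1.0, 2.0
print(u_uncertainty(lambda alpha: (c - a) * (1.0 - alpha)))  # ~0.93 bits
```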

2.
In traditional approaches to object-oriented programming, objects are active, while relations between them are passive. The activeness of an object reveals itself when the object invokes a method (function) as a reaction to a message from another object (or itself). While this model is suitable for some tasks, like arranging interactions between windows, widgets and the end-user in a typical GUI environment, it is not appropriate for others; business application development is one example. In this domain, relations between conceptual objects are at least as important as the objects themselves, and a more appropriate model for this field would be one where relations are active while objects are passive. A version of such a model is presented in the paper. The model considers a system as consisting of a set of objects, a code of laws, and a set of connectors, each connector hanging on a group of objects that must obey a certain law. The formal logical semantics of this model is presented as a way of analyzing the set of all possible trajectories of all possible systems; the analysis makes it possible to differentiate valid trajectories from invalid ones. The procedural semantics is presented as a state machine that, given an initial state, generates all possible trajectories that can be derived from this state. This generator can be considered a model of a connector scheduler that allows various degrees of parallelism, from sequential execution to the maximum possible parallelism. In conclusion, a programming language that could be appropriate for the proposed computing environment is discussed, and the problems of applying the model to the business domain are outlined.
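A toy illustration of the active-relations idea (ours; the class names and the "law" are invented, and the paper's formal and procedural semantics are not reproduced). A connector watches a group of passive objects and fires only steps that keep its law satisfied:

```python
class Account:                            # passive object: state, no behavior
    def __init__(self, balance):
        self.balance = balance

def no_overdraft(src, dst, amount):       # a "law" the connector must obey
    return src.balance >= amount

class Connector:                          # active relation between objects
    def __init__(self, law, *objects):
        self.law, self.objects = law, objects

    def fire(self, amount):
        src, dst = self.objects
        if self.law(src, dst, amount):    # step lies on a valid trajectory
            src.balance -= amount
            dst.balance += amount
            return True
        return False                      # invalid step is rejected

a, b = Account(100), Account(0)
transfer = Connector(no_overdraft, a, b)
transfer.fire(30)
print(a.balance, b.balance)               # 70 30
```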

3.
The Structure of Locally Orderless Images
We propose a representation of images in which a global, but not a local, topology is defined. The topology is restricted to resolutions up to the extent of the local region of interest (ROI). Although the ROIs may contain many pixels, there is no spatial order on the pixels within an ROI; the only information preserved is the histogram of pixel values within each ROI. This can be considered an extreme case of a "textel" (texture element) image: the histogram is the limit of texture where the spatial order has been completely disregarded. We argue that locally orderless images are ubiquitous in perception and the visual arts. Formally, locally orderless images are most aptly described by three mutually intertwined scale spaces. The scale parameters correspond to the pixellation (inner scale), the extent of the ROIs (outer scale) and the resolution in the histogram (tonal scale). We describe how to construct locally orderless images, how to render them, and how to use them in a variety of local and global image processing operations.
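A compact sketch of the construction (ours, using the common soft-isophote reading of the three scales; the function name is invented): blur at the inner scale, soft-bin intensities at the tonal scale, then pool each bin spatially at the outer scale:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def locally_orderless(image, sigma_inner, sigma_outer, sigma_tonal, n_bins=16):
    img = gaussian_filter(image.astype(float), sigma_inner)      # inner scale
    levels = np.linspace(img.min(), img.max(), n_bins)
    # Tonal scale: soft isophote images exp(-(I - b)^2 / (2 sigma_tonal^2)).
    layers = [np.exp(-((img - b) ** 2) / (2 * sigma_tonal ** 2)) for b in levels]
    # Outer scale: spatial pooling of each isophote image over the ROI extent.
    return np.stack([gaussian_filter(l, sigma_outer) for l in layers])

rng = np.random.default_rng(0)
loi = locally_orderless(rng.random((64, 64)), 1.0, 8.0, 0.1)
print(loi.shape)   # (16, 64, 64): a local histogram at every pixel
```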

4.
Efficient and Effective Querying by Image Content
In the QBIC (Query By Image Content) project we are studying methods to query large on-line image databases using the images' content as the basis of the queries. Examples of the content we use include color, texture, shape, position, and dominant edges of image objects and regions. Potential applications include medicine ("Give me other images that contain a tumor with a texture like this one"), photo-journalism ("Give me images that have blue at the top and red at the bottom"), and many others in art, fashion, cataloging, retailing, and industry. We describe a set of novel features and similarity measures allowing query by image content, together with the QBIC system we implemented. We demonstrate the effectiveness of our system with normalized precision and recall experiments on test databases containing over 1000 images and 1000 objects, populated from commercially available photo clip-art images and from images of airplane silhouettes. We also present new methods for efficient processing of QBIC queries that consist of filtering and indexing steps. We specifically address two problems: (a) non-Euclidean distance measures; and (b) the high dimensionality of feature vectors. For the first problem, we introduce a new theorem that makes efficient filtering possible by bounding the non-Euclidean, full cross-term quadratic distance expression with a simple Euclidean distance. For the second, we illustrate how orthogonal transforms, such as Karhunen-Loève, can help reduce the dimensionality of the search space. Our methods are general and allow some false hits but no false dismissals. The resulting QBIC system offers effective retrieval using image content, and for large image databases, significant speedup over straightforward indexing alternatives. The system is implemented in X/Motif and C, running on an RS/6000.
On sabbatical from Univ. of Maryland, College Park. His work was partially supported by SRC, by the National Science Foundation under grant IRI-8958546 (PYI).
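A sketch of the filtering idea (ours; the paper's exact theorem statement is not reproduced): a full cross-term quadratic distance (x − y)ᵀA(x − y) is bounded below by λ_min(A)·‖x − y‖², so a cheap Euclidean test can discard candidates with no false dismissals:

```python
import numpy as np

def quadratic_distance(x, y, A):
    d = x - y
    return d @ A @ d                       # full cross-term distance

def euclidean_lower_bound(x, y, A):
    lam_min = np.linalg.eigvalsh(A)[0]     # smallest eigenvalue of A
    return lam_min * np.sum((x - y) ** 2)  # cheap, no cross terms

rng = np.random.default_rng(1)
B = rng.random((8, 8))
A = B @ B.T + np.eye(8)                    # symmetric positive definite
x, y = rng.random(8), rng.random(8)
assert euclidean_lower_bound(x, y, A) <= quadratic_distance(x, y, A)
# Filtering: if the lower bound already exceeds the query threshold, the
# expensive quadratic evaluation for this entry can be skipped safely.
```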

5.
This paper considers the problem of quantifying literary style and looks at several variables which may be used as stylistic "fingerprints" of a writer. A review of work done on the statistical analysis of change over time in literary style is then presented, followed by a look at a specific application area, the authorship of Biblical texts.
David Holmes is a Principal Lecturer in Statistics at the University of the West of England, Bristol, with specific responsibility for co-ordinating the research programmes in the Department of Mathematical Sciences. He has taught literary style analysis to humanities students since 1983 and has published articles on the statistical analysis of literary style in the Journal of the Royal Statistical Society, History and Computing, and Literary and Linguistic Computing. He presented papers at the ACH/ALLC conferences in 1991 and 1993.

6.
The paper introduces the concept of Computer-based Informated Environments (CBIEs) to denote an emergent form of work organisation facilitated by information technology. It first addresses the problem of inconsistent meanings of the "informate" concept in the literature, and it then focuses on those cases which, it is believed, show conditions of plausible informated environments. Finally, the paper looks at those factors that, when found together, contribute to building a CBIE. It characterizes CBIEs as workplaces embodying a non-technocentric perspective and questions whether CBIEs truly represent an anthropocentric route for information technology.

7.
3-D interpretation of optical flow by renormalization
This article studies 3-D interpretation of optical flow induced by a general camera motion relative to a surface of general shape. First, we describe, using the image sphere representation, an analytical procedure that yields an exact solution when the data are exact: we solve the epipolar equation written in terms of the essential parameters and the twisted optical flow. Introducing a simple model of noise, we then show that the solution is statistically biased. In order to remove the statistical bias, we propose an algorithm called renormalization, which automatically adjusts to unknown image noise. A brief discussion is also given to the critical surface that yields ambiguous 3-D interpretations and to the use of the image plane representation.
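A schematic of the bias-removal idea behind renormalization (our gloss in generic notation, not the paper's equations): a least-squares moment matrix built from noisy observations acquires a constant bias, which is estimated from the data and subtracted:

```latex
% Moment matrix of noisy observations \xi_\alpha and its expectation:
\hat{M} = \frac{1}{N}\sum_{\alpha=1}^{N} \xi_\alpha \xi_\alpha^{\top},
\qquad
E[\hat{M}] \approx \bar{M} + \varepsilon^{2} V .
% Renormalization estimates \hat{c} \approx \varepsilon^{2} from the data
% itself and works with the corrected matrix
\hat{M} - \hat{c}\, V ,
% choosing \hat{c} so that the smallest eigenvalue of the corrected matrix
% vanishes, as it would for noise-free data (\varepsilon = 0).
```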

8.
This paper presents a detailed study of Eurotra Machine Translation engines, namely the mainstream Eurotra software known as the E-Framework, and two unofficial spin-offs – the C,A,T and Relaxed Compositionality translator notations – with regard to how these systems handle hard cases, and in particular their ability to handle combinations of such problems. In the C,A,T translator notation, some cases of complex transfer are "wild", meaning roughly that they interact badly when presented with other complex cases in the same sentence. The effect of this is that each combination of a wild case and another complex case needs ad hoc treatment. The E-Framework is the same as the C,A,T notation in this respect; in general, the E-Framework is equivalent to the C,A,T notation for the task of transfer. The Relaxed Compositionality translator notation is able to handle each wild case (bar one exception) with a single rule, even where it appears in the same sentence as other complex cases.

9.
A variational approach to image binarization is discussed in this paper. The approach is based on surface interpolation: using edge points as interpolating points, a smooth threshold surface is computed by minimizing an energy functional. A globally convergent Sequential Relaxation Algorithm (SRA) is proposed for solving the optimization problem. Moreover, our algorithm is also formulated in a multi-scale framework. The performance of our method is demonstrated on a variety of real and synthetic images and compared with traditional techniques. Examples show that our method gives promising results.
This research is partially supported by HKBU Faculty Research Grant FRG/02-03/II-04 and an NSF of China grant. C.S. Tong received a BA degree in Mathematics and a Ph.D. degree (on mathematical modelling of intermolecular forces), both from Cambridge University. After graduation, he joined the Signal and Image Processing division of GEC-Marconi's Hirst Research Centre as a Research Scientist, working on image restoration and fractal image compression. He then moved to the Department of Mathematics at Hong Kong Baptist University in 1992, becoming Associate Professor in 2002. He is a member of the IEEE, a Fellow of the Institute of Mathematics and its Applications, and a Chartered Mathematician. His current research interests include image processing, fractal image compression, and neural networks.
Yongping Zhang received the M.S. degree from the Department of Mathematics at Shaanxi Normal University, Xi'an, China, in 1988, and the Ph.D. degree from the Institute of Artificial Intelligence and Robotics at Xi'an Jiaotong University, Xi'an, China, in 1998. In 1988 he joined the Department of Mathematics at Shaanxi Normal University, where he became Associate Professor in July 1987. He held a postdoctoral position at Northwestern Polytechnic University during the 1999–2000 academic year. Currently he is a research associate in the Bioengineering Institute at the University of Auckland, New Zealand. His research interests are in computer vision and pattern recognition, and include wavelets, neural networks, PDE methods and variational methods for image processing.
Nanning Zheng received the M.S. degree from Xi'an Jiaotong University, Xi'an, China, in 1981 and the Ph.D. degree from Keio University, Japan, in 1985. He is an academician of the Chinese Academy of Engineering, and is currently a Professor at Xi'an Jiaotong University. His research interests include signal processing, machine vision and image processing, pattern recognition, and virtual reality.
This revised version was published online in June 2005 with correction to the cover date.
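A simplified sketch of the threshold-surface idea (ours, not the authors' SRA: the energy minimization is replaced by plain Jacobi relaxation toward a harmonic surface pinned at edge points):

```python
import numpy as np

def threshold_surface(image, edge_mask, n_iter=2000):
    surf = np.full(image.shape, image.mean())
    surf[edge_mask] = image[edge_mask]        # edge points interpolate exactly
    for _ in range(n_iter):                   # relax toward a smooth surface
        avg = (np.roll(surf, 1, 0) + np.roll(surf, -1, 0) +
               np.roll(surf, 1, 1) + np.roll(surf, -1, 1)) / 4.0
        surf = np.where(edge_mask, surf, avg) # keep interpolating points fixed
    return surf

rng = np.random.default_rng(2)
img = rng.random((32, 32))
edges = np.zeros(img.shape, dtype=bool)
edges[8, :] = True                            # stand-in for detected edges
binary = img > threshold_surface(img, edges)  # binarize against the surface
print(binary.mean())
```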

10.
Concept learning in robotics is an extremely challenging problem: sensory data is often high-dimensional, and noisy due to specularities and other irregularities. In this paper, we investigate two general strategies to speed up learning, based on spatial decomposition of the sensory representation, and on simultaneous learning of multiple classes using a shared structure. We study two concept learning scenarios: a hallway navigation problem, where the robot has to induce features such as "opening" or "wall", and a recycling task, where the robot has to learn to recognize objects, such as a trash can. We use a common underlying function approximator in both studies in the form of a feedforward neural network, with several hundred input units and multiple output units. Despite the high degree of freedom afforded by such an approximator, we show that the two strategies provide sufficient bias to achieve rapid learning. We provide detailed experimental studies on an actual mobile robot called PAVLOV to illustrate the effectiveness of this approach.
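A toy sketch of the shared-structure strategy (ours; the dimensions and the task are hypothetical, not the robot's): one hidden layer is shared by several class outputs, so every class's training signal shapes the common representation:

```python
import numpy as np

rng = np.random.default_rng(3)
n_in, n_hidden, n_classes = 256, 32, 4           # e.g. a coarse sensor grid
W1 = rng.normal(0, 0.1, (n_in, n_hidden))        # shared hidden weights
W2 = rng.normal(0, 0.1, (n_hidden, n_classes))   # one output unit per class

def forward(x):
    h = np.tanh(x @ W1)                           # shared representation
    return h, 1.0 / (1.0 + np.exp(-(h @ W2)))     # sigmoid output per class

def sgd_step(x, y, lr=0.1):
    global W1, W2
    h, p = forward(x)
    d_out = p - y                                 # cross-entropy gradient
    d_h = (d_out @ W2.T) * (1 - h ** 2)           # backprop through tanh
    W2 -= lr * np.outer(h, d_out)
    W1 -= lr * np.outer(x, d_h)                   # all classes update W1

x, y = rng.random(n_in), np.array([1.0, 0.0, 0.0, 1.0])
for _ in range(100):
    sgd_step(x, y)
print(np.round(forward(x)[1], 2))                 # moves toward the target
```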

11.
Linear scale-space
The formulation of a front-end or early vision system is addressed, and its connection with scale-space is shown. A front-end vision system is designed to establish a convenient format of some sampled scalar field, which is suited for postprocessing by various dedicated routines. The emphasis is on the motivations and implications of symmetries of the environment; they pose natural, a priori constraints on the design of a front-end.
The focus is on static images, defined on a multidimensional spatial domain, for which it is assumed that there are no a priori preferred points, directions, or scales. In addition, the front-end is required to be linear. These requirements are independent of any particular image geometry and express the front-end's purely syntactical, bottom-up nature.
It is shown that these symmetries suffice to establish the functionality properties of a front-end. For each location in the visual field and each inner scale it comprises a hierarchical family of tensorial apertures, known as the Gaussian family, the lowest order of which is the normalised Gaussian. The family can be truncated at any given order in a consistent way. The resulting set constitutes a basis for a local jet bundle. Note that scale-space theory shows up here without any call upon the prohibition of spurious detail, which, in some way or another, usually forms the basic starting point for diffusion-like scale-space theories.
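For concreteness, the lowest-order member of the Gaussian family and its derived apertures can be written as follows (standard scale-space notation, ours rather than the paper's):

```latex
% Normalised Gaussian on a D-dimensional spatial domain at inner scale \sigma:
G(\mathbf{x};\sigma) = \frac{1}{(2\pi\sigma^{2})^{D/2}}
    \exp\!\left(-\frac{\lVert\mathbf{x}\rVert^{2}}{2\sigma^{2}}\right).
% The hierarchical family of tensorial apertures consists of its partial
% derivatives, which sample the local jet at scale \sigma:
G_{i_{1}\cdots i_{n}}(\mathbf{x};\sigma)
    = \partial_{i_{1}}\!\cdots\partial_{i_{n}} G(\mathbf{x};\sigma).
```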

12.
Suppose a directed graph has its arcs stored in secondary memory, and we wish to compute its transitive closure, also storing the result in secondary memory. We assume that an amount of main memory capable of holding s values is available, and that s lies between n, the number of nodes of the graph, and e, the number of arcs. The cost measure we use for algorithms is the I/O complexity of Kung and Hong, where we count 1 every time a value is moved into main memory from secondary memory, or vice versa.
In the dense case, where e is close to n², we show that I/O equal to O(n³/√s) is sufficient to compute the transitive closure of an n-node graph, using main memory of size s. Moreover, it is necessary for any algorithm that is "standard", in a sense defined precisely in the paper. Roughly, "standard" means that paths are constructed only by concatenating arcs and previously discovered paths. For the sparse case, we show that I/O equal to O(n²√(e/s)) is sufficient, although the algorithm we propose meets our definition of standard only if the underlying graph is acyclic. We also show that Ω(n²√(e/s)) is necessary for any standard algorithm in the sparse case. That settles the I/O complexity of the sparse/acyclic case, for standard algorithms. It is unknown whether this complexity can be achieved in the sparse, cyclic case by a standard algorithm, and it is unknown whether the bound can be beaten by nonstandard algorithms.
We then consider a special kind of standard algorithm, in which paths are constructed only by concatenating arcs and old paths, never by concatenating two old paths. This restriction seems essential if we are to take advantage of sparseness. Unfortunately, we show that almost another factor of n I/O is necessary. That is, there is an algorithm in this class using I/O O(n³√(e/s)) for arbitrary sparse graphs, including cyclic ones. Moreover, every algorithm in the restricted class must use Ω(n³√(e/s)/log³ n) I/O on some cyclic graphs.
The work of this author was partially supported by NSF grant IRI-87-22886, IBM contract 476816, Air Force grant AFOSR-88-0266 and a Guggenheim fellowship.
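To make the "standard" path-construction discipline concrete, here is a small in-memory sketch (ours; it deliberately ignores the paper's central concern, the blocking of data for secondary memory) in which every new path is an arc concatenated with a previously discovered path:

```python
def transitive_closure(arcs):
    closure = set(arcs)
    frontier = set(arcs)
    while frontier:
        new = {(u, w)
               for (u, v) in arcs        # one arc ...
               for (x, w) in frontier    # ... concatenated with an old path
               if v == x and (u, w) not in closure}
        closure |= new
        frontier = new                   # semi-naive: only fresh paths extend
    return closure

print(sorted(transitive_closure({(1, 2), (2, 3), (3, 4)})))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```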

13.
The language of standard propositional modal logic has one operator (□ or ◇) that can be thought of as being determined by the quantifiers ∀ or ∃, respectively: for example, a formula of the form □φ is true at a point s just in case all the immediate successors of s verify φ. This paper uses a propositional modal language with one operator determined by a generalized quantifier to discuss a simple connection between standard invariance conditions on modal formulas and generalized quantifiers: the combined generalized quantifier conditions of conservativity and extension correspond to the modal condition of invariance under generated submodels, and the modal condition of invariance under bisimulations corresponds to the generalized quantifier being a Boolean combination of ∀ and ∃.
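Our reconstruction of the standard definitions involved (notation ours): the truth clause for a modality determined by a generalized quantifier Q, and the conservativity and extension conditions for a type <1,1> quantifier:

```latex
% Truth clause, with R[s] the set of immediate successors of s:
M, s \models [Q]\varphi
  \;\Longleftrightarrow\;
  Q\bigl(R[s],\, \{\, t : M, t \models \varphi \,\}\bigr).
% Conservativity and extension:
\mathrm{CONS}:\; Q_{M}(A, B) \Leftrightarrow Q_{M}(A, A \cap B),
\qquad
\mathrm{EXT}:\; A, B \subseteq M \subseteq M' \;\Rightarrow\;
  \bigl(Q_{M}(A, B) \Leftrightarrow Q_{M'}(A, B)\bigr).
```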

14.
Domain truncation is the simple strategy of solving problems on y ∈ (−∞, ∞) by using a large but finite computational interval, [−L, L]. Since u(y) is not a periodic function, spectral methods have usually employed a basis of Chebyshev polynomials, T_n(y/L). In this note, we show that because u(±L) must be very, very small if domain truncation is to succeed, it is always more efficient to apply a Fourier expansion instead. Roughly speaking, it requires about 100 Chebyshev polynomials to achieve the same accuracy as 64 Fourier terms. The Fourier expansion of a rapidly decaying but nonperiodic function on a large interval is also a dramatic illustration of the care that is necessary in applying asymptotic coefficient analysis: the behavior of the Fourier coefficients in the limit n → ∞ for fixed interval L is never relevant or significant in this application.
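A quick numerical illustration of the claim (our experiment, not the note's; the test function u, the interval L and the truncation N are chosen arbitrarily): compare an N-coefficient Chebyshev interpolant with an N-mode Fourier truncation of a decaying function on [−L, L]:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

L, N = 8.0, 32
u = lambda y: np.exp(-y ** 2)            # rapidly decaying, nonperiodic

# Chebyshev interpolant of degree N-1 on [-L, L] (mapped to [-1, 1]).
cheb = C.chebinterpolate(lambda t: u(L * t), N - 1)
y = np.linspace(-L, L, 2001)
err_cheb = np.max(np.abs(u(y) - C.chebval(y / L, cheb)))

# Trigonometric series truncated to the first N modes on a periodic grid.
M = 512
yg = -L + 2 * L * np.arange(M) / M
coef = np.fft.rfft(u(yg))
coef[N:] = 0.0                           # keep only the first N modes
err_fourier = np.max(np.abs(u(yg) - np.fft.irfft(coef, M)))

print(f"Chebyshev, N={N}: {err_cheb:.1e};  Fourier, N={N}: {err_fourier:.1e}")
```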

15.
Earlier work on scheduling by autonomous systems has demonstrated that schedules in the form of simple temporal networks, with intervals of values for possible event times, can be made dispatchable, i.e., executable incrementally in real time with guarantees against failure due to unfortunate event-time selections. In this work we show how the property of dispatchability can be extended to networks that include constraints on consumable resources. We first determine conditions for ensuring that resource use does not exceed capacity under dispatchable execution for a single sequence of activities, or "bout", involving one resource. Then we show how to handle interactions between resource and temporal constraints to ensure dispatchability, how to enhance flexibility of resource use under these conditions, and how to handle multiple bouts interspersed with instances of resource release. Finally, we consider methods for establishing the necessary dispatchability conditions during schedule creation (the planning stage). The results demonstrate that flexible handling of resource use can be safely extended to the execution layer to provide more effective deployment of consumable resources.

16.
On Bounding Solutions of Underdetermined Systems
Sufficient conditions are given for the existence and uniqueness of a solution x* ∈ D (⊂ ℝⁿ) of f(x) = 0, where f: ℝⁿ → ℝᵐ (m ≤ n) with f ∈ C²(D), D ⊂ ℝⁿ is an open convex set, and Y = f′(x)⁺; they are compared with similar results due to Zhang, Li and Shen (Reliable Computing 5(1) (1999)). An algorithm for bounding zeros of f(·) is described, and numerical results for several examples are given.
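A sketch of the Newton-like iteration that the pseudo-inverse suggests (ours; reading Y = f′(x)⁺ as the Moore-Penrose pseudo-inverse of the Jacobian is our assumption, and the paper's existence conditions and bounding algorithm are not reproduced), on a toy underdetermined system with m = 1 < n = 2:

```python
import numpy as np

def f(x):                                  # one equation, two unknowns
    return np.array([x[0] ** 2 + x[1] ** 2 - 1.0])

def jac(x):                                # 1 x 2 Jacobian f'(x)
    return np.array([[2 * x[0], 2 * x[1]]])

x = np.array([2.0, 1.0])
for _ in range(20):
    x = x - np.linalg.pinv(jac(x)) @ f(x)  # x_{k+1} = x_k - f'(x_k)^+ f(x_k)
print(x, f(x))                             # converges to a point on the circle
```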

17.
The AI methodology of qualitative reasoning furnishes useful tools to scientists and engineers who need to deal with incomplete system knowledge during design, analysis, or diagnosis tasks. Qualitative simulators have a theoretical soundness guarantee: they cannot overlook any concrete equation implied by their input. On the other hand, the basic qualitative simulation algorithms have been shown to suffer from the incompleteness problem: they may allow non-solutions of the input equation to appear in their output. The question of whether a simulator with purely qualitative input that never predicts spurious behaviors can ever be achieved by adding new filters to the existing algorithm has remained unanswered. In this paper, we show that, if such a sound and complete simulator exists, it will have to be able to handle numerical distinctions with such high precision that it must contain a component that would better be called a quantitative, rather than qualitative, reasoner. This is due to the ability of the pure qualitative format to allow the exact representation of the members of a rich set of numbers.
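A tiny illustration (ours) of where spurious behaviors come from: in pure sign algebra the sum of a positive and a negative quantity has an undetermined sign, so a qualitative simulator must keep every branch, including non-solutions:

```python
def q_add(a, b):
    """Sign-algebra addition over {'-', '0', '+'}; '?' means unknown sign."""
    if a == "0":
        return b
    if b == "0":
        return a
    if a == b:
        return a
    return "?"               # "+" plus "-" could be -, 0, or + qualitatively

print(q_add("+", "+"))       # +
print(q_add("+", "-"))       # ?  every continuation consistent with "?" is
                             #    simulated, including spurious ones
```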

18.
In this essay I will consider two theses that are associated with Frege, and will investigate the extent to which Frege really believed them. Much of what I have to say will come as no surprise to scholars of the historical Frege. But Frege is not only a historical figure; he also occupies a site on the philosophical landscape that has allowed his doctrines to seep into the subconscious water table. And scholars in a wide variety of different scholarly establishments then sip from these doctrines. I believe that some Frege-interested philosophers at various of these establishments might find my conclusions surprising.
Some of these philosophical establishments have arisen from an educational milieu in which Frege is associated with some specific doctrine at the expense of not even being aware of other milieux where other specific doctrines are given sole prominence. The two theses which I will discuss illustrate this point. Each of them is called "Frege's Principle", but by philosophers from different milieux. By calling them milieux I do not want to convey the idea that they are each located at some specific socio-politico-geographico-temporal location. Rather, it is a matter of their each being located at different places on the intellectual landscape. For this reason one might (and I sometimes will) call them (interpretative) traditions.

19.
Regions-of-Interest and Spatial Layout for Content-Based Image Retrieval
To date, most content-based image retrieval (CBIR) techniques rely on global attributes such as color or texture histograms, which tend to ignore the spatial composition of the image. In this paper, we present an alternative image retrieval system based on the principle that it is the user, and not the computer, who is most qualified to specify the query content. With our system, the user can select multiple regions-of-interest and can specify the relevance of their spatial layout in the retrieval process. We also derive similarity bounds on histogram distances for pruning the database search. This experimental system was found to be superior to global indexing techniques, as measured by statistical sampling of multiple users' satisfaction ratings.
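A minimal sketch of region-based matching (ours, not the paper's system; the box coordinates and bin count are arbitrary): each user-selected ROI is compared by its histogram at the same image position, which is what lets spatial layout matter:

```python
import numpy as np

def roi_histogram(image, box, n_bins=16):
    r0, r1, c0, c1 = box
    h, _ = np.histogram(image[r0:r1, c0:c1], bins=n_bins, range=(0.0, 1.0))
    return h / max(h.sum(), 1)             # normalized region histogram

def roi_distance(query, target, boxes):
    """Mean L1 histogram distance over the user's regions-of-interest."""
    return float(np.mean([np.abs(roi_histogram(query, b) -
                                 roi_histogram(target, b)).sum()
                          for b in boxes]))

rng = np.random.default_rng(4)
q, t = rng.random((64, 64)), rng.random((64, 64))
print(roi_distance(q, t, [(0, 32, 0, 32), (32, 64, 32, 64)]))
```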

20.
In this paper we propose two new multilayer grid models for VLSI layout, both of which take into account the number of contact cuts used. For the first model, in which nodes exist only on one layer, we prove a tight area × (number of contact cuts) = Θ(n²) tradeoff for embedding n-node planar graphs of bounded degree in two layers. For the second model, in which nodes exist simultaneously on all layers, we give a number of upper bounds on the area needed to embed graphs using no contact cuts. We show that any n-node graph of thickness 2 can be embedded on two layers in O(n²) area. This bound is tight even if more layers and any number of contact cuts are allowed. We also show that planar graphs of bounded degree can be embedded on two layers in O(n^{3/2}(log n)²) area.
Some of our embedding algorithms have the additional property that they can respect prespecified grid placements of the nodes of the graph to be embedded. We give an algorithm for embedding n-node graphs of thickness k in k layers using O(n³) area, using no contact cuts, and respecting prespecified node placements. This area is asymptotically optimal for placement-respecting algorithms, even if more layers are allowed, as long as a fixed fraction of the edges do not use contact cuts. Our results use a new result on embedding graphs in a single-layer grid, namely an embedding of n-node planar graphs such that each edge makes at most four turns, and all nodes are embedded on the same line.
The first author's research was partially supported by NSF Grant No. MCS 820-5167.
