Similar Documents
20 similar documents found (search time: 31 ms)
1.
We describe a mechanism called SpaceGlue for adaptively locating services based on the preferences and locations of users in a distributed and dynamic network environment. In SpaceGlue, services are bound to physical locations, and a mobile user accesses local services depending on the current space he/she is visiting. SpaceGlue dynamically identifies the relationships between different spaces and links or “glues” spaces together depending on how previous users moved among them and used those services. Once spaces have been glued, users receive a recommendation of remote services (i.e., services provided in a remote space) reflecting the preferences of the crowd of users visiting the area. The strengths of bonds are implicitly evaluated by users and adjusted by the system on the basis of their evaluation. SpaceGlue is an alternative to existing schemes such as data mining and recommendation systems, and it is suitable for distributed and dynamic environments. The bonding algorithm for SpaceGlue incrementally computes the relationships or “bonds” between different spaces in a distributed way. We implemented SpaceGlue using the distributed network application platform Ja-Net and evaluated it by simulation to show that it adaptively locates services reflecting trends in user preferences. Using “Mutual Information (MI)” and “F-measure” to indicate the level of such trends and the accuracy of service recommendation, the simulation results showed that (1) in SpaceGlue, the F-measure increases with the level of MI (i.e., the more significant the trends, the greater the F-measure values), (2) SpaceGlue achieves better precision and F-measure than the “flooding” case (i.e., all service information is broadcast to everybody) and the “no glue” case by narrowing down, on the basis of bonds, the partners to which recommendations are sent, and (3) SpaceGlue achieves a better F-measure with large numbers of spaces and users than the other cases (i.e., “flooding” and “no glue”).
Tomoko Itao is an alumna of NTT Network Innovation Laboratories.
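The incremental bonding idea can be sketched as follows. The update rule, learning rate, and method names here are illustrative assumptions for exposition, not the actual SpaceGlue algorithm.

```python
class BondTable:
    """Tracks bond strengths between pairs of spaces (a sketch of the
    "gluing" idea: bonds strengthen as users move between spaces and
    use services there)."""

    def __init__(self, learning_rate=0.1):
        self.bonds = {}          # (space_a, space_b) -> strength in [0, 1]
        self.lr = learning_rate

    def _key(self, a, b):
        return tuple(sorted((a, b)))

    def observe_move(self, src, dst, service_used=True):
        """Strengthen the bond when a user moves src -> dst and uses a
        service there; decay it toward zero otherwise."""
        k = self._key(src, dst)
        s = self.bonds.get(k, 0.0)
        target = 1.0 if service_used else 0.0
        self.bonds[k] = s + self.lr * (target - s)

    def recommend_partners(self, space, threshold=0.2):
        """Spaces glued to `space` strongly enough to exchange
        recommendations (the "narrowing down" step)."""
        out = []
        for (a, b), s in self.bonds.items():
            if s >= threshold and space in (a, b):
                out.append(b if a == space else a)
        return out
```

The point of the exponential update is that bonds both form and fade incrementally, so recommendations track current movement trends rather than all history equally.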

2.
The minority game (MG) comes from the so-called “El Farol bar” problem of W.B. Arthur. The underlying idea is competition for limited resources, and it can be applied to different fields such as stock markets, alternative roads between two locations, and, in general, problems in which the players in the “minority” win. Players in this game use a window of the global history for making their decisions; we propose a neural-network approach with learning algorithms to determine the players' strategies. We use three different algorithms to generate the sequence of minority decisions and consider the prediction power of a neural network that uses the Hebbian algorithm. The case of randomly generated sequences is also studied. Research supported by Local Project 2004–2006 (EX 40%) Università di Foggia. A. Sfrecola is a researcher financially supported by Dipartimento di Scienze Economiche, Matematiche e Statistiche, Università degli Studi di Foggia, Foggia, Italy.
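A minimal minority-game round can be simulated as below. This sketch uses one random lookup-table strategy per player rather than the neural-network players of the paper; player counts and parameters are illustrative.

```python
import random
from itertools import product

def play_minority_game(n_players=101, window=3, rounds=200, seed=0):
    """Simulate the minority game: each round, every player picks 0 or 1
    from its strategy table keyed on the last `window` minority outcomes;
    the side with fewer players wins. n_players is odd, so no ties."""
    rng = random.Random(seed)
    # one fixed random strategy per player: history tuple -> action
    strategies = [
        {h: rng.randint(0, 1) for h in product((0, 1), repeat=window)}
        for _ in range(n_players)
    ]
    history = tuple(rng.randint(0, 1) for _ in range(window))
    minority_seq = []
    for _ in range(rounds):
        choices = [s[history] for s in strategies]
        ones = sum(choices)
        minority = 1 if ones < n_players - ones else 0
        minority_seq.append(minority)
        history = history[1:] + (minority,)   # slide the history window
    return minority_seq
```

The returned minority sequence is the kind of time series whose predictability a learner (e.g. a Hebbian perceptron) can then be tested against.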

3.
Perhaps the most flexible synopsis of a database is a uniform random sample of the data; such samples are widely used to speed up processing of analytic queries and data-mining tasks, enhance query optimization, and facilitate information integration. The ability to bound the maximum size of a sample can be very convenient from a system-design point of view, because the task of memory management is simplified, especially when many samples are maintained simultaneously. In this paper, we study methods for incrementally maintaining a bounded-size uniform random sample of the items in a dataset in the presence of an arbitrary sequence of insertions and deletions. For “stable” datasets whose size remains roughly constant over time, we provide a novel sampling scheme, called “random pairing” (RP), that maintains a bounded-size uniform sample by using newly inserted data items to compensate for previous deletions. The RP algorithm is the first extension of the 45-year-old reservoir sampling algorithm to handle deletions; RP reduces to the “passive” algorithm of Babcock et al. when the insertions and deletions correspond to a moving window over a data stream. Experiments show that, when dataset-size fluctuations over time are not too extreme, RP is the algorithm of choice with respect to speed and sample-size stability. For “growing” datasets, we consider algorithms for periodically resizing a bounded-size random sample upwards. We prove that any such algorithm cannot avoid accessing the base data, and provide a novel resizing algorithm that minimizes the time needed to increase the sample size. We also show how to merge uniform samples from disjoint datasets to obtain a uniform sample of the union of the datasets; the merged sample can be incrementally maintained. Our new RPMerge algorithm extends the HRMerge algorithm of Brown and Haas to effectively deal with deletions, thereby facilitating efficient parallel sampling.
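The compensation idea behind random pairing can be sketched as follows; this is a simplified illustration of the scheme described above, not the paper's exact algorithm, and counter names are my own.

```python
import random

class RandomPairingSample:
    """Bounded-size sample under insertions and deletions: deletions are
    remembered in two counters, and later insertions are "paired" with
    them instead of running the reservoir step."""

    def __init__(self, bound, seed=None):
        self.bound = bound
        self.sample = []
        self.n = 0        # current dataset size
        self.c_in = 0     # uncompensated deletions that hit the sample
        self.c_out = 0    # uncompensated deletions that missed the sample
        self.rng = random.Random(seed)

    def insert(self, item):
        self.n += 1
        d = self.c_in + self.c_out
        if d == 0:
            # no deletions to compensate: classic reservoir-style step
            if len(self.sample) < self.bound:
                self.sample.append(item)
            elif self.rng.random() < self.bound / self.n:
                self.sample[self.rng.randrange(self.bound)] = item
        else:
            # pair the insertion with a previous deletion
            if self.rng.random() < self.c_in / d:
                self.sample.append(item)
                self.c_in -= 1
            else:
                self.c_out -= 1

    def delete(self, item):
        self.n -= 1
        if item in self.sample:
            self.sample.remove(item)
            self.c_in += 1
        else:
            self.c_out += 1
```

The key design point is that after a deletion the sample is *not* refilled from the base data; the next insertion takes the vacated slot with the appropriate probability, which is what keeps the sample uniform without base-data access.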

4.
The electromagnetic field commutation relations are defined in terms of geometric factors that are double averages over two finite four-dimensional space-time regions. The square root of any of the uncertainty relations derived from the aforementioned commutators is taken as a critical field, in the sense that any electromagnetic field much larger than it can be treated as classical. Another critical electromagnetic field, associated with the quantum information control of vacuum fluctuations, can be chosen as the square root of the mean quadratic fluctuation of each electromagnetic field quantity when the number of photons is defined and equal to zero. Any electromagnetic field expectation value could be measured if it is much greater than the latter critical field. This article presents an order-of-magnitude comparison between the critical fields and discusses its consequences for measuring electromagnetic field information. Presented at the 38th Symposium on Mathematical Physics “Quantum Entanglement & Geometry”, Toruń, June 4-7, 2006.

5.
The individual differences in the repeat count of several bases, short tandem repeat (STR), among all of the deoxyribonucleic acid (DNA) base sequences, can be used as unique DNA information for a personal identification (ID). We propose a method to generate a personal identifier (hereafter referred to as a “DNA personal ID”) by specifying multiple STR locations (called “loci”) and then sequencing the repeat count information. We also conducted a validation experiment to verify the proposed principle based on actual DNA data. We verified that the matching probability of DNA personal IDs becomes exponentially smaller, to about 10^-n, as n loci are used, and that no correlation exists among the loci. Next, we considered the various issues that will be encountered when applying DNA personal IDs to information security systems, such as biometric personal authentication systems. Published online: 9 April 2002
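The exponential shrinkage follows directly from the independence of loci: the combined matching probability is the product of per-locus probabilities. The per-locus value below (1/10) is an illustrative assumption, not a figure from the experiment.

```python
def combined_match_probability(per_locus_probs):
    """Probability that two unrelated individuals share the same repeat
    counts at every locus, assuming loci are statistically independent
    (as the validation experiment above indicates)."""
    p = 1.0
    for q in per_locus_probs:
        p *= q
    return p

# With roughly 1/10 match probability per locus, n loci give about 10^-n.
five_locus_id = combined_match_probability([0.1] * 5)
```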

6.
 The analysis of the measurement uncertainty of piezoresistive sensors should be performed according to the ISO “Guide to the Expression of Uncertainty in Measurement” (GUM) and the requirement document EAL-R2 of the “European co-operation for Accreditation of Laboratories”. In this work, the derivation of a model of the physical relationships between quantities in the respective measurement is demonstrated for the case of a piezoresistive sensor. The model is used to evaluate the uncertainty in measurement of semiconductor piezoresistive sensors. Received: 29 June 2001/Accepted: 25 July 2001 This work was supported by the Centre of Postgraduate Studies “Sensorics” at Dresden University of Technology funded by the German Research Council (DFG). This paper was presented at the Conference of Micro System Technologies 2001 in March 2001.
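For uncorrelated input quantities, the GUM combines the input uncertainties through the sensitivity coefficients of the measurement model. The sketch below shows that standard propagation formula; the coefficient and uncertainty values in the test are illustrative, not the sensor model of the paper.

```python
import math

def combined_standard_uncertainty(sensitivities, uncertainties):
    """GUM law of propagation of uncertainty for uncorrelated inputs:
    u_c(y) = sqrt( sum_i (df/dx_i * u(x_i))**2 ),
    where sensitivities[i] is the partial derivative df/dx_i of the
    measurement model evaluated at the input estimates."""
    return math.sqrt(sum((c * u) ** 2
                         for c, u in zip(sensitivities, uncertainties)))
```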

7.
We present a new approach to the tracking of very non-rigid patterns of motion, such as water flowing down a stream. The algorithm is based on a “disturbance map”, which is obtained by linearly subtracting the temporal average of the previous frames from the new frame. Every local motion creates a disturbance having the form of a wave, with a “head” at the present position of the motion and a historical “tail” that indicates the previous locations of that motion. These disturbances serve as loci of attraction for “tracking particles” that are scattered throughout the image. The algorithm is very fast and can be performed in real time. We provide excellent tracking results on various complex sequences, using both stabilized and moving cameras, showing a busy ant column, waterfalls, rapids and flowing streams, shoppers in a mall, and cars in a traffic intersection. Received: 24 June 1997 / Accepted: 30 July 1998
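The disturbance-map computation itself is simple: subtract a running temporal average from each new frame. The sketch below works on frames as flat lists of gray values; the decay factor is an assumed parameter, not one specified in the paper.

```python
class DisturbanceTracker:
    """Maintains an exponentially decaying temporal average of past
    frames and returns the "disturbance map" (new frame minus average)."""

    def __init__(self, decay=0.9):
        self.decay = decay   # weight of the old average in the update
        self.avg = None

    def update(self, frame):
        if self.avg is None:
            self.avg = list(frame)
            return [0.0] * len(frame)     # no history yet, no disturbance
        disturbance = [f - a for f, a in zip(frame, self.avg)]
        # fold the new frame into the running average
        self.avg = [self.decay * a + (1 - self.decay) * f
                    for f, a in zip(frame, self.avg)]
        return disturbance
```

A moving bright spot produces a positive "head" at its new position and, because the average decays slowly, a fading "tail" at its old positions.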

8.
Fire spread modelling in landscape fire succession models needs to improve to handle uncertainty under global change processes and the resulting impact on forest systems. Linking fire spread patterns to synoptic-scale weather situations is a promising approach to simulating fire spread without fine-grained weather data. Here we present MedSpread—a model that evaluates the weights of five landscape factors in fire spread performance. We readjusted the factor weights for convective, topography-driven and wind-driven fires (n = 123) and re-assessed each fire spread group's performance against seven other control simulations. Results show that for each of the three fire spread patterns, some landscape factors exert a higher influence on fire spread simulation than others. We also found strong evidence that separating fires by fire spread pattern improves model performance. This study shows a promising link between relevant fire weather information, fire spread and fire regime simulation under global change processes.

9.
 Environmental data are in general imprecise and uncertain, but they are located in space and therefore obey spatial constraints. “Spatial analysis” is a (natural) reasoning process through which geographers take advantage of these constraints to reduce this uncertainty and to improve their beliefs. Trying to automate this process is a really hard problem. We propose here the design of a revision operator able to perform a spatial analysis in the context of one particular “application profile”: it identifies objects bearing a same variable bound through local constraints. The formal background on which this operator is built is a decision algorithm from Reiter [9]; the heuristics that make this algorithm tractable in a full-scale application are special patterns for clauses and the “spatial confinement” of conflicts. This operator is “anytime”: because it uses “samples” and works on small (tractable) blocks, then reaggregates the partial revision results on larger blocks, we name it a “hierarchical block revision” operator. Finally we illustrate a particular application: a flooding propagation. Of course this is among possible approaches of “soft computing” for geographic applications. On leave at: Centre de Recherche en Géomatique Pavillon Casault, Université Laval Québec, Qc, Canada – G1K 7P4 Université de Toulon et du Var, Avenue de l'Université, BP 132, 83957 La Garde Cedex, France This work is currently supported by the European Community under the IST-1999-14189 project.

10.
The current work is focused on the implementation of a robust multimedia application for watermarking digital images, which is based on an innovative spread spectrum analysis algorithm for watermark embedding and on a content-based image retrieval technique for watermark detection. The existing highly robust watermark algorithms apply “detectable watermarks”, for which a detection mechanism checks whether the watermark exists or not (a Boolean decision) based on a watermarking key. The problem is that detecting a watermark in a digital image library containing thousands of images requires the watermark detection algorithm to apply all the keys to the digital images, which is inefficient for very large image databases. On the other hand, “readable” watermarks may prove weaker but easier to detect, as only the detection mechanism is required. The proposed watermarking algorithm combines the advantages of both “detectable” and “readable” watermarks. The result is a fast and robust multimedia application which has the ability to cast readable multibit watermarks into digital images. The watermarking application is capable of hiding 2^14 different keys into digital images and casting multiple zero-bit watermarks onto the same coefficient area while maintaining a sufficient level of robustness.
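A generic additive spread-spectrum watermark with correlation detection can be sketched as below. This is the textbook technique the abstract builds on, not the paper's combined detectable/readable scheme; function names and the strength parameter are assumptions.

```python
import random

def embed(coeffs, key, strength=2.0):
    """Add a key-seeded pseudo-random +/-1 chip sequence, scaled by
    `strength`, to the transform coefficients of an image."""
    rng = random.Random(key)
    chips = [rng.choice((-1.0, 1.0)) for _ in coeffs]
    return [c + strength * w for c, w in zip(coeffs, chips)]

def detect(coeffs, key):
    """Normalised correlation with the key's chip sequence: a value near
    the embedding strength indicates the watermark is present; a value
    near zero indicates it is absent."""
    rng = random.Random(key)
    chips = [rng.choice((-1.0, 1.0)) for _ in coeffs]
    return sum(c * w for c, w in zip(coeffs, chips)) / len(coeffs)
```

Because each key needs its own correlation pass, a purely "detectable" scheme must try every key against every image, which is exactly the scalability problem the abstract points out.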

11.
Opinion helpfulness prediction in the presence of “words of few mouths”
This paper identifies a widely existing phenomenon in social media content, which we call the “words of few mouths” phenomenon. This phenomenon challenges the development of recommender systems based on users’ online opinions by presenting additional sources of uncertainty. In the context of predicting the “helpfulness” of a review document based on users’ online votes on other reviews (where a user’s vote on a review is either HELPFUL or UNHELPFUL), the “words of few mouths” phenomenon corresponds to the case where a large fraction of the reviews are each voted on by very few users. Focusing on the “review helpfulness prediction” problem, we illustrate the challenges associated with the “words of few mouths” phenomenon in the training of a review helpfulness predictor. We advocate probabilistic approaches for recommender system development in the presence of “words of few mouths”. More concretely, we propose a probabilistic metric as the training target for conventional machine-learning-based predictors. Our empirical study using Support Vector Regression (SVR) augmented with the proposed probabilistic metric demonstrates the advantages of incorporating probabilistic methods in the training of the predictors. In addition to this “partially probabilistic” approach, we also develop a logistic regression based probabilistic model and correspondingly a learning algorithm for review helpfulness prediction. We demonstrate experimentally the superior performance of the logistic regression method over SVR, the prior art in review helpfulness prediction.
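One standard way to build a probabilistic training target for sparsely voted reviews is Beta-prior smoothing of the helpfulness ratio. The paper's actual metric is not specified here, so the formula below is an illustrative assumption of the general idea, not the authors' definition.

```python
def smoothed_helpfulness(helpful, unhelpful, prior_a=1.0, prior_b=1.0):
    """Posterior-mean helpfulness under a Beta(prior_a, prior_b) prior:
    with few votes the estimate shrinks toward the prior mean instead of
    trusting a 1-of-1 vote, which is the "words of few mouths" hazard."""
    return (helpful + prior_a) / (helpful + unhelpful + prior_a + prior_b)
```

A review with a single HELPFUL vote gets 2/3 rather than a raw ratio of 1.0, while a review with 90 of 100 helpful votes stays close to 0.9: the target reflects vote counts as well as vote proportions.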

12.
This work is part of a project aimed to develop automotive real-time observers based on detailed nonlinear multibody models and the extended Kalman filter (EKF). In previous works, a four-bar mechanism was studied to get insight into the problem. Regarding the formulation of the equations of motion, it was concluded that the state-space reduction method known as matrix-R is the most suitable one for this application. Regarding the sensors, it was shown that better stability, accuracy and efficiency are obtained when the sensored magnitude is a lower derivative and when it is a generalized coordinate of the problem. In the present work, the automotive problem has been addressed through the selection of a Volkswagen Passat as a case study. A model of the car containing fifteen degrees of freedom has been developed. The observer algorithm that combines the equations of motion and the integrator has been reformulated so that duplication of the problem size is avoided, in order to improve efficiency. A maneuver of acceleration from rest and double lane change has been defined, and tests have been run for the “prototype,” the “model” and the “observer,” all three computational, with the model having 100 kg more than the prototype. Results have shown that good convergence is obtained for position-level sensors, but the computational cost is high, still far from real-time performance.
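The observer above is built on the generic EKF predict/update cycle. The scalar sketch below shows only that skeleton, with user-supplied model functions; the multibody dynamics, matrix-R reduction, and sensor models of the paper are far richer than this.

```python
def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    """One scalar extended-Kalman-filter step.
    f, h: nonlinear state-transition and measurement functions;
    F, H: their derivatives (Jacobians, scalar here);
    Q, R: process and measurement noise variances."""
    # predict
    x_pred = f(x, u)
    P_pred = F(x, u) * P * F(x, u) + Q
    # update with measurement z
    y = z - h(x_pred)              # innovation
    S = H(x_pred) * P_pred * H(x_pred) + R
    K = P_pred * H(x_pred) / S     # Kalman gain
    x_new = x_pred + K * y
    P_new = (1 - K * H(x_pred)) * P_pred
    return x_new, P_new
```

In the vehicle observer, `f` is the integrator applied to the reduced multibody equations and `z` collects the onboard sensor readings; the finding quoted above is that `z` measuring positions (low derivatives of generalized coordinates) converges best.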

13.
A framework for “improvisational” social acts and communication is introduced by referring to the idea of “relationalism” underlying natural farming, permaculture and deep ecology. Based on this conception, the notion of the Existential Graph by C. S. Peirce is introduced. The notion of the extended self in deep ecology is substantiated based on the Roy Adaptation Model in Nursing Theory and narrative approaches. By focusing on Leibnizian notions of space and time and by introducing Petri nets, a spatio-temporal model of improvisation is constructed. This model is expected to substantiate the interesting notion of “Ba” proposed by H. Shimizu, reflecting Japanese culture.

14.
The capability-based distributed layout approach was first proposed by Baykasoğlu (Int J Prod Res 41, 2597–2618, 2003) for job shops which are working under highly volatile manufacturing environments, in order to avoid high reconfiguration costs. It was argued that the capability-based distributed layout can also be a valid (or better) option for “classical functional layouts”, which generally operate under “high variety” and “low-stable demand”. In this paper, the capability-based distributed layout approach and related issues are first reviewed and discussed; afterwards, the performance of the “Capability-Based Distributed Layout” (CB-DL) is tested via extensive simulation experiments. The simulation experiments show that the capability-based distributed layout has great potential and can also be considered as an alternative to classical process types of layouts.

15.
Ensuring causal consistency in a Distributed Shared Memory (DSM) means all operations executed at each process will be compliant with a causality order relation. This paper first introduces an optimality criterion for a protocol P, based on a complete replication of variables at each process and propagation of write updates, that enforces causal consistency. This criterion measures the capability of a protocol to update the local copy as soon as possible while respecting causal consistency. Then we present an optimal protocol built on top of a reliable broadcast communication primitive, and we show how previous protocols based on complete replication presented in the literature are not optimal. Interestingly, we prove that the optimal protocol embeds a system of vector clocks which captures the read/write semantics of a causal memory. From an operational point of view, an optimal protocol strongly reduces its message buffer overhead. Simulation studies show that the optimal protocol buffers roughly an order of magnitude fewer messages than non-optimal ones based on the same communication primitive. R. Baldoni Roberto Baldoni is a Professor of Distributed Systems at the University of Rome “La Sapienza”. He published more than one hundred papers (from theory to practice) in the fields of distributed and mobile computing, middleware platforms and information systems. He is the founder of the MIDdleware LABoratory (MIDLAB), whose members participate in national and European research projects. He regularly serves as an expert for the EU commission in the evaluation of EU projects. Roberto Baldoni chaired the program committee of the “distributed algorithms” track of the 19th IEEE International Conference on Distributed Computing Systems (ICDCS-99) and was PC Co-chair of the ACM International Workshop on Principles of Mobile Computing (POMC).
He has also been involved in the organizing and program committees of many premier international conferences and workshops. A. Milani Alessia Milani is currently involved in a joint research doctoral thesis between the Department of Computer and Systems Science of the University of Rome “La Sapienza” and the University of Rennes I, IRISA. She earned a Laurea degree in Computer Engineering at the University of Rome “La Sapienza” in May 2003. Her research activity involves the area of distributed systems. Her current research interests include communication paradigms, in particular distributed shared memories, replication and consistency criteria. S. Tucci Piergiovanni Sara Tucci Piergiovanni is currently a Ph.D. student at the Department of Computer and Systems Science of the University of Rome “La Sapienza”. She earned a Laurea degree in Computer Engineering at the University of Rome “La Sapienza” in March 2002 with marks 108/110. Her laurea thesis was awarded the Italian national “Federcommin-AICA” prize 2002 for the best laurea thesis in Information Technology. Her research activity involves the area of distributed systems. Early works involved the issue of fault tolerance in asynchronous systems and software replication. Currently, her main focus is on communication paradigms that provide “anonymous” communication, such as publish/subscribe and distributed shared memories. The core contributions are several papers published in international conferences and journals.
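A vector-clock causal-delivery check of the kind such protocols embed can be sketched as follows. This is the generic textbook condition for causal broadcast, not the optimal protocol of the paper.

```python
def can_deliver(msg_vc, sender, local_vc):
    """Deliver a vector-clock-stamped update only when (a) it is the
    next message expected from its sender and (b) every update it
    causally depends on from other processes has already been seen."""
    for p, t in enumerate(msg_vc):
        if p == sender:
            if t != local_vc[p] + 1:
                return False          # not the next update from sender
        elif t > local_vc[p]:
            return False              # missing a causal predecessor
    return True
```

The optimality question studied above is precisely how long such a condition forces updates to wait in the buffer before the local copy can be refreshed.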

16.
The flux-based level set method is a finite volume method for the numerical solution of the advective level set equation, which describes the transport of level lines by an external velocity field and by a speed in the normal direction. The method is introduced for rectangular grids without using any dimensional splitting. A second-order accurate discretization scheme is derived with one free parameter in the definition of the piecewise linear reconstruction. A significant improvement of the accuracy can be obtained by special choices of this parameter for two benchmark examples of linear conservation laws. Moreover, the flux-based level set method can be used for the computation of first arrival time functions in the class of problems such as “boat sailing” or “fire spread”, where some external velocity field (e.g., a water or wind flow) must be considered in addition to the speed of the boat or fire. Numerical experiments confirm very good performance of the method for this type of problem.

17.
In this paper, we consider the problem of predicting a large scale spatial field using successive noisy measurements obtained by mobile sensing agents. The physical spatial field of interest is discretized and modeled by a Gaussian Markov random field (GMRF) with uncertain hyperparameters. From a Bayesian perspective, we design a sequential prediction algorithm to exactly compute the predictive inference of the random field. The main advantages of the proposed algorithm are: (1) the computational efficiency due to the sparse structure of the precision matrix, and (2) the scalability as the number of measurements increases. Thus, the prediction algorithm correctly takes into account the uncertainty in hyperparameters in a Bayesian way and is also scalable to be usable for mobile sensor networks with limited resources. We also present a distributed version of the prediction algorithm for a special case. An adaptive sampling strategy is presented for mobile sensing agents to find the most informative locations in taking future measurements in order to minimize the prediction error and the uncertainty in hyperparameters simultaneously. The effectiveness of the proposed algorithms is illustrated by numerical experiments.

18.
Methods of synchronizing interaction of the digital devices of distributed systems with the use of a common center relaying the signals from the devices were proposed. They are mostly intended to perform operations like “all-to-all,” “all-to-one,” and “one-to-all.” The center substantially accelerates synchronization and improves efficiency of the communication facilities interconnecting the devices.

19.
Optic Flow in Harmony
Most variational optic flow approaches consist of just three constituents: a data term, a smoothness term and a smoothness weight. In this paper, we present an approach that harmonises these three components. We start by developing an advanced data term that is robust under outliers and varying illumination conditions. This is achieved by using constraint normalisation, and an HSV colour representation with higher order constancy assumptions and a separate robust penalisation. Our novel anisotropic smoothness term is designed to work complementarily to the data term. To this end, it incorporates directional information from the data constraints to enable a filling-in of information solely in the direction where the data term gives no information, yielding an optimal complementary smoothing behaviour. This strategy is applied in the spatial as well as in the spatio-temporal domain. Finally, we propose a simple method for automatically determining the optimal smoothness weight. This method is based on a novel concept that we call the “optimal prediction principle” (OPP). It states that the flow field obtained with the optimal smoothness weight allows for the best prediction of the next frames in the image sequence. The benefits of our “optic flow in harmony” (OFH) approach are demonstrated by an extensive experimental validation and by a competitive performance at the widely used Middlebury optic flow benchmark.

20.
Multi-Class Segmentation with Relative Location Prior
Multi-class image segmentation has made significant advances in recent years through the combination of local and global features. One important type of global feature is that of inter-class spatial relationships. For example, identifying “tree” pixels indicates that pixels above and to the sides are more likely to be “sky” whereas pixels below are more likely to be “grass.” Incorporating such global information across the entire image and between all classes is a computational challenge as it is image-dependent, and hence, cannot be precomputed. In this work we propose a method for capturing global information from inter-class spatial relationships and encoding it as a local feature. We employ a two-stage classification process to label all image pixels. First, we generate predictions which are used to compute a local relative location feature from learned relative location maps. In the second stage, we combine this with appearance-based features to provide a final segmentation. We compare our results to recent published results on several multi-class image segmentation databases and show that the incorporation of relative location information allows us to significantly outperform the current state-of-the-art.
