Similar Documents
1.
In this paper we present indoor positioning within unknown environments as an unsupervised labelling task on sequential data. We explore a probabilistic framework relying on wireless network radio signals and contextual information, which is increasingly available in large environments. Thus, we form an informative spatial classifier without resorting to a pre-determined map, and show the potential of the approach using both simulated and real data sets. Results demonstrate the ability of the procedure to segregate structures of radio signal observations and form clustered regions associated with areas of interest to the user; thus, we show it is possible to differentiate locations between closely spaced zones of variable size and shape.

2.
In vehicular communication systems, leakage of a vehicle's location information threatens the driver's privacy, and pseudonym updates performed by vehicles inside a mix zone are an effective way to protect location privacy. However, some existing mix-zone schemes ignore the effect of varying vehicle density on the quality of location privacy protection. To address this problem, this paper proposes a mix-zone location privacy protection scheme in which pseudonym updates are assisted by dummy vehicles. The scheme dynamically adjusts the number of dummy vehicles generated according to the density of surrounding cooperating vehicles and broadcasts their traces, so that an attacker cannot distinguish dummy vehicles from real ones, thereby protecting the vehicles' location privacy. Simulation results show that, by introducing dummy-vehicle information that attackers cannot tell apart from real vehicles, the scheme effectively reduces the probability that a vehicle's true position or trajectory is disclosed, and improves location privacy protection when traffic density is low.
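A minimal sketch of the density-adaptive idea (the function and parameter names, such as plan_dummy_vehicles and k_target, are hypothetical and not taken from the paper): when few real cooperating vehicles are present in the mix zone, generate enough dummy vehicles to reach a target anonymity set size and broadcast their traces alongside the real ones.

```python
import random

def plan_dummy_vehicles(real_neighbors, k_target, area):
    """Decide how many dummy vehicles to generate so that the anonymity set
    (real + dummy vehicles) reaches k_target.  `real_neighbors` is a list of
    (x, y) positions of cooperating vehicles seen in the mix zone; `area` is
    the mix-zone bounding box (x_min, x_max, y_min, y_max)."""
    n_dummy = max(0, k_target - len(real_neighbors))
    x_min, x_max, y_min, y_max = area
    # Place dummies uniformly inside the mix zone; a real scheme would also
    # mimic plausible speeds and headings so the broadcast traces look genuine.
    return [(random.uniform(x_min, x_max), random.uniform(y_min, y_max))
            for _ in range(n_dummy)]

# Example: only 2 real neighbors at low traffic density, target anonymity k = 5.
print(plan_dummy_vehicles([(10.0, 4.0), (12.5, 6.1)], k_target=5,
                          area=(0.0, 50.0, 0.0, 20.0)))
```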

3.
Data Grid integrates geographically distributed resources for solving data-intensive scientific applications. Effective scheduling in a Grid can reduce the amount of data transferred among nodes by submitting a job to a node where most of the requested data files are available. Scheduling is a traditional problem in parallel and distributed systems; however, due to the special issues and goals of the Grid, traditional approaches are no longer effective in this environment, so methods specialized for this kind of parallel and distributed system are needed. Another solution is to use a data replication strategy to create multiple copies of files and store them in convenient locations to shorten file access times. To exploit both of these ideas, in this paper we develop a job scheduling policy, called the hierarchical job scheduling strategy (HJSS), and a dynamic data replication strategy, called the advanced dynamic hierarchical replication strategy (ADHRS), to improve data access efficiency in a hierarchical Data Grid. HJSS uses hierarchical scheduling to reduce the search time for an appropriate computing node. It considers network characteristics, the number of jobs waiting in the queue, file locations, and the disk read speed of the storage drives at the data sources. Moreover, because storage capacity is limited, a good replica replacement algorithm is needed. We present a novel replacement strategy which deletes files in two steps when there is not enough free space for the new replica: first, it deletes the files with the minimum transfer time; second, if space is still insufficient, it considers the last time each replica was requested, its number of accesses, its size, and its file transfer time. The simulation results show that our proposed algorithm performs better than other algorithms in terms of job execution time, number of intercommunications, number of replications, hit ratio, computing resource usage, and storage usage.
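The two-step replacement rule can be sketched as follows (an illustration only, not the paper's exact formula: the scoring weights and the assumption that the "least valuable" replicas are evicted first are mine).

```python
import time

def free_space(replicas, needed):
    """Illustrative two-step eviction in the spirit of the strategy described
    above; the scoring weights below are assumptions, not the paper's values.
    Each replica is a dict with keys 'name', 'size' (bytes), 'transfer_time'
    (seconds), 'last_request' (epoch seconds) and 'accesses'.
    Returns the names of deleted replicas once `needed` bytes are freed."""
    now = time.time()
    remaining = sorted(replicas, key=lambda r: r['transfer_time'])
    deleted, freed = [], 0.0

    # Step 1: evict the replica that is cheapest to transfer back later.
    if remaining:
        victim = remaining.pop(0)
        deleted.append(victim['name'])
        freed += victim['size']

    # Step 2: if space is still insufficient, evict the least valuable
    # replicas first, where "value" grows with recency of the last request,
    # access count, size and transfer cost.
    def value(r):
        recency = 1.0 / (1.0 + now - r['last_request'])
        return recency + 0.1 * r['accesses'] + 1e-9 * r['size'] + 0.05 * r['transfer_time']

    remaining.sort(key=value)
    while remaining and freed < needed:
        victim = remaining.pop(0)
        deleted.append(victim['name'])
        freed += victim['size']
    return deleted
```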

4.
Data replication is a standard fault-tolerance approach for systems, especially large-scale ones, that store and provide data over wide geographical and administrative areas. The major topics that data replication covers include replica creation, placement, relocation and retirement, replica consistency, and replica access. In a business context, a number of constraints exist, set by the infrastructure, network, and application capabilities in combination with the Quality of Service (QoS) requirements, that hinder the effectiveness of data replication schemes. In this paper, we examine how this combination affects the replication lifecycle in Data Grids, and we introduce a set of interoperable, novel file replication algorithms that take into account the infrastructural constraints as well as the 'importance' of the data. The latter is approximated through a multi-parametric factor that encapsulates a set of data-specific parameters, such as popularity and content significance.

5.
This paper presents a new approach, the Intelligent Fuzzy Online Location Management Strategy (IFOLMS), based on fuzzy clustering techniques, to solve the mobile location management problem. Using a fuzzy location estimator, the network exploits mobile users' past movements when making future paging decisions. IFOLMS has the potential to yield massive savings in the number of network signalling transactions that must be made to locate users. The performance of the proposed approach has been measured on several test networks; it shows promising results (around a 50% reduction in network cost) when compared to many existing location management techniques, including GSM. The results also provide new insights into the mobility management problem and its associated performance issues.

6.
In array signal processing, the class of alpha-stable distributions models impulsive noise better than the Gaussian distribution. After briefly introducing the statistical characteristics of stable distributions and fractional lower-order statistics, including the covariation and the fractional-order correlation, this paper proposes a new FOC-ESPRIT method for 2-D direction finding, based on the fractional-order correlation and a subspace technique, for underwater 2-D source localization using a vector-hydrophone array under alpha-stable noise conditions. A vector hydrophone comprises two or three spatially co-located, orthogonally oriented, identical velocity hydrophones (each of which measures one Cartesian component of the underwater acoustic particle velocity vector field) plus an optional pressure hydrophone. Simulation experiments show that the proposed method is robust over a wide range of characteristic exponent values of the stable distribution. Its performance is better than that of the conventional ESPRIT algorithm based on second-order statistics; furthermore, the fractional-order correlation is more suitable than the covariation in practical applications.

7.
Spatially aggregated data is frequently used in geographical applications. Often, spatial data analysis on aggregated data is performed in the same way as on exact data, which ignores the fact that we do not know the actual locations of the data. Here we propose models and methods that take aggregation into account. For this we focus on the problem of locating clusters in aggregated data; more specifically, we study the problem of locating clusters in spatially aggregated health data. The data is given as a subdivision into regions with two values per region: the number of cases and the size of the population at risk. We formulate the problem as finding a placement of a cluster window of a given shape such that a cluster function depending on the population at risk and the cases is maximized. We propose area-based models to calculate the cases (and the population at risk) within a cluster window; these models are based on the areas of intersection of the cluster window with the regions of the subdivision. We show how to compute a subdivision such that within each cell the areas of intersection are simple functions. We evaluate experimentally how taking aggregation into account influences the location of the clusters found.
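A minimal sketch of the area-based allocation (assuming, for simplicity only, axis-aligned rectangular regions and a rectangular cluster window; the paper handles general subdivisions and window shapes): each region contributes cases and population in proportion to the fraction of its area covered by the window.

```python
def rect_intersection_area(a, b):
    """Area of intersection of two axis-aligned rectangles (x1, y1, x2, y2)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(0.0, w) * max(0.0, h)

def window_counts(window, regions):
    """Area-weighted cases and population inside the cluster window.
    `regions` is a list of (rect, cases, population) tuples."""
    cases = population = 0.0
    for rect, region_cases, region_pop in regions:
        area = (rect[2] - rect[0]) * (rect[3] - rect[1])
        frac = rect_intersection_area(window, rect) / area
        cases += frac * region_cases
        population += frac * region_pop
    return cases, population

# Example: two adjacent regions, a window covering half of each.
regions = [((0, 0, 10, 10), 30, 1000), ((10, 0, 20, 10), 12, 800)]
print(window_counts((5, 0, 15, 10), regions))   # -> (21.0, 900.0)
```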

8.
This paper aims to balance location services and privacy protection when providing personalized location services to different categories of users. When one of the user's friends requests the user's location, the system looks up, on the server, the privacy policy group the user has assigned to that friend, obtains the corresponding privacy disclosure category, and processes the user's location information accordingly, thereby achieving personalized location privacy protection. The scheme has been implemented in iOS and Java environments.
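A minimal sketch of the per-group policy lookup described above (the group names, disclosure levels, and blur_radius parameter are illustrative, not the paper's):

```python
import random

# Privacy disclosure levels a user can assign to each friend group
# (names are illustrative, not the paper's terminology).
POLICIES = {
    'family':  'exact',    # return the true position
    'friends': 'blurred',  # return a coarsened position
    'others':  'hidden',   # return nothing
}

def locate(user_position, friend_group, blur_radius=500.0):
    """Apply the user's per-group privacy policy before answering a
    friend's positioning request."""
    policy = POLICIES.get(friend_group, 'hidden')
    if policy == 'exact':
        return user_position
    if policy == 'blurred':
        x, y = user_position
        # Offset the reported position randomly within blur_radius metres.
        return (x + random.uniform(-blur_radius, blur_radius),
                y + random.uniform(-blur_radius, blur_radius))
    return None

print(locate((3960.0, 1250.0), 'friends'))
```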

9.
A dynamic p-median problem is considered. Demand changes over a given time horizon and the facilities are built one at a time at given times. Once a new facility is built, some customers will use its services and some will continue to patronize an existing facility; at any given time, customers patronize the closest facility. The problem is to find the best locations for the new facilities. The problem is formulated, the two-facility case is solved by a special algorithm, and the general problem is solved using the standard mathematical programming code AMPL.
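For reference, the underlying objective can be sketched in standard p-median form (my own notation; the paper's exact dynamic formulation may differ): with demand w_{it} of customer i at time t, distance d(i, j), and F_t the set of facilities open at time t (existing facilities plus the new ones already built by time t), the locations of the new facilities are chosen to minimize

```latex
\min \; \sum_{t=1}^{T} \sum_{i=1}^{n} w_{it} \, \min_{j \in F_t} d(i, j)
```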

10.
Taking data uncertainty into account during clustering, an improved K-means algorithm, the FK algorithm, is proposed. The idea of the FK algorithm is to minimize the expected value of the total sum of squared errors, E(SSE); in particular, each data object x_i is described by an uncertainty probability density function (pdf) f(x_i) over its uncertainty region. The FK algorithm is used to analyse moving objects with uncertain motion patterns, and experiments show that taking data uncertainty into account yields more accurate clustering results.
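The objective can be written as an expected sum of squared errors (a sketch in my own notation, consistent with the description above): with c_k the centroid of cluster C_k, R_i the uncertainty region of object x_i, and f(x_i) its probability density,

```latex
E(\mathrm{SSE}) \;=\; \sum_{k=1}^{K} \; \sum_{x_i \in C_k} \int_{R_i} f(x_i)\,\lVert x_i - c_k \rVert^{2} \, dx_i
```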

11.
Storage location assignment optimization for a parallel zone picking system
This paper discusses the characteristics of parallel zone picking systems in distribution centres. For the case where the pickers in different zones pick at different speeds, a storage location assignment algorithm is proposed that balances the workload of the pickers across zones by arranging the storage locations of items among the zones. Based on the picking rules and the optimization objective, the related models and algorithms are tested through simulation to demonstrate their effectiveness, providing a basis for selecting and applying the method.

12.
Dynamic SLAs management in service oriented environments
The increasing adoption of service oriented architectures across different administrative domains forces service providers to use effective resource management mechanisms and strategies in order to guarantee the quality levels their customers demand during service provisioning. Service level agreements (SLAs) are the most common mechanism used to establish agreements on the quality of a service (QoS) between a service provider and a service consumer. The WS-Agreement specification, developed by the Open Grid Forum, is a Web Service protocol for establishing agreements on the QoS level to be guaranteed in the provision of a service. The committed agreement cannot be modified during service provision and is effective until all activities pertaining to it are finished or until one of the signing parties decides to terminate it. In B2B scenarios where several service providers are involved in the composition of a service, and each of them plays the roles of both provider and customer, several one-to-one SLAs need to be signed. In such a rigid context, the global QoS of the final service can be strongly affected by a violation of any single SLA. To prevent such violations, SLAs need to adapt to any needs that might come up during service provision. In this work we focus on the WS-Agreement specification and propose to enhance the flexibility of its approach. We integrate new functionality into the protocol that enables the parties of a WS-Agreement to re-negotiate and modify its terms during service provision, and we show how a typical service composition scenario can benefit from our proposal.

13.
This paper describes a technique for clustering homogeneously distributed data in a peer-to-peer environment such as a sensor network. The proposed technique is based on the principles of the K-Means algorithm and works in a localized, asynchronous manner by communicating only with neighboring nodes. The paper offers an extensive theoretical analysis of the algorithm that bounds the error of the distributed clustering process relative to the centralized approach, which requires downloading all the observed data to a single site. Experimental results show that, in contrast to transmitting all the data to a central location and applying a conventional clustering algorithm, the communication cost of the proposed approach (an important consideration in sensor networks, which are typically equipped with limited battery power) is significantly smaller. At the same time, the accuracy of the obtained centroids is high and the number of incorrectly labeled samples is small.
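A rough sketch of the localized idea (my own simplification in NumPy; the paper's actual message protocol, synchronization, and termination criteria are not shown): each node assigns its own points to the current centroids, exchanges per-cluster sums and counts with its neighbors, and recomputes centroids from the merged statistics instead of shipping raw data to a central site.

```python
import numpy as np

def local_kmeans_step(points, centroids):
    """One local assignment step: per-cluster coordinate sums and counts
    computed from this node's own data only."""
    d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    k, dim = centroids.shape
    sums, counts = np.zeros((k, dim)), np.zeros(k)
    for j in range(k):
        mask = labels == j
        sums[j] = points[mask].sum(axis=0)
        counts[j] = mask.sum()
    return sums, counts

def merge_with_neighbors(own, neighbor_stats):
    """Merge (sums, counts) received from neighboring peers with our own
    statistics and recompute the centroids."""
    sums, counts = own
    for n_sums, n_counts in neighbor_stats:
        sums = sums + n_sums
        counts = counts + n_counts
    return sums / np.maximum(counts, 1)[:, None]
```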

14.
Power conservation and client waiting time reduction are two important aspects of data access efficiency in broadcast-based wireless communication systems. The goal of data access methods is to optimize client power consumption with the least possible overhead on client waiting time. We propose an adaptive data access method that builds on the strengths of indexing and hashing techniques, and we show that it exhibits better average performance than the well-known index-tree-based access methods. A new performance model is also proposed. This model uses more realistic assessment criteria, based on a combination of access and tuning times, for evaluating wireless access methods, and provides a dynamic framework for expressing the relative importance of access and tuning times in an application. Under this new model, the adaptive method also outperforms the other access methods in the majority of cases.
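One way to express such a combined criterion (a sketch only; the weighting form and the symbol beta are assumptions, not necessarily the paper's model): a single cost that trades off latency (access time) against energy (tuning time).

```python
def access_cost(access_time, tuning_time, beta=0.5):
    """Weighted cost sketch: `beta` expresses how much the application cares
    about latency (access time) versus energy (tuning time)."""
    return beta * access_time + (1.0 - beta) * tuning_time

# Compare two hypothetical access methods under a latency-sensitive application.
print(access_cost(access_time=120, tuning_time=30, beta=0.8))   # method A
print(access_cost(access_time=90,  tuning_time=60, beta=0.8))   # method B
```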

15.
Querying imprecise data in moving object environments
In moving object environments, it is infeasible for the database tracking the movement of objects to store the exact locations of objects at all times. Typically, the location of an object is known with certainty only at the time of the update, and the uncertainty in its location increases until the next update. In this environment, it is possible for queries to produce incorrect results based upon old data. However, if the degree of uncertainty is controlled, then the error in the answers to queries can be reduced. More generally, query answers can be augmented with probabilistic estimates of their validity. We study the execution of probabilistic range and nearest-neighbor queries. The imprecision in answers to queries is an inherent property of these applications due to uncertainty in the data, unlike techniques for approximate nearest-neighbor processing that trade accuracy for performance. Algorithms for computing these queries are presented for a generic object movement model, and detailed solutions are discussed for two common models of uncertainty in moving object databases. We study the performance of these queries through extensive simulations.
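As a simple illustration of a probabilistic range query (assuming, for the sketch only, a uniform uncertainty distribution over a disc around the last reported position; the paper treats more general movement and uncertainty models), the answer probability is the fraction of the object's uncertainty mass that falls inside the query rectangle:

```python
import math
import random

def prob_in_range(center, radius, rect, samples=10000):
    """Monte Carlo estimate of the probability that an object lies inside the
    query rectangle (x1, y1, x2, y2), given a uniform uncertainty disc of the
    stated radius around the last reported position `center`."""
    x0, y0 = center
    x1, y1, x2, y2 = rect
    hits = 0
    for _ in range(samples):
        # Uniform sample inside the disc.
        r = radius * math.sqrt(random.random())
        theta = 2 * math.pi * random.random()
        x, y = x0 + r * math.cos(theta), y0 + r * math.sin(theta)
        if x1 <= x <= x2 and y1 <= y <= y2:
            hits += 1
    return hits / samples

print(prob_in_range((5.0, 5.0), 2.0, (4.0, 4.0, 10.0, 10.0)))
```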

16.
In wireless mobile environments, data broadcasting is an effective approach for disseminating information to mobile clients. In some applications, the access pattern of all the data can be represented by a weighted DAG. In this paper, we explore how to efficiently generate a broadcast schedule in a wireless environment for a data set whose access pattern is a weighted DAG. Such a broadcast schedule not only minimizes the access latency but is also a topological ordering of the DAG; minimizing the access latency ensures quality of service (QoS). We prove that finding an optimal broadcast schedule is NP-hard and provide several heuristics. After analyzing the latency and complexity of these heuristics, we implement all of them to compare their performance.
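As an illustration of the topological-ordering constraint, a simple greedy heuristic (my own example, not one of the paper's heuristics) broadcasts, among the items whose predecessors have all been scheduled, the one with the largest access weight:

```python
def greedy_schedule(weights, edges):
    """Build a topological ordering of the DAG, always broadcasting next the
    ready item with the largest access weight so popular items appear early.
    `weights` maps item -> access weight; `edges` is a list of (u, v) meaning
    u must be broadcast before v."""
    preds = {v: set() for v in weights}
    succs = {v: [] for v in weights}
    for u, v in edges:
        preds[v].add(u)
        succs[u].append(v)
    ready = {v for v in weights if not preds[v]}
    order = []
    while ready:
        nxt = max(ready, key=lambda v: weights[v])
        ready.remove(nxt)
        order.append(nxt)
        for w in succs[nxt]:
            preds[w].discard(nxt)
            if not preds[w]:
                ready.add(w)
    return order

print(greedy_schedule({'a': 5, 'b': 9, 'c': 2, 'd': 7},
                      [('a', 'c'), ('b', 'c'), ('c', 'd')]))
```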

17.
On data management in pervasive computing environments
This paper presents a framework to address the new data management challenges introduced by data-intensive, pervasive computing environments. These challenges include spatio-temporal variation in data and data source availability, the lack of a global catalog and schema, and no guarantee of reconnection among peers due to the serendipitous nature of the environment. An important aspect of our solution is to treat devices as semi-autonomous peers guided in their interactions by profiles and context. The profiles are grounded in a semantically rich language and represent information about users, devices, and data described in terms of "beliefs," "desires," and "intentions." We present a prototype implementation of this framework over combined Bluetooth and ad hoc 802.11 networks and present experimental and simulation results that validate our approach and measure system performance.

18.
This work describes our efforts in creating a general object interaction framework for dynamic collaborative virtual environments. Furthermore, we increase the realism of the interactive world by using a rigid body simulator to calculate all actor and object movements. The main idea behind our interactive platform is to construct a virtual world using only objects that contain their own interaction information. As a result, object interactions are application independent and only a single scheme is required to handle all interactions in the virtual world. To support more dynamic interactions, we also created a new and efficient way for human users to interact dynamically within virtual worlds through their avatars. In particular, we show how inverse kinematics can be used to increase the interaction possibilities and realism in collaborative virtual environments. This results in a stronger feeling of presence for connected users and allows for easy, on-the-fly creation of new interactions. For the distribution of both the interactive objects and the dynamic avatar interactions, we keep the network load as low as possible. To demonstrate the effectiveness of our techniques, we incorporate them into an existing CVE framework.

19.
Stereoscopic depth cues improve depth perception and increase immersion within virtual environments (VEs). However, improper display of these cues can distort perceived distances and directions. Consider a multi-user VE, where all users view identical stereoscopic images regardless of physical location. In this scenario, cues are typically customized for one "leader" equipped with a head-tracking device. This user stands at the center of projection (CoP) and all other users ("followers") view the scene from other locations and receive improper depth cues. This paper examines perceived depth distortion when viewing stereoscopic VEs from follower perspectives and the impact of these distortions on collaborative spatial judgments. Pairs of participants made collaborative depth judgments of virtual shapes viewed from the CoP or after displacement forward or backward. Forward and backward displacement caused perceived depth compression and expansion, respectively, with greater compression than expansion. Furthermore, distortion was less than predicted by a ray-intersection model of stereo geometry. Collaboration times were significantly longer when participants stood at different locations compared to the same location, and increased with greater perceived depth discrepancy between the two viewing locations. These findings advance our understanding of spatial distortions in multi-user VEs, and suggest a strategy for reducing distortion.
