Similar Literature
20 similar documents found (search time: 459 ms)
1.
Wireless sensor networks (WSNs) play an important role in forest fire risk monitoring, and various applications are in operation. However, the use of mobile sensors in forest fire risk monitoring remains largely unexplored. Our research helps to fill this gap by designing a model that abstracts mobility constraints within different types of contexts for the inference of mobile sensor behaviour. This behaviour is focused on achieving a suitable spatial coverage of the WSN when monitoring forest fire risk. The proposed mobility constraint model makes use of a Bayesian network approach and consists of three components: (1) a context typology describing different contexts in which a WSN monitors a dynamic phenomenon; (2) a context graph encoding probabilistic dependencies among variables of interest; and (3) contextual rules encoding expert knowledge and application requirements needed for the inference of sensor behaviour. As an illustration, the model is used to simulate the behaviour of a mobile WSN to obtain a suitable spatial coverage in low and high fire risk scenarios. It is shown that the Bayesian network implemented within the mobility constraint model can successfully infer behaviour such as sleeping sensors, moving sensors, or deploying more sensors to enhance spatial coverage. Furthermore, the mobility constraint model contributes towards mobile sensing in which the mobile sensor behaviour is driven by constraints on the state of the phenomenon and the sensing system.
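The behaviour inference described in this abstract can be illustrated with a toy discrete Bayesian model. The sketch below is a minimal, assumption-laden stand-in: the context variables (fire risk, coverage state), their states, and every probability in the conditional table are invented for illustration and are not the model published in the paper.

```python
# Minimal sketch of inferring mobile-sensor behaviour from an uncertain context
# with a tiny discrete Bayesian model. All variables, states, and probabilities
# below are illustrative assumptions, not the paper's mobility constraint model.

# P(behaviour | fire_risk, coverage): conditional probability table (CPT)
CPT = {
    ("low", "adequate"):  {"sleep": 0.70, "move": 0.20, "deploy_more": 0.10},
    ("low", "sparse"):    {"sleep": 0.20, "move": 0.60, "deploy_more": 0.20},
    ("high", "adequate"): {"sleep": 0.10, "move": 0.60, "deploy_more": 0.30},
    ("high", "sparse"):   {"sleep": 0.05, "move": 0.35, "deploy_more": 0.60},
}

def infer_behaviour(p_risk, p_coverage):
    """Marginalize over the uncertain context to obtain P(behaviour)."""
    posterior = {"sleep": 0.0, "move": 0.0, "deploy_more": 0.0}
    for risk, pr in p_risk.items():
        for cov, pc in p_coverage.items():
            for behaviour, pb in CPT[(risk, cov)].items():
                posterior[behaviour] += pr * pc * pb
    return posterior

# Example: a high fire-risk scenario with sparse spatial coverage
posterior = infer_behaviour({"low": 0.2, "high": 0.8},
                            {"adequate": 0.3, "sparse": 0.7})
print(max(posterior, key=posterior.get), posterior)
```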

2.
In this work we propose algorithms to learn the locations of static occlusions and to reason about both static and dynamic occlusion scenarios in multi-camera scenes for 3D surveillance (e.g., reconstruction, tracking). We show that this leads to a computer system that can more effectively track (follow) objects in video when they are obstructed in some of the views. Because of the nature of the application area, our algorithm operates under the constraint of using few cameras (no more than three) in a wide-baseline configuration. Our algorithm consists of a learning phase, in which a 3D probabilistic model of occlusions is estimated per voxel, per view over time via an iterative framework. In this framework, at each frame the visual hull of each foreground object (person) is computed via a Markov Random Field that integrates the occlusion model. The model is then updated at each frame using this solution, providing an iterative process that can accurately estimate the occlusion model over time and overcome the few-camera constraint. We demonstrate the application of such a model to a number of areas, including visual hull reconstruction, the reconstruction of the occluding structures themselves, and 3D tracking.

3.
A genetic algorithm for searching spatial configurations   Cited: 1 (self-citations: 0, citations by others: 1)
Searching spatial configurations is a particular case of maximal constraint satisfaction problems, where constraints expressed by spatial and nonspatial properties guide the search process. In the spatial domain, binary spatial relations are typically used for specifying constraints while searching spatial configurations. Searching configurations is particularly intractable when configurations are derived from a combination of objects, which involves a hard combinatorial problem. This paper presents a genetic algorithm (GA) that combines a direct and an indirect approach to treating binary constraints in genetic operators. A new genetic operator combines randomness and heuristics for guiding the reproduction of new individuals in a population. Individuals are composed of spatial objects whose relationships are indexed by a content measure. This paper describes the GA and presents experimental results that compare the genetic algorithm with a deterministic and a local-search algorithm. These experiments show the convenience of using a GA when the complexity of the queries and databases does not guarantee the tractability of a deterministic strategy.
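To make the GA idea above concrete, here is a minimal sketch of a genetic search over assignments of database objects to query variables, scored by how many binary constraints are satisfied. The encoding, the toy "less-than" relation standing in for a spatial relation, and all parameters are illustrative assumptions; the paper's content-measure indexing and its hybrid direct/indirect operator are not reproduced.

```python
# Minimal GA sketch for maximal constraint satisfaction over spatial objects.
import random

N_VARS, N_OBJECTS = 6, 20          # query variables, candidate database objects
random.seed(0)
# binary constraints as (i, j, relation); "lt" is a stand-in spatial relation
CONSTRAINTS = [(i, (i + 1) % N_VARS, "lt") for i in range(N_VARS)]

def fitness(ind):
    # count how many binary constraints the assignment satisfies
    return sum(1 for i, j, rel in CONSTRAINTS if rel == "lt" and ind[i] < ind[j])

def crossover(a, b):
    cut = random.randrange(1, N_VARS)
    return a[:cut] + b[cut:]

def mutate(ind, p=0.2):
    return [random.randrange(N_OBJECTS) if random.random() < p else g for g in ind]

pop = [[random.randrange(N_OBJECTS) for _ in range(N_VARS)] for _ in range(30)]
for gen in range(100):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                               # keep the best assignments
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(20)]

best = max(pop, key=fitness)
print(best, fitness(best), "of", len(CONSTRAINTS), "constraints satisfied")
```

Note that the cyclic constraint set above is deliberately over-constrained, so the GA can at best satisfy all but one constraint, which is exactly the "maximal" satisfaction setting the abstract describes.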

4.
Node deployment in wireless sensor networks is one of the core problems in sensor network research, since it determines both the cost and the detection capability of the network. This work studies the coverage-optimized deployment of a wireless sensor network based on a more practical probabilistic detection model (introducing the concept of x%-RS). Under the strict requirement that network connectivity be preserved, the number of deployed sensor nodes is optimized while the required degree of coverage is achieved, yielding a concrete node deployment plan. To improve the efficiency of the algorithm, the stepwise optimization algorithm is extended so that multiple sensor nodes can be deployed in a single iteration. Finally, the performance of the deployment algorithm is evaluated through simulation.
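As a rough illustration of deployment under a probabilistic detection model, the sketch below places nodes greedily until every grid point reaches a target detection probability. The exponential decay model, the parameters, and the greedy strategy are all illustrative assumptions; the paper's x%-RS model, its connectivity guarantee, and its multi-node-per-iteration refinement are not reproduced here.

```python
# Minimal sketch: greedy node placement under a probabilistic detection model.
import math

ALPHA, R_SENSE, P_REQ = 0.5, 3.0, 0.9      # decay rate, sensing range, required prob.
GRID = [(x, y) for x in range(10) for y in range(10)]

def p_detect(sensor, point):
    # assumed detection probability: exponential decay with distance, zero beyond range
    d = math.dist(sensor, point)
    return math.exp(-ALPHA * d) if d <= R_SENSE else 0.0

def p_miss_all(sensors, point):
    # probability that no deployed sensor detects an event at `point`
    return math.prod(1.0 - p_detect(s, point) for s in sensors)

sensors = [(5.0, 5.0)]                      # seed node
while True:
    worst = max(GRID, key=lambda p: p_miss_all(sensors, p))
    if 1.0 - p_miss_all(sensors, worst) >= P_REQ:
        break                               # every grid point meets the coverage target
    sensors.append((float(worst[0]), float(worst[1])))   # place a node at the worst point

print(len(sensors), "nodes needed for", P_REQ, "detection probability everywhere")
```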

5.
This article presents an optimized sensor planning system for active visual inspection of three-dimensional manufacturing computer-aided design (CAD) models. Quantization errors and displacement errors are inevitable in active visual inspection. To obtain high accuracy for dimensioning the entities of three-dimensional CAD models, minimization of these errors is essential. Spatial quantization errors result from digitization; the errors are serious when the size of a pixel is significant compared to the allowable tolerance in the object dimension on the image. In placing the active sensor to perform inspection, displacement of the sensors in orientation and location is common. The difference between the observed dimensions obtained by the displaced sensor and the actual dimensions is defined as the displacement error. The density functions of quantization errors and displacement errors depend on camera resolution and on camera locations and orientations. Sensor constraints, such as resolution, focus, field-of-view, and visibility constraints, restrict sensor placement. To obtain a satisfactory view of the targeted entities of the CAD models, these constraints have to be satisfied. In this article, we focus on edge line segments as the inspected entities. We use a genetic algorithm to minimize the probabilistic magnitude of the errors subject to the sensor constraints. Since the objective functions and constraint functions are both complicated and nonlinear, traditional nonlinear programming may not be efficient and may become trapped at a local minimum. Using crossover operations, mutation operations, and stochastic selection in the genetic algorithm, trapping can be avoided. Experiments are conducted and the performance of the genetic algorithm is presented. Given the CAD model and the entities to be inspected, the active visual inspection planning system obtains the sensor setting that maximizes the probability of achieving the required accuracy for each entity. © 2001 John Wiley & Sons, Inc.

6.
Constraint-based sensor planning for scene modeling   Cited: 3 (self-citations: 0, citations by others: 3)
We describe an automated scene modeling system that consists of two components operating in an interleaved fashion: an incremental modeler that builds solid models from range imagery; and a sensor planner that analyzes the resulting model and computes the next sensor position. This planning component is target-driven and computes sensor positions using model information about the imaged surfaces and the unexplored space in a scene. The method is shape-independent and uses a continuous-space representation that preserves the accuracy of sensed data. It is able to completely acquire a scene by repeatedly planning sensor positions, utilizing a partial model to determine volumes of visibility for contiguous areas of unexplored scene. These visibility volumes are combined with sensor placement constraints to compute sets of occlusion-free sensor positions that are guaranteed to improve the quality of the model. We show results for the acquisition of a scene that includes multiple, distinct objects with high occlusion.

7.
To develop a Multi-Sensor System (MSS) one has to consider the following three important issues: (1) modeling the uncertainty that exists in the sensory measurements, (2) modeling the cooperation behavior among the sensors, and, finally, (3) developing fusion strategies that recognize both the uncertainty model and the cooperation behavior. In this article we propose a probabilistic approach for modeling the uncertainty and cooperation in sensory teams. We show how the Information Variation measure can be used to capture both the quality of sensory data and the interdependence relationships that might exist between the different sensors. This allows the sensor fusion procedures to avoid the assumption that the observations made by the different sensors are totally independent, an assumption that lessens the applicability of such procedures in many practical situations. We also show how DeGroot's Consensus model can be combined with the Information Variation model to fuse the uncertain sensory data. The proposed approach reduces to an approximation of the Bayesian paradigm when the team comprises more than two sensors. It is the computational burden, as well as the difficulty associated with the construction of the exact Bayesian paradigm, that motivated the development of this approach. © 1996 John Wiley & Sons, Inc.
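DeGroot's consensus model, mentioned above, is simple to sketch: each sensor repeatedly replaces its estimate with a weighted average of everyone's estimates, and the weights can encode both data quality and interdependence. In the minimal example below the weight matrix is chosen by hand as an illustration; in the article the weights would come from the Information Variation model rather than being hard-coded.

```python
# Minimal sketch of DeGroot-style consensus among three sensors.
import numpy as np

estimates = np.array([10.2, 9.6, 11.1])        # initial sensor readings
W = np.array([[0.60, 0.20, 0.20],              # row i: how sensor i weighs all sensors
              [0.30, 0.50, 0.20],
              [0.25, 0.25, 0.50]])              # each row sums to 1 (stochastic matrix)

for _ in range(50):                             # iterate x <- W x until consensus
    estimates = W @ estimates

print(estimates)    # all entries converge to the same fused value
```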

8.
Advanced Robotics, 2013, 27(7): 771-792
We recently introduced the concept of C-space entropy as a measure of knowledge of configuration space (C-space) for sensor-based exploration and path planning for general robot–sensor systems. The robot plans the next sensing action to maximally reduce the expected C-space entropy, also called the maximal expected entropy reduction (MER) criterion. The resulting view planning algorithms showed a significant improvement in exploration rate over physical-space-based criteria. However, this expected C-space entropy computation made two idealized assumptions: (i) that the sensor field of view (FOV) is a point, and (ii) that there are no occlusion (or visibility) constraints, i.e., as if the sensor could sense through obstacles. We extend the expected C-space entropy formulation by relaxing these two assumptions and consider a range sensor with a non-zero-volume FOV and occlusion constraints, thereby modeling a realistic range sensor. Planar simulations and experimental results on the SFU Eye-in-Hand system show that the new formulation results in a further improvement in C-space exploration efficiency over the point-FOV sensor-based MER formulation.

9.
Non-photorealistic (illustrative) rendering augments typical rendering models to selectively emphasize or de-emphasize specific structures of rendered objects. Illustrative techniques may affect not only the rendering style of specific portions of an object but also their visibility, ensuring that less important regions do not occlude more important ones. Cutaway views completely remove occluding, unimportant structures—possibly also removing valuable context information—while existing solutions for smooth reduction of occlusion based on importance lack precise visibility control, simplicity and generality. We introduce a new front-to-back fragment composition equation that directly takes into account a measure of sample importance and allows smooth and precise importance-based visibility control. We demonstrate the generality of our composition equation with several illustrative effects, which were obtained by using a set of importance measures calculated on the fly or defined by the user. The presented composition method is suitable for direct volume rendering as well as rendering of layered 3D models. We discuss both cases and show examples, though focusing mainly on illustration of volumetric data.
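For readers unfamiliar with front-to-back composition, the sketch below shows standard front-to-back alpha compositing along one ray with each sample's opacity scaled by an importance value, so that unimportant samples occlude less. This importance-modulated opacity is an illustrative assumption, not the specific composition equation introduced in the paper.

```python
# Minimal sketch of importance-modulated front-to-back compositing on one ray.

def composite_ray(samples):
    """samples: list of (color, alpha, importance) ordered front to back."""
    color_acc, alpha_acc = 0.0, 0.0
    for color, alpha, importance in samples:
        a = alpha * importance                 # importance-scaled opacity (assumption)
        color_acc += (1.0 - alpha_acc) * a * color
        alpha_acc += (1.0 - alpha_acc) * a
        if alpha_acc > 0.99:                   # early ray termination
            break
    return color_acc, alpha_acc

# a low-importance occluder in front of a high-importance structure:
# the occluder contributes little, so the structure behind it stays visible
print(composite_ray([(0.2, 0.8, 0.1), (0.9, 0.9, 1.0)]))
```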

10.
In this article, we present a camera control method in which the selection of an optimal camera position and the modification of camera configurations are accomplished according to changes in the surroundings. For the autonomous selection and modification of camera configurations during tasks, we consider the camera's visibility and the manipulator's manipulability. The visibility constraint guarantees that the whole of a target object can be "viewed" with no occlusions by the surroundings, and the manipulability constraint guarantees avoidance of singular positions of the manipulator and rapid modification of the camera position. By considering the visibility and manipulability constraints simultaneously, we determine the optimal camera position and modify the camera configuration such that visual information about the target object can be obtained continuously during tasks. The results of simulations and experiments show that the active camera system with an eye-in-hand configuration can modify its configuration autonomously according to the motion of the surroundings by applying the proposed camera control method. © 2002 Wiley Periodicals, Inc.
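A common way to quantify the manipulability mentioned above is the measure w = sqrt(det(J J^T)) of the manipulator Jacobian J, which drops toward zero near singular configurations. The sketch below evaluates it for a planar two-link arm; the link lengths and joint angles are illustrative assumptions, not the robot used in the article.

```python
# Minimal sketch of the manipulability measure w = sqrt(det(J J^T)).
import numpy as np

def planar_2link_jacobian(q1, q2, l1=0.5, l2=0.4):
    """End-effector velocity Jacobian of a planar 2-link arm (assumed example)."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def manipulability(J):
    return np.sqrt(np.linalg.det(J @ J.T))

for q2 in (0.05, 0.8, 1.57):          # nearly stretched-out elbow vs. well-bent elbow
    w = manipulability(planar_2link_jacobian(0.3, q2))
    print(f"q2 = {q2:0.2f} rad -> manipulability {w:0.4f}")
```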

11.
An occlusion metric for selecting robust camera configurations   Cited: 2 (self-citations: 0, citations by others: 2)
Vision-based tracking systems for surveillance and motion capture rely on a set of cameras to sense the environment. The exact placement or configuration of these cameras can have a profound effect on the quality of tracking that is achievable. Although several factors contribute, occlusion due to moving objects within the scene itself is often the dominant source of tracking error. This work introduces a configuration quality metric based on the likelihood of dynamic occlusion. Since the exact geometry of occluders cannot be known a priori, we use a probabilistic model of occlusion. This model is evaluated extensively in experiments using hundreds of different camera configurations and is found to correlate very closely with the actual probability of feature occlusion. Authors X. Chen and J. Davis were with the Computer Graphics Lab at Stanford University at the time of this research.
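The kind of metric described above can be approximated by Monte Carlo simulation: drop random occluders into the scene and count how often the target loses the two unoccluded views needed for triangulation. The 2D geometry, disc-shaped occluders, and all parameters below are illustrative assumptions, not the probabilistic occlusion model developed in the paper.

```python
# Minimal Monte Carlo sketch of a dynamic-occlusion metric for camera configurations.
import random, math

def seg_point_dist(a, b, p):
    """Distance from point p to the segment a-b (a camera-to-target sight line)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def occlusion_metric(cameras, target, trials=5000, r=0.5, n_occluders=3):
    random.seed(1)                               # make the two calls comparable
    ok = 0
    for _ in range(trials):
        occluders = [(random.uniform(0, 10), random.uniform(0, 10))
                     for _ in range(n_occluders)]
        visible = sum(
            all(seg_point_dist(cam, target, o) > r for o in occluders)
            for cam in cameras)
        ok += visible >= 2                       # need two clear views to triangulate
    return 1.0 - ok / trials                     # estimated probability of tracking failure

cams_spread = [(0, 0), (10, 0), (0, 10)]
cams_clustered = [(0, 0), (1, 0), (0, 1)]        # poorly spread configuration
print(occlusion_metric(cams_spread, (5, 5)), occlusion_metric(cams_clustered, (5, 5)))
```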

12.
Automatic sensor placement from vision task requirements   Cited: 4 (self-citations: 0, citations by others: 4)
The problem of automatically generating the possible camera locations for observing an object is defined, and an approach to its solution is presented. The approach, which uses models of the object and the camera, is based on meeting the requirements that the spatial resolution be above a minimum value, all surface points be in focus, all surfaces lie within the sensor field of view, and no surface points be occluded. The approach converts each sensing requirement into a geometric constraint on the sensor location, from which the three-dimensional region of viewpoints that satisfies that constraint is computed. The intersection of these regions is the space in which a sensor may be located. The extension of this approach to laser-scanner range sensors is also described. Examples illustrate the resolution, focus, and field-of-view constraints for two vision tasks.
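The constraint-intersection idea can be sketched by testing candidate viewpoints against simple resolution, focus, and field-of-view checks and keeping only those that pass all of them. The scalar constraint models, the 2D geometry, and every numeric parameter below are illustrative assumptions (occlusion is ignored); they are not the geometric region constructions of the paper.

```python
# Minimal sketch: filter candidate viewpoints by resolution, focus, and FOV checks.
import math

FOCAL_MM, PIXEL_MM, SENSOR_HALF_ANGLE = 25.0, 0.01, math.radians(20)
FEATURE_SIZE_MM, MIN_PIXELS = 5.0, 10          # require >= 10 pixels across the feature
DEPTH_OF_FIELD = (300.0, 900.0)                # assumed in-focus distance band (mm)

def satisfies_all(viewpoint, target=(0.0, 0.0)):
    dx, dy = target[0] - viewpoint[0], target[1] - viewpoint[1]
    dist = math.hypot(dx, dy)                  # viewpoint-to-target distance (mm)
    # resolution: projected feature size in pixels must exceed the minimum
    resolution_ok = (FEATURE_SIZE_MM * FOCAL_MM / dist) / PIXEL_MM >= MIN_PIXELS
    # focus: distance must lie inside the depth-of-field band
    focus_ok = DEPTH_OF_FIELD[0] <= dist <= DEPTH_OF_FIELD[1]
    # field of view: angle between the viewing ray and the optical axis,
    # assumed here to point along +x toward the target region
    fov_ok = abs(math.atan2(dy, dx)) <= SENSOR_HALF_ANGLE
    return resolution_ok and focus_ok and fov_ok

candidates = [(-d, off) for d in range(100, 1500, 100) for off in (-200, 0, 200)]
feasible = [v for v in candidates if satisfies_all(v)]
print(len(feasible), "of", len(candidates), "candidate viewpoints satisfy all constraints")
```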

13.
Stereo matching and occlusion detection using a disparity consistency constraint   Cited: 1 (self-citations: 0, citations by others: 1)
After analyzing the occlusion and matching problems of stereo vision in disparity space, and in view of the shortcomings of existing binocular stereo matching constraint rules, this paper proposes the concept of a disparity consistency constraint based on detecting conflicts between left and right lines of sight in disparity space. The advantages of this constraint are its correctness and completeness: it remains valid in a wider range of complex scenes. A matching and occlusion detection algorithm that exploits the consistency constraint is also presented. Experimental results show that the use of fast correlation-based matching scores greatly improves computational speed and, more importantly, that the disparity consistency constraint yields good results even in complex scenes.
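A closely related and widely used check is the left-right disparity cross-check, sketched below: a pixel is flagged as occluded when its left-image disparity is not confirmed by the corresponding right-image disparity. This illustrates the general idea only; it is not the specific line-of-sight conflict test in disparity space that the paper formulates.

```python
# Minimal sketch of a left-right disparity consistency (cross-check) test.
import numpy as np

def lr_consistency(disp_left, disp_right, tol=1):
    h, w = disp_left.shape
    occluded = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            d = int(disp_left[y, x])
            xr = x - d                          # matching column in the right image
            if xr < 0 or abs(int(disp_right[y, xr]) - d) > tol:
                occluded[y, x] = True           # disparities disagree -> likely occlusion
    return occluded

# tiny example: one interior pixel whose left-image disparity the right image
# does not confirm is flagged as occluded
dl = np.array([[0, 0, 0, 3, 0]])
dr = np.array([[0, 0, 0, 0, 0]])
print(lr_consistency(dl, dr))
```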

14.
We present a system for constructing 3D models of real-world objects with optically challenging surfaces. The system utilizes a new range imaging concept called multi-peak range imaging, which stores multiple candidates of range measurements for each point on the object surface. The multiple measurements include the erroneous range data caused by various surface properties that are not ideal for structured-light range sensing. False measurements generated by spurious reflections are eliminated by applying a series of constraint tests. The constraint tests based on local surface and local sensor visibility are applied first to individual range images. The constraint tests based on global consistency of coordinates and visibility are then applied to all range images acquired from different viewpoints. We show the effectiveness of our method by constructing 3D models of five different optically challenging objects. To evaluate the performance of the constraint tests and to examine the effects of the parameters used in the constraint tests, we acquired the ground truth data by painting those objects to suppress the surface-related properties that cause difficulties in range sensing. Experimental results indicate that our method significantly improves upon the traditional methods for constructing reliable 3D models of optically challenging objects.

15.
This article presents a new method to solve a dynamic sensor fusion problem. We consider a large number of remote sensors which measure a common Gauss–Markov process. Each sensor encodes and transmits its measurement to a data fusion center through a resource-restricted communication network. The communication cost incurred by a given sensor is quantified as the expected bitrate from the sensor to the fusion center. We propose an approach that attempts to minimize a weighted sum of these communication costs subject to a constraint on the state estimation error at the fusion center. We formulate the problem as a difference-of-convex program and apply the convex-concave procedure (CCP) to obtain a heuristic solution. We consider a 1D heat transfer model and a model for 2D target tracking by a drone swarm for numerical studies. Through these simulations, we observe that our proposed approach tends to assign zero data rate to unnecessary sensors, indicating that it is sparsity-promoting and serves as an effective sensor selection heuristic.
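The convex-concave procedure used above linearizes the concave part of a difference-of-convex objective at the current iterate and minimizes the resulting convex surrogate, repeating until convergence. The one-dimensional toy problem below is only meant to show that mechanic; it is an invented example, not the bitrate-allocation program solved in the article.

```python
# Minimal sketch of the convex-concave procedure (CCP) on a toy 1-D
# difference-of-convex problem: minimize f(x) - g(x) with f(x) = x^2 (convex)
# and g(x) = 4|x| (convex), whose global minima are x = +/- 2 with value -4.

def ccp(x0, iters=20):
    x = x0
    for _ in range(iters):
        slope = 4.0 if x >= 0 else -4.0        # subgradient of g(x) = 4|x| at x
        # convex surrogate: x^2 - (g(x_k) + slope * (x - x_k)); minimizer is slope / 2
        x = slope / 2.0
    return x

for x0 in (0.3, -1.7):
    x_star = ccp(x0)
    print(x0, "->", x_star, "objective", x_star**2 - 4 * abs(x_star))
```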

16.
This article introduces a sensor placement measure called vision resolvability. The measure provides a technique for estimating the relative ability of various visual sensors, including monocular systems, stereo pairs, multi-baseline stereo systems, and 3D rangefinders, to accurately control visually manipulated objects. The resolvability ellipsoid illustrates the directional nature of resolvability, and can be used to direct camera motion and adjust camera intrinsic parameters in real-time so that the servoing accuracy of the visual servoing system improves with camera-lens motion. The Jacobian mapping from task space to sensor space is derived for a monocular system, a stereo pair with parallel optical axes, and a stereo pair with perpendicular optical axes. Resolvability ellipsoids based on these mappings for various sensor configurations are presented. Visual servoing experiments demonstrate that vision resolvability can be used to direct camera-lens motion to increase the ability of a visually servoed manipulator to precisely servo objects. © 1996 John Wiley & Sons, Inc.
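A directional ellipsoid of this kind can be sketched from the singular value decomposition of the task-space-to-sensor-space Jacobian: the right singular vectors give the ellipsoid axes and the singular values their relative lengths. The pinhole point-feature Jacobian, focal length, and depth below are illustrative assumptions for a monocular case, not the derivations given in the article.

```python
# Minimal sketch: directional resolvability from the SVD of an image Jacobian.
import numpy as np

def monocular_point_jacobian(f, X, Y, Z):
    """d(u,v)/d(X,Y,Z) for a pinhole projection u = f*X/Z, v = f*Y/Z."""
    return np.array([[f / Z, 0.0, -f * X / Z**2],
                     [0.0, f / Z, -f * Y / Z**2]])

J = monocular_point_jacobian(f=500.0, X=0.1, Y=0.05, Z=1.0)
U, s, Vt = np.linalg.svd(J)
# rows of Vt are task-space directions; the singular values give how strongly
# motion in each direction is resolved in the image
for direction, magnitude in zip(Vt, np.append(s, 0.0)):
    print(np.round(direction, 3), round(float(magnitude), 2))
# the zero-magnitude third axis is the line-of-sight (depth) direction,
# which a single camera resolves poorly
```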

17.
In this work, we present a new approach to distributed sensor data fusion (SDF) systems in multitarget tracking, called TSDF (Tessellated SDF), centered around a geographical partitioning (tessellation) of the data. A functional decomposition divides SDF into components that can be assigned to processing units, parallelizing the processing. The tessellation implicitly defines the set of tracks potentially yielding correlations with the sensor plots (observations) in a tile. Some tracks may occur as correlation candidates for multiple tiles. Conflicts caused by correlations of such tracks with plots in different tiles are resolved by combining all involved tracks and plots into independent data association problems. The benefit of the TSDF approach over a clustering-based process distribution is its independence of the problem space, which yields better scalability and manageability characteristics. The TSDF approach allows scaling in more than one way: it supports SDF for a single sensor, for multiple sensors on a single platform, and even for multiple sensors on multiple platforms. It also provides the flexibility to scale the processing to the size of the problem. This enables better control of the throughput to meet various timing constraints.

18.
Automatic Sensor Placement for Accurate Dimensional Inspection   Cited: 1 (self-citations: 0, citations by others: 1)
Deriving accurate 3D object dimensions with a passive vision system demands, in general, the use of multistation sensor configurations. In such configurations, object features appear in images from multiple viewpoints, facilitating their measurement by means of optical triangulation. Previous efforts toward automatic sensor placement have been restricted to single sensor station solutions. In this paper we review photogrammetric expertise in the design of multistation configurations, including the bundle method—a general mathematical model for optical triangulation—and fundamental considerations and constraints influencing the placement of sensor stations. An overview of CONSENS, an expert system-based software tool which exploits these considerations and constraints in automatically designing multistation configurations, is given. Examples of multistation configurations designed by CONSENS demonstrate the tool's capabilities and the potential of our approach for automating sensor placement for inspection tasks.
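Optical triangulation from two stations can be sketched very simply: given two camera centers and the viewing rays toward the same feature, the 3D point is recovered as the midpoint of the shortest segment between the two rays. This two-ray special case is for illustration only; the bundle method reviewed in the paper adjusts all stations and points simultaneously, which is not reproduced here.

```python
# Minimal sketch of two-station optical triangulation via the ray-midpoint method.
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """c1, c2: camera centers; d1, d2: unit viewing rays toward the feature."""
    # solve for ray parameters t1, t2 minimizing |(c1 + t1*d1) - (c2 + t2*d2)|
    A = np.column_stack((d1, -d2))
    t1, t2 = np.linalg.lstsq(A, c2 - c1, rcond=None)[0]
    return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))

point = np.array([1.0, 2.0, 5.0])                      # assumed ground-truth feature
c1, c2 = np.array([0.0, 0.0, 0.0]), np.array([3.0, 0.0, 0.0])
d1 = (point - c1) / np.linalg.norm(point - c1)
d2 = (point - c2) / np.linalg.norm(point - c2)
print(triangulate_midpoint(c1, d1, c2, d2))            # recovers approximately [1, 2, 5]
```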

19.
20.
This study considers a wireless sensor network (WSN) designed to track specified objects of interest such as bird calls, insect images, and so forth. An assumption is made that the sensors in the WSN are capable of analyzing and identifying detected objects and are pre-loaded with the features of the tracked objects before they are deployed. The features associated with the tracked objects are referred to as "model tuples". When a sensor subsequently detects an object, it extracts features from the detected object and then compares them with the tuples stored in its memory in order to determine whether or not the detected object is a tracked object. Since the sensors have only limited memory and storage space, it is impossible to store all the tuples on a single sensor. Furthermore, the sensors are battery operated, and thus the stored tuples are irretrievably lost once the sensor's energy resources have been consumed. As a result, the network no longer has complete knowledge of all the tracked information. Accordingly, the present study proposes four tuple dispatching schemes for distributing the tracked information amongst the sensors in such a way as to mitigate the effects of sensor energy depletion, namely sequential dispatching, sequential dispatching with overlap, fixed distance dispatching, and balanced incomplete block dispatching. In addition, an efficient diversity-driven selective forwarding scheme is proposed to resolve the problem in which the detected object fails to match the tuples held at the local sensor. In this approach, the local sensor applies the correlation between sensor identifiers and the indexes of the tuples stored at the various sensors to deliver the features of the object along the paths with the highest diversity. A series of simulation results benchmarks the performance of the proposed forwarding approach under each of the dispatching schemes against that of a blind flooding approach.
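As a rough illustration of dispatching tuples with redundancy so that tracked information survives node failures, the sketch below assigns each tuple to a fixed number of least-loaded sensors subject to a memory capacity. This is a generic replication stand-in; the four schemes named above (in particular balanced incomplete block dispatching) are not reproduced here.

```python
# Minimal sketch of dispatching model tuples to memory-limited sensors with overlap,
# so each tuple survives the loss of up to (copies - 1) sensors.

def dispatch_with_overlap(n_tuples, n_sensors, capacity, copies=2):
    """Assign each tuple id to `copies` distinct sensors, respecting per-sensor capacity."""
    storage = {s: [] for s in range(n_sensors)}
    for t in range(n_tuples):
        placed = 0
        for s in sorted(storage, key=lambda s: len(storage[s])):  # least-loaded first
            if len(storage[s]) < capacity and placed < copies:
                storage[s].append(t)
                placed += 1
        if placed < copies:
            raise ValueError("not enough total capacity for the requested overlap")
    return storage

storage = dispatch_with_overlap(n_tuples=10, n_sensors=6, capacity=4, copies=2)
for sensor, tuples in storage.items():
    print(f"sensor {sensor}: tuples {tuples}")
```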
