Although solid models play a central role in modern CAD systems, 2D CAD systems are still commonly used for designing products without complex curved faces. Therefore, an important task is to convert 2D drawings to solid models, and this is usually carried out manually even in present CAD systems. Many methods have been proposed to automatically convert orthographic part drawings of solid objects to solid models. Unfortunately, products are usually drawn as 2D assembly drawings, and therefore, these methods cannot be applied. A further problem is the difficult and time-consuming task of decomposing 2D assembly drawings into 2D part drawings. In previous work, the authors proposed a method to automatically decompose 2D assembly drawings into 3D part drawings, from which 2D part drawings can be easily generated. However, one problem with the proposed method was that the number of solutions could easily explode if the 2D assembly drawings became complex. Building on this work, here we describe a new method to automatically convert 2D assembly drawings to 3D part drawings, generating a unique solution for designers regardless of the complexity of the original 2D assembly drawings. The only requirement for the approach is that the assembly drawings consist of standard parts such as bars and plates. In 2D assembly drawings, the dimensions, part numbers and parts lists are usually drawn, and the proposed method utilizes these to obtain a unique solution.
Several recent papers have adapted notions of geometric topology to the emerging field of digital topology. An important notion is that of digital homotopy. In this paper, we study a variety of digitally-continuous functions that preserve homotopy types or homotopy-related properties such as the digital fundamental group.

Laurence Boxer is Professor of Computer and Information Sciences at Niagara University, and Research Professor of Computer Science and Engineering at the State University of New York at Buffalo. He received his Ph.D. in Mathematics from the University of Illinois at Urbana-Champaign. His research interests are computational geometry, parallel algorithms, and digital topology. Dr. Boxer is co-author, with Russ Miller, of Algorithms Sequential and Parallel: A Unified Approach, a recent textbook published by Prentice Hall.
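As a minimal illustration of the kind of "digitally-continuous function" studied here, the sketch below checks the standard adjacency-based continuity condition (adjacent points must map to equal or adjacent points) on a small digital image. The function and point names are illustrative only and are not taken from the paper.

```python
from itertools import combinations

def adjacent(p, q, kind=4):
    """Return True if distinct grid points p and q are 4- or 8-adjacent."""
    dx, dy = abs(p[0] - q[0]), abs(p[1] - q[1])
    if (dx, dy) == (0, 0):
        return False
    return dx + dy == 1 if kind == 4 else max(dx, dy) == 1

def is_digitally_continuous(f, X, kx=4, ky=4):
    """Check (kx, ky)-continuity: every kx-adjacent pair of X maps to
    equal or ky-adjacent image points."""
    for p, q in combinations(X, 2):
        if adjacent(p, q, kx):
            fp, fq = f(p), f(q)
            if fp != fq and not adjacent(fp, fq, ky):
                return False
    return True

# Example: a translation of a small digital interval is digitally continuous.
X = [(i, 0) for i in range(5)]
shift = lambda p: (p[0] + 1, p[1])
print(is_digitally_continuous(shift, X))  # True
```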
Wooded hedgerows do not cover large areas but perform many functions that are beneficial to water quality and biodiversity. A broad range of remotely sensed data is available to map these small linear elements in rural landscapes, but only a few of them have been evaluated for this purpose. In this study, we evaluate and compare various optical remote-sensing data including high and very high spatial resolution, active and passive, and airborne and satellite data to produce quantitative information on the hedgerow network structure and to analyse qualitative information from the maps produced in order to estimate the true value of these maps. We used an object-based image analysis that proved to be efficient for detecting and mapping thin elements in complex landscapes. The analysis was performed at two scales, the hedgerow network scale and the tree canopy scale, on a study site that shows a strong landscape gradient of wooded hedgerow density. The results (1) highlight the key role of spectral resolution on the detection and mapping of wooded elements with remotely sensed data; (2) underline the fact that every satellite image provides relevant information on wooded network structures, even in closed landscape units, whatever the spatial resolution; and (3) indicate that light detection and ranging data offer important insights into future strategies for monitoring hedgerows.
Virtualization is a pillar technology in cloud computing for multiplexing computing resources on a single cloud platform for multiple cloud tenants. Monitoring the behavior of virtual machines (VMs) on a cloud platform is a critical requirement for cloud tenants. Existing monitoring mechanisms on virtualized platforms either take a complete VM as the monitoring granularity, so that they cannot capture malicious behaviors within individual VMs, or they focus on specific monitoring functions that cannot be used for heterogeneous VMs concurrently running on a single cloud node. Furthermore, existing monitoring mechanisms assume that the privileged domain is trusted to act as expected, which raises security concerns for cloud tenants because the privileged domain may not in fact behave as they expect. We design a trusted monitoring framework that provides a chain of trust excluding the untrusted privileged domain, by deploying an independent guest domain for the monitoring purpose and by utilizing trusted computing technology to ensure the integrity of the monitoring environment. The framework also provides fine-grained and general monitoring. We have implemented the proposed monitoring framework on Xen and integrated it into OpenNebula. Our experimental results show that it offers the expected functionality and introduces moderate performance overhead.
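The chain of trust mentioned above rests on trusted-computing integrity measurement. The sketch below is a minimal, hypothetical illustration of the TPM-style "extend" operation commonly used to accumulate such measurements; it is not the framework's actual Xen/OpenNebula implementation, and the component names are invented for the example.

```python
import hashlib

def extend(register: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new register value = SHA-256(old register || measurement)."""
    return hashlib.sha256(register + measurement).digest()

def measure_chain(components):
    """Fold a sequence of component images into a single integrity register."""
    register = b"\x00" * 32  # initial (reset) register value
    for blob in components:
        register = extend(register, hashlib.sha256(blob).digest())
    return register

# Hypothetical components measured in boot order for the monitoring domain.
chain = [b"hypervisor-image", b"monitoring-domain-kernel", b"monitoring-agent"]
reference = measure_chain(chain)

# A remote verifier recomputes the chain from reference values and compares;
# any modified component changes every subsequent register value.
assert measure_chain(chain) == reference
```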
This study uses a hostage negotiation setting to demonstrate how a team of strategic police officers can utilize specific coping strategies to minimize uncertainty at different stages of their decision-making in order to foster resilient decision-making to effectively manage a high-risk critical incident. The presented model extends the existing research on coping with uncertainty by (1) applying the RAWFS heuristic (Lipshitz and Strauss in Organ Behav Human Decis Process 69:149–163, 1997) of individual decision-making under uncertainty to a team critical incident decision-making domain; (2) testing the use of various coping strategies during “in situ” team decision-making by using a live simulated hostage negotiation exercise; and (3) including an additional coping strategy (“reflection-in-action”; Schön in The reflective practitioner: how professionals think in action. Temple Smith, London, 1983) that aids naturalistic team decision-making. The data for this study were derived from a videoed strategic command meeting held within a simulated live hostage training event; these video data were coded along three themes: (1) decision phase; (2) uncertainty management strategy; and (3) decision implemented or omitted. Results illustrate that, when assessing dynamic and high-risk situations, teams of police officers cope with uncertainty by relying on “reduction” strategies to seek additional information and iteratively update these assessments using “reflection-in-action” (Schön 1983) based on previous experience. They subsequently progress to a plan formulation phase and use “assumption-based reasoning” techniques in order to mentally simulate their intended courses of action (Klein et al. 2007), and identify a preferred formulated strategy through “weighing the pros and cons” of each option. In the unlikely event that uncertainty persists to the plan execution phase, it is managed by “reduction” in the form of relying on plans and standard operating procedures or by “forestalling” and intentionally deferring the decision while contingency planning for worst-case scenarios.
With the advent of the Next Generation Network (NGN), services that are currently provided by multiple specific network-centric architectures can be delivered over a single converged architecture. NGN provides AAA (Anytime, Anywhere and Always-on) access to users from different service providers with consistent and ubiquitous provision of services as necessary. This special issue on NGN covers pervasive, grid, and peer-to-peer computing for providing computing and communication services anytime and anywhere. Applications of NGN include digital image processing, multimedia systems and services, and so on. Here we focus on digital image processing technology in NGN environments. Low-contrast structure and heavy noise can be found in many kinds of digital images in NGN environments, especially X-ray images, which makes the images vague and uncertain. As a result, some useful fine features are weakened and become difficult to distinguish even with the naked eye. Based on the combination of a non-linear gradient-contrast operator and multi-resolution wavelet analysis, an image enhancement algorithm for these fine features is presented. The algorithm can enhance fine features while limiting noise amplification. Analysis of the results shows that local regions of the image are enhanced adaptively, using the gradient-contrast concept to make the image clearer. Experiments were conducted on real pictures, and the results show that the algorithm is flexible and convenient.
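As a rough illustration of the combination described above, the following sketch applies a multi-resolution wavelet decomposition and a simple nonlinear mapping to the detail coefficients. It assumes NumPy and PyWavelets, and the nonlinear mapping is a generic stand-in for the authors' gradient-contrast operator, not their actual algorithm.

```python
import numpy as np
import pywt

def enhance(image, wavelet="db2", levels=3, gain=1.8, noise_floor=0.02):
    """Multi-resolution enhancement sketch: lift weak-but-significant detail
    coefficients with a concave nonlinear mapping while zeroing coefficients
    at or below an assumed noise floor."""
    img = image.astype(float)
    img = (img - img.min()) / (img.max() - img.min() + 1e-9)   # normalize to [0, 1]
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    approx, details = coeffs[0], coeffs[1:]
    out = [approx]
    for cH, cV, cD in details:
        boosted = []
        for c in (cH, cV, cD):
            keep = np.abs(c) > noise_floor                     # likely structure, not noise
            boosted.append(np.where(keep, np.sign(c) * np.abs(c) ** (1.0 / gain), 0.0))
        out.append(tuple(boosted))
    enhanced = pywt.waverec2(out, wavelet)[: img.shape[0], : img.shape[1]]
    return np.clip(enhanced, 0.0, 1.0)
```

For coefficient magnitudes below one, the exponent 1/gain < 1 raises weak detail coefficients relative to strong ones, while the noise-floor threshold keeps the smallest (noise-like) coefficients from being amplified.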
This study strives to establish an objective basis for image compositing in satellite oceanography. Image compositing is a powerful technique for cloud filtering that often emphasizes cloud clearing at the expense of obtaining synoptic coverage. Although incomplete cloud removal in image compositing is readily apparent, the loss of synopticity often is not. Consequently, the primary goal of image compositing should be to obtain the greatest amount of cloud-free coverage, or clarity, in a period short enough that synopticity is preserved to a significant degree.

To illustrate the process of image compositing and the problems associated with it, we selected a region off the coast of California and constructed two 16-day image composites, one during the spring and the second during the summer of 2006, using Advanced Very High Resolution Radiometer (AVHRR) InfraRed (IR) satellite imagery. Based on the results of cloud clearing for these two 16-day sequences, rapid cloud clearing occurred up to day 4 or 5, followed by much slower cloud clearing out to day 16, suggesting an explicit basis for the growth in cloud clearing. By day 16, the cloud clearing had, in most cases, exceeded 95%. Based on these results, a shorter compositing period could have been employed without a significant loss in clarity.

A method for establishing an objective basis for selecting the period for image compositing is illustrated using observed data. The loss in synopticity, which in principle could be estimated from pattern correlations between the images in the composite, was estimated from a separate time series of SST, since the loss of synopticity in our approach is only a function of time. The autocorrelation function of the detrended residuals provided the decorrelation time scale and the basis for the decay process, which together define the loss of synopticity. The results show that (1) the loss of synopticity and the gain in clarity are inversely related, (2) an objective basis for selecting a compositing period corresponds to the day number where the decay and growth curves for synopticity and clarity intersect, and (3) in this case, the point of intersection occurred 3.2 days into the compositing period. By applying simple mathematics, it was shown that the intersection time for the loss in synopticity and the growth in clarity is directly proportional to the initial conditions required to specify the clarity at the beginning of the compositing period, and inversely proportional to the sum of the rates of growth for clarity and the loss in synopticity. Finally, we consider these results to be preliminary in nature and hope that future work will bring forth significant improvements in the approach outlined in this study.
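The proportionality stated above can be made concrete with a simple assumed linear-rate model (the abstract does not give the functional forms, so this is only a sketch):

```latex
% Assumed linear-rate model: C_0 = initial clarity, r_c = clarity growth rate,
% r_s = rate of loss of synopticity over the compositing period.
\[
  C(t) = C_0 + r_c\,t, \qquad S(t) = 1 - r_s\,t,
  \qquad
  C(t^{\ast}) = S(t^{\ast}) \;\Longrightarrow\;
  t^{\ast} = \frac{1 - C_0}{r_c + r_s}.
\]
```

Under this assumption the crossover time grows with the initial clarity deficit (1 − C_0) and shrinks as either rate increases, consistent with the proportionality stated above and with the reported 3.2-day intersection.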
Spatial regularity amidst a seemingly chaotic image is often meaningful. Many papers in computational geometry are concerned with detecting some type of regularity via exact solutions to problems in geometric pattern recognition. However, real-world applications often have data that is approximate, and may rely on calculations that are approximate. Thus, it is useful to develop solutions that have an error tolerance.
A solution has recently been presented by Robins et al. [Inform. Process. Lett. 69 (1999) 189–195] to the problem of finding all maximal subsets of an input set in the Euclidean plane that are approximately equally spaced and approximately collinear. This is a problem that arises in computer vision, military applications, and other areas. The algorithm of Robins et al. differs in several important respects from the optimal algorithm given by Kahng and Robins [Pattern Recognition Lett. 12 (1991) 757–764] for the exact version of the problem. The algorithm of Robins et al. seems inherently sequential and runs in O(n^{5/2}) time, where n is the size of the input set. In this paper, we give parallel solutions to this problem.
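For concreteness, the sketch below checks the predicate at the heart of the problem: whether an ordered point sequence is approximately collinear and approximately equally spaced within given tolerances. It is a naive illustration with hypothetical tolerance parameters, not the Robins et al. algorithm or the parallel solutions proposed here.

```python
import math

def approx_collinear_equally_spaced(points, eps=0.1, delta=0.1):
    """Return True if the ordered sequence of 2D points lies within eps of the
    line through its endpoints and its consecutive gaps are within delta of
    the mean gap."""
    if len(points) < 3:
        return True
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy)
    if length == 0:
        return False
    # Perpendicular distance of every point from the endpoint-to-endpoint line.
    for (x, y) in points:
        if abs(dy * (x - x0) - dx * (y - y0)) / length > eps:
            return False
    # Consecutive gaps should all lie within delta of the mean gap.
    gaps = [math.dist(points[i], points[i + 1]) for i in range(len(points) - 1)]
    mean = sum(gaps) / len(gaps)
    return all(abs(g - mean) <= delta for g in gaps)

# Example: a slightly perturbed arithmetic sequence on a line passes the test.
pts = [(i + 0.02 * (-1) ** i, 2 * i) for i in range(6)]
print(approx_collinear_equally_spaced(pts))  # True
```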