This study strives to establish an objective basis for image compositing in satellite oceanography. Image compositing is a powerful technique for cloud filtering that often emphasizes cloud clearing at the expense of obtaining synoptic coverage. Although incomplete cloud removal in image compositing is readily apparent, the loss of synopticity often is not. Consequently, the primary goal of image compositing should be to obtain the greatest amount of cloud-free coverage, or clarity, in a period short enough that synopticity is preserved to a significant degree.

To illustrate the process of image compositing and the problems associated with it, we selected a region off the coast of California and constructed two 16-day image composites, one during the spring and the second during the summer of 2006, using Advanced Very High Resolution Radiometer (AVHRR) infrared (IR) satellite imagery. Based on the results of cloud clearing for these two 16-day sequences, rapid cloud clearing occurred up to day 4 or 5, followed by much slower cloud clearing out to day 16, suggesting an explicit basis for the growth in cloud clearing. By day 16, the cloud clearing had, in most cases, exceeded 95%. Based on these results, a shorter compositing period could have been employed without a significant loss in clarity.

A method for establishing an objective basis for selecting the period for image compositing is illustrated using observed data. The loss in synopticity, which in principle could be estimated from pattern correlations between the images in the composite, was estimated from a separate time series of sea surface temperature (SST), since the loss of synopticity in our approach is only a function of time. The autocorrelation function of the detrended residuals provided the decorrelation time scale and the basis for the decay process, which together define the loss of synopticity. The results show that (1) the loss of synopticity and the gain in clarity are inversely related, (2) an objective basis for selecting a compositing period corresponds to the day number where the decay and growth curves for synopticity and clarity intersect, and (3) in this case, the point of intersection occurred 3.2 days into the compositing period. By applying simple mathematics, it was shown that the intersection time for the loss in synopticity and the growth in clarity is directly proportional to the initial conditions required to specify the clarity at the beginning of the compositing period, and inversely proportional to the sum of the rates of growth for clarity and the loss in synopticity. Finally, we consider these results to be preliminary in nature and hope that future work will bring forth significant improvements in the approach outlined in this study.
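The stated proportionality can be made concrete with a linearized sketch. Assuming, purely for illustration, that clarity grows as C(t) = C0 + g·t while synopticity decays as S(t) = S0 − d·t, the two curves intersect at t* = (S0 − C0)/(g + d). The parameter values below are hypothetical and are not taken from the study; they merely show that t* grows with the initial clarity deficit and shrinks as the two rates increase.

```python
# Illustrative sketch only, not the authors' exact formulation: with a
# linear growth of clarity C(t) = c0 + g*t and a linear decay of
# synopticity S(t) = s0 - d*t, the curves cross where C(t) = S(t).
def intersection_time(c0, g, s0, d):
    """Day number at which the clarity and synopticity curves intersect."""
    return (s0 - c0) / (g + d)

# Hypothetical values chosen only to illustrate the dependence described
# in the abstract (they are not fitted to the AVHRR composites).
print(intersection_time(c0=0.35, g=0.12, s0=1.0, d=0.08))  # ~3.25 days
```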
Spatial regularity amidst a seemingly chaotic image is often meaningful. Many papers in computational geometry are concerned with detecting some type of regularity via exact solutions to problems in geometric pattern recognition. However, real-world applications often have data that is approximate and may rely on calculations that are approximate. Thus, it is useful to develop solutions that have an error tolerance. A solution has recently been presented by Robins et al. [Inform. Process. Lett. 69 (1999) 189–195] to the problem of finding all maximal subsets of an input set in the Euclidean plane that are approximately equally spaced and approximately collinear. This is a problem that arises in computer vision, military applications, and other areas. The algorithm of Robins et al. differs in several important respects from the optimal algorithm given by Kahng and Robins [Pattern Recognition Lett. 12 (1991) 757–764] for the exact version of the problem. The algorithm of Robins et al. seems inherently sequential and runs in O(n^(5/2)) time, where n is the size of the input set. In this paper, we give parallel solutions to this problem.
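To make the problem statement concrete, the predicate below checks one candidate subset for approximate equal spacing and approximate collinearity under a tolerance eps. It is a minimal sketch of the underlying geometric test, not the algorithm of Robins et al. or of Kahng and Robins, and the tolerance convention is an assumption.

```python
import math

def approx_collinear_equally_spaced(points, eps):
    """Return True if an ordered sequence of 2D points is approximately
    collinear and approximately equally spaced, within tolerance eps.
    Illustrative predicate only; not the published algorithms."""
    if len(points) < 3:
        return True
    (x0, y0), (x1, y1) = points[0], points[1]
    step = math.hypot(x1 - x0, y1 - y0)  # reference spacing from the first gap
    for (ax, ay), (bx, by), (cx, cy) in zip(points, points[1:], points[2:]):
        # spacing check: each consecutive gap stays within eps of the first gap
        if abs(math.hypot(cx - bx, cy - by) - step) > eps:
            return False
        # collinearity check: perpendicular distance of c from the line a-b
        cross = abs((bx - ax) * (cy - ay) - (by - ay) * (cx - ax))
        if cross / max(math.hypot(bx - ax, by - ay), 1e-12) > eps:
            return False
    return True
```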
Information security management has become an important research issue in distributed systems, and the detection of failures is a fundamental issue for fault tolerance in large distributed systems. Recently, it has become widely recognized that failure detection ought to be provided as some form of generic service, similar to IP address lookup. However, this has not been successful so far, one reason being that classical failure detectors were not designed to satisfy several application requirements simultaneously. More specifically, traditional implementations of failure detectors are often tuned for running over local networks and fail to address some important problems found in wide-area distributed systems with a large number of monitored components, such as Grid systems. In this paper, we study a security management scheme for failure detection in distributed systems. We first identify some of the most important QoS problems raised in the context of large wide-area distributed systems. Then we present a novel failure detector scheme, combined with self-tuning control theory, that can help in solving or optimizing some of these problems. Furthermore, this paper discusses the design and analysis of a scalable failure detection service for such large wide-area distributed systems in which the heartbeat streams are adjusted dynamically so that they satisfy the bottleneck router requirements. The basic z-transform stability test is used to derive the stability criterion, which ensures bounded rate allocation without steady-state oscillation. We further show how the online failure detector control algorithm can be used to design a controller, analyze the theoretical aspects of the proposed algorithm, and verify its agreement with simulations in the LAN and WAN cases. Simulation results show the efficiency of our scheme in terms of high utilization of the bottleneck link, fast response, and good stability of the bottleneck router buffer occupancy as well as of the controlled sending rates. In conclusion, the new security management failure detector algorithm provides better QoS than the algorithms proposed by Stelling et al. (Proceedings of the 7th IEEE Symposium on High Performance Distributed Computing, pp. 268–278, 1998) and Foster et al. (Int J Supercomput Appl, 2001).
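A minimal sketch of the control idea, assuming a simple integrator model of the aggregate heartbeat rate and a PI adjustment law; the model, gains, and function names are illustrative assumptions, not the scheme analyzed in the paper. The stability check mirrors the z-domain criterion: the closed-loop poles must lie inside the unit circle.

```python
import numpy as np

def closed_loop_stable(kp, ki):
    """z-transform stability test for the sketched PI loop below.
    With the rate model r[k+1] = r[k] + u[k] and u[k] = kp*e[k] + ki*I[k],
    where I accumulates the error including the current step, the
    closed-loop characteristic polynomial is z^2 + (kp + ki - 2)z + (1 - kp).
    The loop is stable iff all roots lie strictly inside the unit circle."""
    roots = np.roots([1.0, kp + ki - 2.0, 1.0 - kp])
    return bool(np.all(np.abs(roots) < 1.0))

def adjust_heartbeat(measured_rate, target_rate, integral, kp=0.5, ki=0.1):
    """One control step: nudge the aggregate heartbeat rate toward the rate
    the bottleneck router can sustain, then return the new per-process
    sending interval and the updated integral state."""
    error = target_rate - measured_rate
    integral += error
    new_rate = max(1e-6, measured_rate + kp * error + ki * integral)
    return 1.0 / new_rate, integral

assert closed_loop_stable(0.5, 0.1)  # the default gains pass the unit-circle test
```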
This interactive system for garment creation determines a garment's shape, and how the character wears it, from a user-drawn sketch. The system then uses distances between the 2D garment silhouette and the character model to infer the remaining distance variations in 3D.
In a competitive business environment, textile manufacturers intend to propose diversified products according to consumers' preferences. For this purpose, the integration of sensory attributes into the choice of process parameters seems to be a useful alternative. This paper provides fuzzy and neural models for the prediction of sensory properties from the production parameters of knitted fabrics. The prediction accuracy of these models was evaluated using both the root mean square error (RMSE) and the mean relative percent error (MRPE). The results revealed the models' ability to predict tactile sensory attributes based on the production parameters. The comparison of the prediction performances showed that the neural models are slightly more powerful than the fuzzy models.
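For reference, the two error measures named above can be written as follows; the exact normalization used by the authors (for example, how MRPE handles zero targets) may differ, so treat this as a sketch.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mrpe(y_true, y_pred):
    """Mean relative percent error (assumes nonzero targets)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)

# e.g. comparing predicted and panel-rated scores for one sensory attribute
sensed = [3.2, 4.1, 2.8, 3.9]
predicted = [3.0, 4.3, 2.9, 3.7]
print(rmse(sensed, predicted), mrpe(sensed, predicted))
```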
This paper studies parallel training of an improved neural network for text categorization. With the explosive growth in the amount of digital information available on the Internet, the text categorization problem has become increasingly important, especially now that millions of mobile devices are connecting to the Internet. The improved back-propagation neural network (IBPNN) is an efficient approach to classification problems that overcomes the limitations of the traditional BPNN. In this paper, we utilize parallel computing to speed up the IBPNN training process. The parallel IBPNN algorithm for text categorization is implemented on a Sun cluster with 34 nodes (processors). The communication time and speedup of the parallel IBPNN versus various numbers of nodes are studied. Experiments are conducted on various data sets, and the results show that the parallel IBPNN together with the SVD technique achieves fast computational speed and high text categorization accuracy.
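The combination of SVD preprocessing and data-parallel training can be sketched as below; the reduction rank, the sharding scheme, and all names are assumptions for illustration, not the paper's actual configuration or cluster setup.

```python
import numpy as np

def svd_reduce(term_doc, k):
    """Project documents onto the top-k left singular vectors of a
    term-by-document matrix (a truncated-SVD feature reduction)."""
    u, s, vt = np.linalg.svd(term_doc, full_matrices=False)
    return term_doc.T @ u[:, :k]  # one k-dimensional feature row per document

def split_for_workers(features, labels, n_workers):
    """Data-parallel partition: each worker node trains on its own shard,
    and weight updates are merged (e.g. averaged) between epochs."""
    idx = np.array_split(np.arange(len(labels)), n_workers)
    return [(features[i], labels[i]) for i in idx]

docs = np.random.rand(500, 40)              # toy data: 500 terms x 40 documents
labels = np.random.randint(0, 3, size=40)   # 3 toy categories
shards = split_for_workers(svd_reduce(docs, k=20), labels, n_workers=4)
```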
As a fundamental component of wireless networks, location management consists of two operations: location update and paging. These two complementary operations enable ubiquitous mobility for the mobile user. However, when a location update fails, the location identity stored in the network databases becomes obsolete, and a valid route for a subsequent call connection cannot be established, which seriously degrades the network quality-of-service (QoS). This issue has not been studied theoretically in the literature. In this paper, we perform a quantitative analysis of the effect of location management on QoS in wireless networks. Two metrics, the call blocking probability and the average number of blocked calls, are introduced to reflect the QoS. For general applicability, the performance metrics are formulated with relaxed tele-traffic parameters; namely, the call inter-arrival time, cell residence time, location area residence time, and location update inter-arrival time each follow a general probability density function. The formulae are further specialized for the static and for several dynamic location management mechanisms. Numerical examples are presented to show the interaction between the performance metrics and the location management schemes. The sensitivity of the results to the tele-traffic parameters is also discussed.
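The mechanism behind the blocking metric can be illustrated with a toy Monte Carlo model, assuming only that calls arriving while the stored location identity is obsolete are blocked; the distributions and parameters below are placeholders, not the general probability density functions analyzed in the paper.

```python
import random

def blocking_probability(p_update_failure, calls_per_update=2.0, trials=100_000):
    """Toy estimate of the call blocking probability: in each update period,
    a failed location update leaves an obsolete location identity, so all
    calls arriving before the next update are blocked."""
    blocked = total = 0
    for _ in range(trials):
        update_ok = random.random() >= p_update_failure
        # number of incoming calls before the next update (toy distribution)
        n_calls = int(random.expovariate(1.0 / calls_per_update)) + 1
        total += n_calls
        if not update_ok:
            blocked += n_calls  # every call in this period fails to find a route
    return blocked / total

print(blocking_probability(p_update_failure=0.05))  # roughly 0.05 in this toy model
```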