Multi-valued and universal binary neurons (MVN and UBN) are neural processing elements with complex-valued weights and high functionality. An arbitrary mapping described by a partially defined multiple-valued function can be implemented on a single MVN. An arbitrary mapping described by a partially or fully defined Boolean function, which may be non-threshold, can be implemented on a single UBN. Quickly converging learning algorithms exist for both types of neuron. These features make the MVN and UBN suitable for solving a variety of problems. One of their most successful applications is as basic neurons in Cellular Neural Networks (CNN). This opens new, effective opportunities in nonlinear image filtering and its applications to noise reduction, edge detection, and the super-resolution problem. A number of experimental results are presented to illustrate the performance of the proposed algorithms. An erratum to this article has been published.
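The abstract does not state the MVN's activation function, but the standard k-valued MVN maps its complex weighted sum onto the k-th roots of unity according to the angular sector in which the sum's argument falls. A minimal sketch under that conventional definition (the function names and the choice k = 4 are illustrative, not from the paper):

```python
import cmath
import math

def mvn_activation(z: complex, k: int) -> complex:
    """Discrete k-valued MVN activation: map the weighted sum z to the
    k-th root of unity whose angular sector contains arg(z)."""
    arg = cmath.phase(z) % (2 * math.pi)   # argument normalized to [0, 2*pi)
    j = int(arg * k / (2 * math.pi))       # sector index in 0..k-1
    return cmath.exp(2j * math.pi * j / k)

def mvn_output(weights, inputs, k):
    """Complex weighted sum of the inputs followed by the activation."""
    z = sum(w * x for w, x in zip(weights, inputs))
    return mvn_activation(z, k)
```

Learning then amounts to adjusting the complex weights until every training sample's weighted sum lands in the desired sector, which is why convergence is fast for threshold-representable mappings.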
A theory is presented that explains how the visual system infers the lightness, opacity, and depth of surfaces from stereoscopic images. It is shown that the polarity and magnitude of image contrast play distinct roles in surface perception, which can be captured by 2 principles of perceptual inference. First, a contrast depth asymmetry principle articulates how the visual system computes the ordinal depth and lightness relationships from the polarity of local, binocularly matched image contrast. Second, a global transmittance anchoring principle expresses how variations in contrast magnitudes are used to infer the presence of transparent surfaces. It is argued that these principles provide a unified explanation of how the visual system computes the 3-D surface structure of opaque and transparent surfaces. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
In this paper we suggest a new statistical method of correcting the results of hot-line experiments for the effects of background sources and we use the new method to reassess the adequacy of three probability distributions proposed in the literature for image spread from line sources. The data are from sources labelled with 125I in semi-thin resin sections 0·4-0·8 μm in thickness. The new method reveals that two of the models describe the empirical distributions more closely than earlier analysis had suggested, and it confirms an increasing relationship between half distance of image spread and the thickness of the source. However, it also confirms that considerable ‘inter hot-line’ experimental variation remains, even after background correction. This suggests that multiple experiments are needed to produce reliable estimates of half distance.
GENIUS-TF (Nucl. Instr. and Meth. A 511 (2003) 341; Nucl. Instr. and Meth. A 481 (2002) 149) is a test facility for the GENIUS project (GENIUS-Proposal, 20 November 1997; Z. Phys. A 359 (1997) 351; CERN Courier, November 1997, 16; J. Phys. G 24 (1998) 483; Z. Phys. A 359 (1997) 361; in: H.V. Klapdor-Kleingrothaus, H. Pas. (Eds.), First International Conference on Particle Physics Beyond the Standard Model, Castle Ringberg, Germany, 8–14 June 1997, IOP Bristol (1998) 485 and in Int. J. Mod. Phys. A 13 (1998) 3953; in: H.V. Klapdor-Kleingrothaus, I.V. Krivosheina (Eds.), Proceedings of the Second International Conference on Particle Physics Beyond the Standard Model BEYOND' 99, Castle Ringberg, Germany, 6–12 June 1999, IOP Bristol (2000) 915), a proposed large-scale underground observatory for rare events based on the operation of naked germanium detectors in liquid nitrogen for extreme background reduction. The operation of naked Ge crystals in liquid nitrogen has been applied routinely for more than 20 years by the CANBERRA Company for technical function tests (CANBERRA Company, private communication, 5 March 2004), but it had never found its way into basic research. The first successful tests applying this method to nuclear spectroscopy were performed in Heidelberg only in 1997 (Klapdor-Kleingrothaus et al., 1997, 1998; J. Hellmig and H.V. Klapdor-Kleingrothaus, 1997).
On May 5, 2003 the first four naked high-purity germanium detectors (total mass 10.52 kg) were installed in liquid nitrogen in the GENIUS Test Facility at the Gran Sasso underground laboratory. Since then the experiment has been running continuously, testing the novel technique for the first time in an underground laboratory and over an extended period.
In this work, we present the first analysis of the GENIUS-TF background after completion of the external shielding, which took place in December 2003. We focus especially on the background coming from 222Rn daughters. This is at present a factor of 200 higher than expected from simulation. It is still compatible with the scientific goal of GENIUS-TF, namely to search for cold dark matter via the modulation signal, but at the present level it would cause serious problems for a full GENIUS-like experiment using liquid nitrogen.
Corner detection is a low-level feature detection operation of great use in image processing applications, for example optical flow and structure from motion by image correspondence. The detection of corners is computationally intensive, and past implementations of corner detection techniques have been restricted to software. In this paper we propose an efficient very large-scale integration (VLSI) architecture for detecting corners in images. The corner detection technique is based on the half-edge concept and the first directional derivative of the Gaussian. Apart from the locations of the corner points, the algorithm also computes the corner orientation and corner angle and outputs the edge map of the image. The symmetry properties of the masks are exploited to reduce the number of convolutions from eight to two, so the number of multiplications required per pixel drops from 1800 to 392. The proposed architecture thus yields a speed-up factor of 4.6 over conventional convolution architectures. The architecture uses the principles of pipelining and parallelism and can be implemented in VLSI.
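The abstract does not spell out how the mask symmetry cuts eight convolutions to two; a plausible reading is the steerability of first-derivative-of-Gaussian filters, where the response at any orientation is a linear combination of just two base responses. A sketch under that assumption (the kernel size, sigma, and function names are illustrative, not from the paper):

```python
import numpy as np

def gaussian_deriv_kernels(sigma=1.0, radius=3):
    """First derivatives of a 2-D Gaussian along x and y (up to a constant)."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return -xx * g, -yy * g

def convolve2d(img, ker):
    """Minimal 'valid' 2-D convolution (kernel flipped, no padding)."""
    kh, kw = ker.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * ker[::-1, ::-1])
    return out

def directional_derivatives(img, n_dirs=8, sigma=1.0):
    """Steer the two base responses into n_dirs directional derivatives:
    only two convolutions are ever performed, regardless of n_dirs."""
    gx, gy = gaussian_deriv_kernels(sigma)
    rx, ry = convolve2d(img, gx), convolve2d(img, gy)
    thetas = [2 * np.pi * k / n_dirs for k in range(n_dirs)]
    return [np.cos(t) * rx + np.sin(t) * ry for t in thetas]
```

Because the eight directional responses are built from per-pixel scalar combinations of two convolution results, the multiplication count per pixel is dominated by the two base convolutions, matching the kind of reduction the abstract reports.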
An efficient algorithm for the random packing of spheres can significantly reduce the cost of preparing the initial configuration often required in discrete element simulations. Generating such a random packing at large scale is not trivial, particularly when spheres of various sizes and geometric domains of different shapes are involved. Motivated by the idea of compression, complemented by shaking (an efficient physical process for increasing packing density), a new approach, termed the compression algorithm, is proposed in this work to randomly fill arbitrary polyhedral or cylindrical domains with spheres of various sizes. The algorithm features both simplicity and high efficiency. Tests show that it takes 181 s on a 1.4-GHz PC to fill a cylindrical domain with a total of 26,787 spheres, achieving a packing density of 52.89%.
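The abstract does not give enough detail to reproduce the compression-and-shaking scheme itself; for context, the baseline such algorithms improve on can be sketched as naive rejection sampling of non-overlapping spheres in a cylinder. All names and parameters below are illustrative, and this is explicitly not the paper's compression algorithm:

```python
import math
import random

def pack_cylinder(radius, height, r_min, r_max, n_target,
                  max_tries=20000, seed=0):
    """Naive rejection-sampling packing of non-overlapping spheres of
    random radii into a cylinder. An illustrative baseline only; the
    paper's compression algorithm instead densifies an initial packing."""
    rng = random.Random(seed)
    spheres = []  # list of (x, y, z, r)
    tries = 0
    while len(spheres) < n_target and tries < max_tries:
        tries += 1
        r = rng.uniform(r_min, r_max)
        # sample a center uniformly inside the cylinder shrunk by r,
        # so the whole sphere fits inside the domain
        rho = (radius - r) * math.sqrt(rng.random())
        phi = rng.uniform(0.0, 2 * math.pi)
        x, y = rho * math.cos(phi), rho * math.sin(phi)
        z = rng.uniform(r, height - r)
        if all((x - a)**2 + (y - b)**2 + (z - c)**2 >= (r + s)**2
               for a, b, c, s in spheres):
            spheres.append((x, y, z, r))
    return spheres

def packing_density(spheres, radius, height):
    """Fraction of the cylinder volume occupied by the spheres."""
    vol = sum(4.0 / 3.0 * math.pi * r**3 for *_, r in spheres)
    return vol / (math.pi * radius**2 * height)
```

Rejection sampling stalls well below 52.89% density because late placements are almost always rejected, which is exactly the gap a compression-plus-shaking scheme is designed to close.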