Similar Articles
20 similar articles found.
1.
Theory for random arrays predicts a mean sidelobe level given by the inverse of the number of elements. In practice, however, the sidelobe level fluctuates considerably about this mean. In this paper two optimization methods for thinned arrays are given: one optimizes the weights of each element, and the other optimizes both the layout and the weights. The weight-optimization algorithm is based on linear programming and minimizes the peak sidelobe level for a given beamwidth. It is used to investigate the conditions for finding thinned arrays with a peak sidelobe level at or below the inverse of the number of elements. With optimization of the weights of a randomly thinned array, it is possible to come quite close to and even below this value, especially for 1D arrays. Even for 2D sparse arrays a large reduction in peak sidelobe level is achieved. Still better solutions are found when the thinning pattern is also optimized; this requires an algorithm based on mixed-integer linear programming. In this case, solutions with a peak sidelobe level below the inverse of the number of elements can be found in both the 1D and 2D cases.
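The 1/N prediction for random arrays is easy to check numerically. The sketch below is not the paper's linear-programming optimizer; it just evaluates the array factor of a randomly thinned linear array on a half-wavelength grid and compares the peak sidelobe with the inverse of the element count. The grid size, thinning fraction, and sidelobe scan range are illustrative assumptions.

```python
import cmath
import math
import random

def array_factor(positions, weights, u):
    """Array factor magnitude at u = sin(theta) for elements on a
    half-wavelength grid; mainlobe equals 1 when the weights sum to 1."""
    return abs(sum(w * cmath.exp(1j * math.pi * p * u)
                   for p, w in zip(positions, weights)))

random.seed(0)
n_grid, n_kept = 64, 24
positions = sorted(random.sample(range(n_grid), n_kept))  # random thinning
weights = [1.0 / n_kept] * n_kept                         # uniform weights

# Scan the sidelobe region (u well outside the mainlobe) for the peak level.
peak = max(array_factor(positions, weights, u / 500.0)
           for u in range(60, 500))
print(peak, 1.0 / n_kept)  # peak sidelobe amplitude vs. inverse element count
```

For a random thinning the peak fluctuates well above the mean level, which is what motivates optimizing the weights (and, further, the layout) in the paper.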

2.
For some covering arrays, there are wild card positions in the array where there is flexibility about which factor levels can be chosen with no impact on the basic properties of the covering array, because all of the required pairs have already been covered. The choice of how to fill these wild card positions can influence other properties, such as the degree of orthogonality or the three-way coverage of the array. In this paper, criteria are proposed for identifying the best choices for the wild card positions to create covering arrays with highly desirable properties. Accompanying graphical summaries are also described to highlight differing performance for several examples. Copyright © 2017 John Wiley & Sons, Ltd.
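A wild card position can be found by brute force: a cell is a wild card if every level choice for it still leaves all required pairs covered. This is a minimal sketch of that check only; the paper's selection criteria (orthogonality, three-way coverage) are not implemented here, and the example array is an illustrative assumption.

```python
from itertools import combinations, product

def covers_all_pairs(array, levels):
    """True if every pair of columns exhibits every combination of levels."""
    for i, j in combinations(range(len(array[0])), 2):
        seen = {(row[i], row[j]) for row in array}
        if seen != set(product(levels, repeat=2)):
            return False
    return True

def wildcard_positions(array, levels):
    """Cells whose level can be set to any value without losing 2-way coverage."""
    wild = []
    for r in range(len(array)):
        for c in range(len(array[0])):
            variants = ([row[:c] + [lv] + row[c + 1:] if k == r else row
                         for k, row in enumerate(array)] for lv in levels)
            if all(covers_all_pairs(v, levels) for v in variants):
                wild.append((r, c))
    return wild

# An orthogonal array OA(4; 2, 3, 2) plus a redundant duplicate row: every
# cell of the duplicated rows becomes a wild card position.
ca = [[0, 0, 0],
      [0, 1, 1],
      [1, 0, 1],
      [1, 1, 0],
      [0, 0, 0]]  # duplicate of row 0
wild = wildcard_positions(ca, [0, 1])
print(wild)
```

Every cell of rows 0 and 4 is individually free (the other copy keeps the pairs covered), while cells of rows 1 to 3 are pinned down.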

3.
The recent development of integrated circuit capacitor arrays and the growth of their applications have resulted in a need to perform precision testing as an aid to future design improvements. For reasons discussed in this paper, laboratory instruments such as capacitance bridges are not well-suited to this need. In order to test capacitor arrays accurately, a novel technique has been developed. It is based on a special algorithm in which the capacitor array is used as a precision voltage divider. A capacitor array tester consisting of both hardware and software has been built which executes this algorithm. This system has been used to perform measurements upon a large number (thousands) of NMOS and CMOS capacitor arrays. The standard deviation of this tester's measurement error is approximately 0.0009 percent of full scale (0.0088 LSB referenced to 10 bits). In contrast with manual testing with a capacitance bridge (requiring 10 min per array), the tester requires less than 5 s to fully test an array, mark the circuit and move to the next die position.

4.
This paper presents a (higher-order) finite element approach for the simulation of heat diffusion and thermoelastic deformations in NC-milling processes. The continuous material removal inherent in the process is taken into account in the simulation via continuous removal-dependent refinements of a paraxial hexahedron base mesh covering a given workpiece. These refinements rely on isotropic bisections of the hexahedrons, along with subdivisions of the latter into tetrahedrons and pyramids in correspondence to a milling-surface triangulation obtained from the marching cubes algorithm. The resulting mesh is used for an element-wise defined characteristic function for the milling-dependent workpiece within that paraxial hexahedron base mesh. Using this characteristic function, a (higher-order) fictitious domain method is used to compute the heat diffusion and thermoelastic deformations, where the corresponding ansatz spaces are defined for some hexahedron-based refinement of the base mesh. Numerical experiments compared to real physical experiments demonstrate the applicability of the proposed approach to predict deviations of the milled workpiece from its designed shape due to thermoelastic deformations in the process.

5.
This paper describes an improved three-way alternating least-squares multivariate curve resolution algorithm that makes use of the recently introduced multi-dimensional arrays of MATLAB®. Multi-dimensional arrays allow for a convenient way to apply chemically sound constraints, such as closure, in the third dimension. The program is designed for kinetic studies on liquid chromatography with diode array detection but can be used for other three-way data analysis. The program is tested with a large number of synthetic data sets and its flexibility is demonstrated, especially when non-trilinear data sets are fit. In this case, the algorithm finds a solution with a better fit than direct trilinear decomposition (DTD). When trilinear data are used, the optimal fit is not as good as when a direct decomposition method is used. Most real data sets, however, have some degree of non-trilinearity. This makes the method a better choice for analyzing non-trilinear, three-way data than direct trilinear decomposition.
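The core alternating least-squares idea can be illustrated on the simplest trilinear model. This is not the paper's constrained MCR-ALS program; it is a minimal rank-1 PARAFAC-style sketch showing the alternating closed-form updates on a noise-free three-way array, with sizes and random data as assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true, c_true = rng.random(4), rng.random(5), rng.random(3)
X = np.einsum('i,j,k->ijk', a_true, b_true, c_true)  # noise-free trilinear data

# Rank-1 alternating least squares: each update is the closed-form
# least-squares solution for one factor with the other two held fixed.
a, b, c = rng.random(4), rng.random(5), rng.random(3)
for _ in range(25):
    a = np.einsum('ijk,j,k->i', X, b, c) / ((b @ b) * (c @ c))
    b = np.einsum('ijk,i,k->j', X, a, c) / ((a @ a) * (c @ c))
    c = np.einsum('ijk,i,j->k', X, a, b) / ((a @ a) * (b @ b))

X_hat = np.einsum('i,j,k->ijk', a, b, c)
print(float(np.max(np.abs(X - X_hat))))  # near machine precision
```

For exactly trilinear data the reconstruction becomes exact after a few sweeps; the non-trilinear case discussed in the abstract is where the fit comparison with DTD becomes interesting.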

6.
We present an approach to receive-mode broadband beam forming and jammer nulling for large adaptive antenna arrays, as well as its efficient and compact optical implementation. This broadband efficient adaptive method for true-time-delay array processing (BEAMTAP) algorithm decreases the number of tapped delay lines required for processing an N-element phased-array antenna from N to only 2, producing an enormous savings in delay-line hardware (especially for large broadband arrays) while still providing the full NM degrees of freedom of a conventional N-element time-delay-and-sum beam former that requires N tapped delay lines with M taps each. This allows the system to adapt fully and optimally to an arbitrarily complex spatiotemporal signal environment that can contain broadband signals of interest, as well as interference sources and narrow-band and broadband jammers, all of which can arrive from arbitrary angles onto an arbitrarily shaped array, thus enabling a variety of applications in radar, sonar, and communication. This algorithm is an excellent match with the capabilities of radio frequency (rf) photonic systems, as it uses a coherent optically modulated fiber-optic feed network, gratings in a photorefractive crystal as adaptive weights, a traveling-wave detector for generating time delay, and an acousto-optic device to control weight adaptation. Because the number of available adaptive coefficients in a photorefractive crystal is as large as 10^9, these photonic systems can adaptively control arbitrarily large one- or two-dimensional antenna arrays that are well beyond the capabilities of conventional rf and real-time digital signal processing techniques or alternative photonic techniques.

7.
Covering arrays relax the condition of orthogonal arrays by only requiring that all combinations of levels be covered, not that the appearances of all combinations of levels be balanced. This allows a much larger number of factors to be considered simultaneously, but at the cost of poorer estimation of the factor effects. To better understand patterns between sets of columns and evaluate the degree of coverage, so as to compare and select between alternative arrays, we suggest several new graphical methods that show some of the patterns of coverage for different designs. These graphical methods for evaluating covering arrays are illustrated with some examples. Copyright © 2015 John Wiley & Sons, Ltd.
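The "degree of coverage" the abstract evaluates graphically can be summarized numerically as the fraction of two-column level combinations that appear at least once. A minimal sketch (the example covering array and two-level setting are assumptions, not taken from the paper):

```python
from itertools import combinations

def pair_coverage(array, levels):
    """Fraction of all two-column level combinations covered at least once."""
    covered = total = 0
    for i, j in combinations(range(len(array[0])), 2):
        seen = {(row[i], row[j]) for row in array}
        total += len(levels) ** 2
        covered += len(seen)
    return covered / total

# A covering array CA(5; 2, 4, 2): five runs, four two-level factors,
# every pair of columns shows all four level combinations.
ca = [[0, 0, 0, 0],
      [0, 1, 1, 1],
      [1, 0, 1, 1],
      [1, 1, 0, 1],
      [1, 1, 1, 0]]
print(pair_coverage(ca, [0, 1]))      # full 2-way coverage -> 1.0
print(pair_coverage(ca[:4], [0, 1]))  # dropping a run loses some pairs
```

Plotting this fraction per column pair, rather than averaging it, is essentially the kind of graphical comparison the abstract proposes.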

8.
Two-dimensional conformal arrays are proposed to enhance low contrast lesion detection deep in the body. The arrays conform to the body, maintaining good contact over a large area. To provide full three-dimensional focusing for two-dimensional imaging, such arrays are densely sampled in the scan direction (x) and coarsely sampled in the nonscan direction (y), i.e., the arrays are anisotropic. To illustrate the reduction in slice thickness with increased array length in y, a two-dimensional array is synthesized using a one-dimensional, 128 element array with a 3.5 MHz center frequency. A mask is attached confining transmission and reception of acoustic waves to 2 mm in y. Using a mechanical scan system, the one-dimensional array is moved along y, covering a 28.16 mm × 20.0 mm aperture. Accordingly, the synthetic array has 128 elements in x and 10 elements in y. To correct for geometric irregularities due to array movement, a gelatin-based phantom containing three-dimensional point targets is used for phase aberration correction. Results show that elevational beam quality is degraded if small geometric errors are not removed. Emulated conformality at the body surface and phase aberrations induced by spatial inhomogeneities in tissue are further imposed and shown to produce severe beam-forming artifacts. Two-dimensional phase aberration correction is applied, and results indicate that the method is adequate to compensate for large phase excursions across the entire array. To fully realize the potential of large, two-dimensional, conformal arrays, proper two-dimensional phase aberration correction methods are necessary.

9.
We report progress in fabricating ultra-sensitive superconducting transition-edge sensors (TESs) for BLISS. BLISS is a suite of grating spectrometers covering 35–433 μm with R ~ 700, cooled to 50 mK, that is proposed to fly on the Japanese space telescope SPICA. The detector arrays for BLISS are TES bolometers read out with a time-domain SQUID multiplexer. The required noise equivalent power (NEP) for BLISS is NEP = 10^-19 W/Hz^(1/2), with an ultimate goal of NEP = 5×10^-20 W/Hz^(1/2) to achieve background-limited noise performance. The required and goal response times are τ = 150 ms and τ = 50 ms, respectively, to achieve the NEP at the required and goal optical chop frequencies of 1–5 Hz. We measured prototype BLISS arrays and have achieved NEP = 6×10^-18 W/Hz^(1/2) and τ = 1.4 ms with a Ti TES (T_C = 565 mK), and NEP ≈ 2.5×10^-19 W/Hz^(1/2) and τ ≈ 4.5 ms with an Ir TES (T_C = 130 mK). Dark power for these tests is estimated at 1–5 fW.

10.
We compare cost-efficient alternatives to the full factorial 2^4 design, the regular 2^(5-1) fractional factorial design, and the regular 2^(6-1) fractional factorial design that can fit the model consisting of all the main effects as well as all the two-factor interactions. For 4 and 5 factors we examine orthogonal arrays with 12 and 20 runs, respectively. For 6 factors we consider orthogonal arrays with 24 as well as 28 runs. We consult complete catalogs of two-level orthogonal arrays to find the ones that provide the most efficient estimation of all the effects in the model. We compare these arrays with D-optimal designs found using a coordinate exchange algorithm. The D-optimal designs are always preferable to the most efficient orthogonal arrays for fitting the full model in all the factors.
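A coordinate exchange search of the kind mentioned above can be sketched in a few lines: flip one ±1 coordinate of the design at a time and keep the flip whenever the D-criterion det(X'X) for the main-effects-plus-interactions model improves. This is a greedy single-start sketch under assumed run size and factor count, not the algorithm or catalogs used in the paper.

```python
import itertools
import numpy as np

def model_matrix(D):
    """Intercept, main effects, and all two-factor interactions, ±1 coding."""
    n, k = D.shape
    cols = [np.ones(n)] + [D[:, i] for i in range(k)]
    cols += [D[:, i] * D[:, j] for i, j in itertools.combinations(range(k), 2)]
    return np.column_stack(cols)

def d_criterion(D):
    X = model_matrix(D)
    return np.linalg.det(X.T @ X)

def coordinate_exchange(n_runs, k, sweeps=20, seed=0):
    """Greedy coordinate exchange maximizing det(X'X) from a random start."""
    rng = np.random.default_rng(seed)
    D = rng.choice([-1.0, 1.0], size=(n_runs, k))
    best = d_criterion(D)
    for _ in range(sweeps):
        for r in range(n_runs):
            for c in range(k):
                D[r, c] = -D[r, c]      # try flipping one coordinate
                d = d_criterion(D)
                if d > best:
                    best = d            # keep the improving flip
                else:
                    D[r, c] = -D[r, c]  # revert
    return D, best

D, d_val = coordinate_exchange(12, 4)   # 12-run alternative for 4 factors
print(d_val)
```

Production implementations use multiple random starts and rank-one determinant updates; comparing `d_val` across candidate designs is what "most efficient" refers to in the abstract.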

11.
Recently, Edwards curves have received a lot of attention in the cryptographic community due to their fast scalar multiplication algorithms, and many works on the application of these curves to pairing-based cryptography have been introduced. In this paper, we investigate refinements to Miller's algorithm, which plays a central role in pairing computation. We first introduce a variant of the Miller function that leads to a more efficient variant of Miller's algorithm on Edwards curves. Then, based on the new Miller function, we present a refinement to Miller's algorithm that significantly improves performance in comparison with the original algorithm. Our analyses also show that the proposed refinement is approximately 25% faster than Xu–Lin's refinements (CT-RSA, 2010). Last but not least, our approach is generic, hence the proposed algorithms can compute both Weil and Tate pairings on pairing-friendly Edwards curves of any embedding degree.

12.
Insect climbing footpads are able to adhere to rough surfaces, but the details of this capability are still unclear. To overcome experimental limitations of randomly rough, opaque surfaces, we fabricated transparent test substrates containing square arrays of 1.4 µm diameter pillars, with variable height (0.5 and 1.4 µm) and spacing (from 3 to 22 µm). Smooth pads of cockroaches (Nauphoeta cinerea) made partial contact (limited to the tops of the structures) for the two densest arrays of tall pillars, but full contact (touching the substrate in between pillars) for larger spacings. The transition from partial to full contact was accompanied by a sharp increase in shear forces. Tests on hairy pads of dock beetles (Gastrophysa viridula) showed that setae adhered between pillars for larger spacings, but pads were equally unable to make full contact on the densest arrays. The beetles' shear forces similarly decreased for denser arrays, but also for short pillars and with a more gradual transition. These observations can be explained by simple contact models derived for soft uniform materials (smooth pads) or thin flat plates (hairy-pad spatulae). Our results show that microstructured substrates are powerful tools to reveal adaptations of natural adhesives for rough surfaces.

13.
《Zeolites》1995,15(1):33-39
Microwave heating is applied to the synthesis of AlPO4-5. After 60 s of heating, large AlPO4-5 crystals are obtained. XRD, polarization microscopy, and adsorption measurements prove the regularity of the AFI framework. In an optimized microwave synthesis, prismatic AlPO4-5 crystals up to 130 μm long and 40 μm thick could be synthesized. In a two-step synthesis, however, slightly smaller but very uniform AlPO4-5 crystals with a narrow crystal size distribution, without any amorphous or crystalline byproducts, could be obtained. Several possible mechanisms for the fast crystallization within 60 s under microwave radiation are discussed: an increased dissolution of the gel by lonely water molecules, the almost T-gradient-free and therefore convection-free in situ heating, and the existence of organic-inorganic arrays as local microassemblies which could transform directly into the AFI framework.

14.
Thinning and weighting of large planar arrays by simulated annealing
Two-dimensional arrays offer the potential for producing three-dimensional acoustic imaging. The major problem is the complexity arising from the large number of elements in such arrays. In this paper, a synthesis method is proposed that is aimed at designing an aperiodic sparse two-dimensional array to be used with a conventional beam-former. The stochastic algorithm of simulated annealing has been utilized to minimize the number of elements necessary to produce a spatial response that meets given requirements. The proposed method is highly innovative, as it can design very large arrays, optimize both positions and weight coefficients, synthesize asymmetric arrays, and generate array configurations that are valid for every steering direction. Several results are presented, showing notable improvements in the array characteristics and performances over those reported in the literature.
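A toy version of the annealing idea fits in a short script: perturb which elements of a thinned linear array are active, accept worsening moves with a temperature-dependent probability, and cool. This 1D sketch with an assumed cost (peak sidelobe of a fixed-size thinned layout) is far simpler than the paper's 2D position-and-weight synthesis.

```python
import cmath
import math
import random

def peak_sidelobe(active):
    """Normalized peak sidelobe of a 0/1 thinned linear array on a
    half-wavelength grid, scanning u = sin(theta) outside the mainlobe."""
    n_on = sum(active)
    worst = 0.0
    for step in range(60, 500, 2):
        u = step / 500.0
        af = abs(sum(cmath.exp(1j * math.pi * n * u)
                     for n, on in enumerate(active) if on)) / n_on
        worst = max(worst, af)
    return worst

random.seed(1)
N, kept = 32, 16
active = [1] * kept + [0] * (N - kept)
random.shuffle(active)

cost, T = peak_sidelobe(active), 0.1
for _ in range(300):                       # anneal: swap one on/off pair
    i = random.choice([n for n in range(N) if active[n]])
    j = random.choice([n for n in range(N) if not active[n]])
    active[i], active[j] = 0, 1
    new = peak_sidelobe(active)
    if new < cost or random.random() < math.exp((cost - new) / T):
        cost = new                         # accept (possibly uphill) move
    else:
        active[i], active[j] = 1, 0        # reject: undo the swap
    T *= 0.99                              # geometric cooling schedule
print(cost)
```

The swap move keeps the element count fixed; the paper instead lets annealing minimize the element count subject to a response specification.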

15.
《Materials Letters》2007,61(8-9):1859-1862
In the present study, the single-crystal Ni nanowire arrays with a preferred growth along the [110] direction have been prepared by the deposition of Ni into the alumina template with nanopores at a current density of 2.0 mA/cm2. The single-crystal Ni nanowire arrays show a magnetic anisotropy with the easy axis parallel to the nanowires and an enhanced coercivity as compared with the polycrystalline Ni nanowire arrays. A large coercivity of 1110 Oe together with a high remanence Mr = 0.92Ms is observed for 15-nm diameter single-crystal Ni nanowire arrays. The preferred growth mechanism of the single-crystal nanowires is briefly discussed.

16.
This paper reports the synthesis of Sn1-xMnxO2 (for x = 0, 0.01, 0.05 and 0.10) nanoparticles using the co-precipitation method. X-ray diffraction (XRD) results show that all samples are single phase with a tetragonal crystalline structure. Rietveld refinements of the XRD patterns show that the samples present average particle sizes of 4–30 nm, confirmed by scanning electron microscopy. Magnetization results for SnO2 nanoparticles with 5% and 10% Mn synthesized at 800 °C exhibit ferromagnetic behavior at room temperature and an increase of the magnetization with increasing doping concentration. On the other hand, samples synthesized at 300 °C are paramagnetic.

17.
A near-field, signal-redundancy algorithm for measuring phase-aberration profiles has been proposed previously. It is designed for arrays with a relatively large element size for which relatively narrow beams are transmitted and received. The algorithm measures the aberration profile by cross-correlating signals collected with the same midpoint position between transmitter and receiver, termed common midpoint signals, after a dynamic near-field delay correction. In this paper, a near-field signal-redundancy algorithm for small element arrays is proposed. In this algorithm, subarrays are formed of adjacent groups of elements to narrow the beams used to collect common midpoint signals and steer the beam direction, so that angle-dependent, phase-aberration profiles can be measured. There are several methods that could be used to implement the dynamic near-field delay correction on common midpoint signals collected with subarrays. In this paper, the similarity between common midpoint signals collected with these methods is also analyzed and compared using a so-called corresponding-signal concept. This analysis should be valid for general target distributions in the near field and wide-band signals.
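The cross-correlation step at the heart of such signal-redundancy methods is simple to sketch: the lag of the correlation peak between two common midpoint signals estimates their relative delay. The pulse shape and 7-sample shift below are illustrative assumptions, not data from the paper.

```python
import numpy as np

def estimate_delay(sig_a, sig_b):
    """Relative delay (in samples) of sig_b with respect to sig_a, taken
    from the peak of their full cross-correlation."""
    xc = np.correlate(sig_b, sig_a, mode="full")
    return int(np.argmax(xc)) - (len(sig_a) - 1)

t = np.arange(256)
pulse = np.exp(-((t - 60) / 8.0) ** 2) * np.sin(0.6 * t)  # synthetic echo
delayed = np.roll(pulse, 7)        # aberration modeled as a 7-sample shift
print(estimate_delay(pulse, delayed))
```

In the actual algorithm this correlation is applied after the dynamic near-field delay correction, and sub-sample peak interpolation is typically added.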

18.
Video games comprise a multi-billion-dollar industry. Companies invest huge amounts of money for the release of their games. A part of this money is invested in testing the games. Current game testing methods include manual execution of pre-written test cases in the game. Each test case may or may not result in a bug. In a game, a bug is said to occur when the game does not behave per its intended design. The process of writing the test cases to test games requires standardization. We believe that this standardization can be achieved by implementing experimental design to video game testing. In this research, we discuss the implementation of combinatorial testing, specifically covering arrays, to test games. Combinatorial testing is a method of experimental design that is used to generate test cases and is primarily used for commercial software testing. In addition to the discussion of the implementation of combinatorial testing techniques in video game testing, we present an algorithm that can be used to sort test cases to aid developers in finding the combination of settings resulting in a bug.
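A common way to build such pairwise (strength-2) test suites is a greedy one-test-at-a-time generator: each new test is the candidate covering the most still-uncovered level pairs. This is a generic sketch, not the sorting algorithm the paper presents, and the game settings are hypothetical.

```python
from itertools import combinations, product

def greedy_pairwise(factors):
    """Greedy one-test-at-a-time pairwise test generator."""
    k = len(factors)
    uncovered = {(i, j, a, b)
                 for i, j in combinations(range(k), 2)
                 for a in factors[i] for b in factors[j]}
    tests = []
    while uncovered:
        best, best_gain = None, -1
        for cand in product(*factors):       # exhaustive candidates: fine for
            gain = sum(1 for i, j in combinations(range(k), 2)   # tiny models
                       if (i, j, cand[i], cand[j]) in uncovered)
            if gain > best_gain:
                best, best_gain = cand, gain
        tests.append(best)
        for i, j in combinations(range(k), 2):
            uncovered.discard((i, j, best[i], best[j]))
    return tests

# Hypothetical game settings: resolution, difficulty, controller.
settings = [["720p", "1080p"], ["easy", "hard"], ["pad", "kbd"]]
suite = greedy_pairwise(settings)
print(len(suite))  # far fewer tests than the 2*2*2 = 8 exhaustive runs
```

Real covering-array tools replace the exhaustive candidate scan with smarter construction, but the coverage bookkeeping is the same.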

19.
This paper presents a multicriteria approach to exploring the properties of a timeout collaboration protocol with different timeout thresholds in general testing environments. This is formulated as a discrete multiple criteria optimisation problem by choosing five representative timeout thresholds as alternatives, with five common performance measures of production systems as criteria. The PROMETHEE method is adopted to deal with this multicriteria problem. The divide-and-label algorithm is developed to rank all the alternatives by the overall intensity of their performance, using multiple valued outranking graphs from the PROMETHEE with multiple replications. It is shown that the two extreme timeout thresholds, T0 = 0 and T0 = ∞, are efficient over multiple criteria in almost all cases. The divide-and-label algorithm is a very efficient approach to overcome the limitations of the PROMETHEE algorithm and Belz and Mertens's procedure with multiple criteria and replications.
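The PROMETHEE ranking step can be sketched with net outranking flows: for each ordered pair of alternatives, sum the weights of the criteria on which one beats the other, then average the difference. This minimal sketch uses the simple "usual" preference function and made-up scores; the paper's divide-and-label algorithm and replication handling are not implemented.

```python
def promethee_net_flows(scores, weights):
    """PROMETHEE II net flows with the 'usual' preference function
    (preference 1 when alternative a strictly beats b on a criterion);
    higher net flow means a better-ranked alternative."""
    n = len(scores)
    flows = [0.0] * n
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            pi_ab = sum(w for sa, sb, w in zip(scores[a], scores[b], weights)
                        if sa > sb)
            pi_ba = sum(w for sa, sb, w in zip(scores[a], scores[b], weights)
                        if sb > sa)
            flows[a] += (pi_ab - pi_ba) / (n - 1)
    return flows

# Hypothetical performance of three timeout thresholds on two criteria
# (both to be maximized), with equal criterion weights.
scores = [[0.9, 0.8],   # threshold T0 = 0
          [0.6, 0.7],   # an intermediate threshold
          [0.8, 0.4]]   # threshold T0 = infinity
flows = promethee_net_flows(scores, [0.5, 0.5])
print(flows)
```

Net flows always sum to zero, so the ranking is read off by sorting the alternatives by flow.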

20.
The theoretical Heisenberg magnet model and its solution given by Bethe and Hulthén (B.H.), known as the Bethe Ansatz (BA), is widely applied in physics (solid-state physics, quantum dots, statistical physics, high-temperature superconductivity, low-dimensional systems, etc.), chemistry (polymers, organic metals and magnets), biology (biological molecular arrays and chains), etc. In most of the applications, the Heisenberg model is applied to infinite chains (the asymptotic case), which is a good approximation of reality for objects of macroscopic size. In such a case, the solutions of the model are well known. However, for objects of nanoscale size, one has to find solutions of the Heisenberg model for a finite chain consisting of N nodes. For such a chain, the problem of solving the B.H. equations is very complicated (because of the strong nonlinearity of these equations) even for very small objects, N < 20. Along with an increase in the length of the chain, the mathematical difficulties in solving the equations increase combinatorially as 2^N (combinatorial explosion). In such cases, even numerical methods are helpless. In our paper, we propose an approach in which numerical methods can be adapted to such a large numerical problem as B.H. solutions for objects consisting of N > 100, which corresponds to nanoscale physical or biological objects. This method is based on the 'experimental' observation that B.H. solutions change in a quasi-continuous way with respect to N.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号