Similar Literature
 20 similar documents found (search time: 15 ms)
1.
The well-known Goldbach Conjecture (GC) states that any sufficiently large even number can be represented as a sum of two odd primes. Although not yet proved, it has been checked for integers up to 10^14. Using two stronger versions of the conjecture, we offer a simple and fast method for recognition of a gray box group G known to be isomorphic to S_n (or A_n) with known n ≥ 20, i.e. for construction of an isomorphism from G to S_n (or A_n). Correctness and rigorous worst case complexity estimates rely heavily on the conjectures, and yield times of O([ρ + ν + μ] n log² n) or O([ρ + ν + μ] n log n / log log n), depending on which of the stronger versions of the GC is assumed to hold. Here, ρ is the complexity of generating a uniform random element of G, ν is the complexity of finding the order of a group element in G, and μ is the time necessary for group multiplication in G. A rigorous lower bound and a probabilistic approach to the time complexity of the algorithm are discussed in the Appendix.
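
As a side illustration (not the recognition algorithm itself), here is a minimal Python sketch of the conjecture the method relies on: it searches for a pair of odd primes summing to a given even number.

```python
# Minimal sketch: exhibit a Goldbach partition of an even number into two odd primes.
from sympy import isprime

def goldbach_pair(n):
    """Return odd primes (p, q) with p + q == n, or None if no such pair is found."""
    if n % 2 != 0 or n < 6:
        return None
    for p in range(3, n // 2 + 1, 2):
        if isprime(p) and isprime(n - p):
            return p, n - p
    return None

print(goldbach_pair(100))  # (3, 97)
```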

2.
3.
Let C be a curve of genus 2 and ψ_1: C → E_1 a map of degree n from C to an elliptic curve E_1, both curves defined over the complex numbers. This map induces a degree n map φ_1: P^1 → P^1 which we call a Frey–Kani covering. We determine all possible ramifications for φ_1. If ψ_1: C → E_1 is maximal, then there exists a maximal map ψ_2: C → E_2, of degree n, to some elliptic curve E_2 such that there is an isogeny of degree n^2 from the Jacobian J_C to E_1 × E_2. We say that J_C is (n, n)-decomposable. If the degree n is odd, the pair (ψ_2, E_2) is canonically determined. For n = 3, 5, and 7, we give arithmetic examples of curves whose Jacobians are (n, n)-decomposable.

4.
The lowest-energy structures and stabilities of the heterodinuclear clusters CNLi_n (n = 1–10) and the corresponding CNLi_n^+ (n = 1–10) cations are studied using density functional theory with the 6-311+G(3df) basis set. CNLi_6 and CNLi_5^+ are the first three-dimensional clusters in the neutral and cationic CNLi_n series, respectively, and the CN group always caps the Li_n moiety in the CNLi_n^(0/+) (n = 1–9) configurations. The CN triple bond is found to be completely cleaved in the CNLi_10^(0/+) clusters, where the C and N atoms are bridged by two Li atoms. The CNLi_n (n = 2–10) clusters are hyperlithiated molecules with delocalized valence electrons and consequently possess low vertical ionization potentials (VIPs) of 3.780–5.674 eV. In particular, the CNLi_8 and CNLi_10 molecules exhibit lower VIPs than that of the Cs atom and can be regarded as heterobinuclear superalkali species. Furthermore, these two superalkali clusters show extraordinarily large first hyperpolarizabilities of 19,423 and 42,658 au, respectively. For the CNLi_n^+ cationic species, the evolution of the energetic and electronic properties with cluster size shows a special stability for CNLi_2^+.

5.
Nuisance blue-green algal blooms contribute to aesthetic degradation of water resources by means of accelerated eutrophication, taste and odor problems, and the production of toxins that can have serious adverse human health effects. Current field-based methods for detecting blooms are costly and time consuming, delaying management decisions. Methods have been developed for estimating the concentration of phycocyanin, the accessory pigment unique to freshwater blue-green algae, in productive inland water. By employing the known optical properties of phycocyanin, researchers have evaluated the utility of field-collected spectral response patterns for determining concentrations of phycocyanin pigments and ultimately blue-green algal abundance. The purpose of this research was to evaluate field spectroscopy as a rapid cyanobacteria bloom assessment method. In-situ field reflectance spectra were collected at 54 sampling sites on two turbid reservoirs on September 6th and 7th in Indianapolis, Indiana, using ASD FieldSpec (UV/VNIR) spectroradiometers. Surface water samples were analyzed for in-vitro pigment concentrations and other physical and chemical water quality parameters. Semi-empirical algorithms by Simis et al. [Simis, S., Peters, S., Gons, H. (2005). Remote sensing of the cyanobacterial pigment phycocyanin in turbid inland water. Limnology and Oceanography 50(1): 237–245] were applied to the field spectra to predict chlorophyll a and phycocyanin absorption at 665 nm and 620 nm, respectively. For estimation of phycocyanin concentration, a specific absorption coefficient for phycocyanin at 620 nm, a*_PC(620), of 0.0070 m² (mg PC)⁻¹ was employed, yielding an r² value of 0.85 (n = 48, p < 0.0001), a mean relative residual value of 0.51 (σ = 1.41), and a root mean square error (RMSE) of 19.54 ppb. Results suggest this algorithm could be a robust model for estimating phycocyanin. Error is highest in water with phycocyanin concentrations of less than 10 ppb and where phycocyanin abundance is low relative to chlorophyll a. A strong correlation between measured phycocyanin concentrations and biovolume measurements of cyanobacteria was also observed (r = 0.89), while a weaker relationship (r = 0.66) resulted between chlorophyll a concentration and cyanobacterial biovolume.
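
As a hedged illustration of the final conversion step quoted above (not the full Simis et al. retrieval algorithm), the sketch below turns a retrieved phycocyanin absorption value at 620 nm into a pigment concentration using the stated specific absorption coefficient; the input absorption value is hypothetical.

```python
# Minimal sketch, assuming a_PC(620) has already been retrieved from the spectrum:
# concentration = absorption / specific absorption coefficient.
A_STAR_PC_620 = 0.0070  # m^2 per mg phycocyanin, as quoted in the abstract

def phycocyanin_concentration(a_pc_620):
    """Convert a_PC(620) in 1/m into a phycocyanin concentration in mg/m^3 (~ppb)."""
    return a_pc_620 / A_STAR_PC_620

print(phycocyanin_concentration(0.35))  # 50.0 mg/m^3 for a hypothetical absorption of 0.35 1/m
```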

6.
The hypercube Q_n is one of the most popular networks. In this paper, we first prove that the n-dimensional hypercube is (2n − 5)-conditional fault-bipancyclic. That is, an injured hypercube with up to 2n − 5 faulty links has a cycle of length l for every even l with 4 ≤ l ≤ 2^n, provided that each node of the hypercube is incident with at least two healthy links. In addition, if a certain node is incident with fewer than two healthy links, we show that an injured hypercube with up to 2n − 3 faulty links contains cycles of all even lengths except Hamiltonian cycles. Furthermore, the above two results are optimal. In conclusion, we find cycles of all possible lengths in injured hypercubes with up to 2n − 5 faulty links under all possible fault distributions.

7.
《Information Sciences》2007,177(8):1782-1788
In this paper, we explore the 2-extra connectivity and 2-extra-edge-connectivity of the folded hypercube FQ_n. We show that κ_2(FQ_n) = 3n − 2 for n ≥ 8, and λ_2(FQ_n) = 3n − 1 for n ≥ 5. That is, for n ≥ 8 (resp. n ≥ 5), at least 3n − 2 vertices (resp. 3n − 1 edges) of FQ_n must be removed to obtain a disconnected graph that contains no isolated vertices (resp. isolated edges). When the folded hypercube is used to model the topological structure of a large-scale parallel processing system, these results provide more accurate measurements for the reliability and fault tolerance of the system.

8.
When conducting a comparison between multiple algorithms on multiple optimisation problems, it is expected that the number of algorithms, the number of problems, and even the number of independent runs will affect the final conclusions. The question in this research was to what extent these three factors affect the conclusions of standard Null Hypothesis Significance Testing (NHST) and the conclusions of our novel method for comparison and ranking, the Chess Rating System for Evolutionary Algorithms (CRS4EAs). An extensive experiment was conducted in which the results of k = 16 algorithms on N = 40 optimisation problems over n = 100 runs were gathered and saved. These results were then analysed to show how the three values affect the final results, how they affect ranking, and which values yield unreliable results. The influence of the number of algorithms was examined for values k = {4, 8, 12, 16}, the number of problems for values N = {5, 10, 20, 40}, and the number of independent runs for values n = {10, 30, 50, 100}. We were also interested in the comparison between the two methods – NHST's Friedman test with the post-hoc Nemenyi test, and CRS4EAs – to see whether one has advantages over the other. Whilst the conclusions after analysing the values of k were fairly similar, this research showed that an inappropriate value of N can give unreliable results when analysing with the Friedman test. The Friedman test detects no, or only a small number of, significant differences for small values of N, whereas CRS4EAs does not suffer from this problem. We have also shown that CRS4EAs is an appropriate method when only a small number of independent runs n is available.
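
As a minimal, hedged illustration of the NHST side of this comparison (CRS4EAs itself is not sketched), the snippet below runs a Friedman test on a hypothetical k × N results matrix with SciPy.

```python
# Minimal sketch: Friedman test over synthetic results of k algorithms on N problems.
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)
k, N = 4, 10                   # illustrative sizes, not the study's k = 16, N = 40
results = rng.random((N, k))   # hypothetical mean fitness per (problem, algorithm)

# friedmanchisquare expects one paired sample per algorithm.
stat, p_value = friedmanchisquare(*[results[:, j] for j in range(k)])
print(f"Friedman statistic = {stat:.3f}, p = {p_value:.3f}")
```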

9.
《Information and Computation》2007,205(7):1078-1095
Assume that G = (V, E) is an undirected graph and C ⊆ V. For every v ∈ V, denote I_r(G; v) = {u ∈ C : d(u, v) ≤ r}, where d(u, v) denotes the number of edges on any shortest path from u to v in G. If all the sets I_r(G; v) for v ∈ V are pairwise different, and none of them is the empty set, the code C is called r-identifying. The motivation for identifying codes comes, for instance, from finding faulty processors in multiprocessor systems or from location detection in emergency sensor networks. The underlying architecture is modelled by a graph. We study various types of identifying codes that are robust against six natural changes in the graph: known or unknown edge deletions, additions, or both. Our focus is on the radius r = 1. We show that in the infinite square grid the optimal density of a 1-identifying code that is robust against one unknown edge deletion is 1/2, and that the optimal density of a 1-identifying code that is robust against one unknown edge addition equals 3/4 in the infinite hexagonal mesh. Moreover, although it is shown that all six problems are in general different, we prove that in the binary hypercube there are cases where five of the six problems coincide.
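
To make the definition concrete, here is a minimal sketch (using networkx and a toy graph of my own choosing, not one from the paper) that checks whether a candidate code C is r-identifying in a finite graph.

```python
# Minimal sketch: C is r-identifying iff all sets I_r(G; v) are nonempty and pairwise distinct.
import networkx as nx

def is_r_identifying(G, C, r=1):
    seen = set()
    for v in G.nodes:
        within_r = nx.single_source_shortest_path_length(G, v, cutoff=r)
        ident = frozenset(u for u in C if u in within_r)   # I_r(G; v)
        if not ident or ident in seen:
            return False
        seen.add(ident)
    return True

# Toy example: in the 4-cycle, C = {0, 1} is not 1-identifying (vertices 0 and 1 share I_1).
print(is_r_identifying(nx.cycle_graph(4), {0, 1}, r=1))  # False
```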

10.
Insects and disease affect large areas of forest in the U.S. and Canada. Understanding the ecosystem impacts of such disturbances requires knowledge of host species distribution patterns on the landscape. In this study, we mapped the distribution and abundance of host species for the spruce budworm (Choristoneura fumiferana) to facilitate landscape-scale planning and modeling of outbreak dynamics. We used multi-temporal, multi-seasonal Landsat data and 128 ground truth plots (and 120 additional validation plots) to map basal area (BA) for 6.4 million hectares of forest in northern Minnesota and neighboring Ontario. Partial least-squares (PLS) regression was used to determine relationships between ground data and Landsat sensor data. Subsequently, BA was mapped for all forests, as well as for two specific host tree genera (Picea and Abies). These PLS regression analyses yielded estimates for overall forest BA with an R² of 0.62 and RMSE of 4.67 m² ha⁻¹ (20% of measured BA), white spruce relative BA with an R² of 0.88 (RMSE = 12.57 m² ha⁻¹ [20% of measured]), and balsam fir relative BA with an R² of 0.64 (RMSE = 6.08 m² ha⁻¹ [33% of measured]). We also used this method to estimate the relative BA of deciduous and coniferous species, with R² values of 0.86 for each and RMSE values of 9.89 m² ha⁻¹ (23% of measured) and 9.78 m² ha⁻¹ (16% of measured), respectively. Of note, winter imagery (with snow cover) and shortwave-infrared-based indices – especially the shortwave infrared/visible ratio – strengthened the models we developed. Because ground measurements were made largely in forest stands containing spruce and fir, modeled results are not applicable to stands dominated by non-target conifers such as pines and cedar. PLS regression has proven to be an effective modeling tool for regional characterization of forest structure within spatially heterogeneous forests using multi-temporal Landsat sensor data.
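
As a hedged, generic illustration of the modeling step (not the study's actual data or predictor set), the sketch below fits a PLS regression of plot basal area on a stand-in matrix of multi-temporal spectral predictors using scikit-learn.

```python
# Minimal sketch: PLS regression of basal area on synthetic spectral predictors.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
X = rng.random((128, 12))                       # 128 plots x 12 stand-in band/index values
y = 5 + 20 * X[:, 0] + rng.normal(0, 2, 128)    # stand-in basal area (m^2/ha)

pls = PLSRegression(n_components=3).fit(X, y)
print("calibration R^2 =", round(r2_score(y, pls.predict(X).ravel()), 2))
```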

11.
Let L = K(α) be an Abelian extension of degree n of a number field K, given by the minimal polynomial of α over K. We describe an algorithm for computing the local Artin map associated with the extension L/K at a finite or infinite prime v of K. We apply this algorithm to decide whether a nonzero a ∈ K is a norm from L, assuming that L/K is cyclic.

12.
《Applied ergonomics》2011,42(1):138-145
Introduction: Subjective workload measures are usually administered in a visual–manual format, either electronically or by paper and pencil. However, vocal responses to spoken queries may sometimes be preferable, for example when experimental manipulations require continuous manual responding or when participants have certain sensory/motor impairments. In the present study, we evaluated the acceptability of the hands-free administration of two subjective workload questionnaires – the NASA Task Load Index (NASA-TLX) and the Multiple Resources Questionnaire (MRQ) – in a surgical training environment where manual responding is often constrained.
Method: Sixty-four undergraduates performed fifteen 90-s trials of laparoscopic training tasks (five replications of 3 tasks – cannulation, ring transfer, and rope manipulation). Half of the participants provided workload ratings using a traditional paper-and-pencil version of the NASA-TLX and MRQ; the remainder used a vocal (hands-free) version of the questionnaires. A follow-up experiment extended the evaluation of the hands-free version to actual medical students in a Minimally Invasive Surgery (MIS) training facility.
Results: The NASA-TLX was scored in two ways – (1) the traditional procedure, using participant-specific weights to combine its 6 subscales, and (2) a simplified procedure – the NASA Raw Task Load Index (NASA-RTLX) – using the unweighted mean of the subscale scores. Comparison of the scores obtained from the hands-free and written administration conditions yielded coefficients of equivalence of r = 0.85 (NASA-TLX) and r = 0.81 (NASA-RTLX). Equivalence estimates for the individual subscales ranged from r = 0.78 ("mental demand") to r = 0.31 ("effort"). Both administration formats and scoring methods were equally sensitive to task and repetition effects. For the MRQ, the coefficient of equivalence for the hands-free and written versions was r = 0.96 when tested on undergraduates. However, the sensitivity of the hands-free MRQ to task demands (partial η² = 0.138) was substantially less than that of the written version (partial η² = 0.252). This potential shortcoming of the hands-free MRQ did not seem to generalize to medical students, who showed robust task effects when using the hands-free MRQ (partial η² = 0.396). A detailed analysis of the MRQ subscales also revealed differences that may be attributable to a "spillover" effect in which participants' judgments about the demands of completing the questionnaires contaminated their judgments about the primary surgical training tasks.
Conclusion: Vocal versions of the NASA-TLX are acceptable alternatives to standard written formats when researchers wish to obtain global workload estimates. However, care should be used when interpreting the individual subscales if the object is to make comparisons between studies or conditions that use different administration modalities. For the MRQ, the vocal version was less sensitive to experimental manipulations than its written counterpart; however, when medical students rather than undergraduates used the vocal version, the instrument's sensitivity increased well beyond that obtained with any other combination of administration modality and instrument in this study. Thus, the vocal version of the MRQ may be an acceptable workload assessment technique for selected populations, and it may even be a suitable substitute for the NASA-TLX.
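
As a minimal illustration of the two scoring procedures compared above, the sketch below computes a weighted NASA-TLX score (participant-specific pairwise-comparison weights summing to 15) and the unweighted NASA-RTLX mean; the ratings and weights shown are hypothetical.

```python
# Minimal sketch of the two NASA-TLX scoring methods described in the abstract.
def nasa_tlx(ratings, weights):
    """Weighted TLX: 6 subscale ratings (0-100) combined with pairwise-comparison tallies summing to 15."""
    assert len(ratings) == len(weights) == 6 and sum(weights) == 15
    return sum(r * w for r, w in zip(ratings, weights)) / 15

def nasa_rtlx(ratings):
    """Raw TLX: unweighted mean of the six subscale ratings."""
    return sum(ratings) / len(ratings)

ratings = [70, 40, 55, 60, 65, 30]   # hypothetical subscale ratings
weights = [4, 1, 2, 3, 4, 1]         # hypothetical pairwise-comparison tallies (sum to 15)
print(nasa_tlx(ratings, weights), nasa_rtlx(ratings))
```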

13.
Aboveground dry biomass was estimated for the 1.3 million km² forested area south of the treeline in the eastern Canadian province of Québec by combining data from airborne and spaceborne LiDAR, a Landsat ETM+ land cover map, a Shuttle Radar Topography Mission (SRTM) digital elevation model, ground inventory plots, and vegetation zone maps. Plot-level biomass was calculated using allometric relationships between tree attributes and biomass. A small-footprint portable laser profiler was then flown over these inventory plots to develop a generic airborne LiDAR-based biomass equation (R² = 0.65, n = 207). The same airborne LiDAR system flew along four portions of orbits of the ICESat Geoscience Laser Altimeter System (GLAS). A square-root-transformed equation was developed to predict airborne profiling LiDAR estimates of aboveground dry biomass from GLAS waveform parameters combined with an SRTM slope index (R² = 0.59, n = 1325). Using the 104,044 quality-filtered GLAS pulses obtained during autumn 2003 from 97 orbits over the study area, we then predicted aboveground dry biomass for the main vegetation areas of Québec as well as for the entire province south of the treeline. Including cover type covariances both within and between GLAS orbits increased the standard errors of the estimates by two to five times at the vegetation zone level and by as much as threefold at the provincial level. Aboveground biomass for the whole study area averaged 39.0 ± 2.2 (standard error) Mg ha⁻¹ and totalled 4.9 ± 0.3 Pg. The biomass distribution was 12.6% northern hardwoods, 12.6% northern mixedwood, 38.4% commercial boreal, 13% non-commercial boreal, 14.2% taiga, and 9.2% treed tundra. Non-commercial forests represented 36% of the estimated aboveground biomass, highlighting the importance of remote northern forests to carbon sequestration. This study has shown that space-based forest inventories of northern forests could be an efficient way of estimating the amount, distribution, and uncertainty of aboveground biomass and carbon stocks at large spatial scales.
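
As a hedged, schematic illustration of the square-root-transformed regression step (with entirely synthetic stand-ins for the GLAS waveform metric and SRTM slope index, not the study's data), consider:

```python
# Minimal sketch: fit sqrt(biomass) as a linear function of two predictors, then back-transform.
import numpy as np

rng = np.random.default_rng(2)
waveform_metric = rng.uniform(5, 30, 200)   # stand-in GLAS waveform height metric
slope_index = rng.uniform(0, 1, 200)        # stand-in SRTM slope index
biomass = (0.8 * waveform_metric - 3 * slope_index + rng.normal(0, 1, 200)) ** 2  # Mg/ha, synthetic

X = np.column_stack([np.ones_like(waveform_metric), waveform_metric, slope_index])
coef, *_ = np.linalg.lstsq(X, np.sqrt(biomass), rcond=None)
predicted_biomass = (X @ coef) ** 2         # back-transform to Mg/ha
print(coef)
```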

14.
Computations of irregular primes and associated cyclotomic invariants were extended to all primes up to 12 million using multisectioning/convolution methods and a novel approach which originated in the study of Stickelberger codes (Shokrollahi, 1996). The latter idea reduces the problem to that of finding zeros of a polynomial over F_p of degree < (p − 1)/2 among the quadratic nonresidues mod p. Use of fast polynomial gcd algorithms gives an O(p log² p log log p) algorithm for this task. A more efficient algorithm, with comparable asymptotic running time, can be obtained by using Schönhage–Strassen integer multiplication techniques and fast multiple polynomial evaluation algorithms; this approach is particularly efficient when run on primes p for which p − 1 has small prime factors. We also give some improvements on previous implementations for verifying the Kummer–Vandiver conjecture and for computing the cyclotomic invariants of a prime.
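
The gcd step can be illustrated in a few lines: by Euler's criterion the quadratic nonresidues mod p are exactly the roots of x^((p−1)/2) + 1, so the zeros of f among the nonresidues are the roots of gcd(f, x^((p−1)/2) + 1) over F_p. A hedged SymPy sketch with a small, hypothetical example:

```python
# Minimal sketch (small example, p = 13): keep only the roots of f that are nonresidues.
from sympy import GF, Poly, gcd
from sympy.abc import x

p = 13
f = Poly(x**2 - 5*x + 6, x, domain=GF(p))                # roots 2 (nonresidue) and 3 (residue)
selector = Poly(x**((p - 1) // 2) + 1, x, domain=GF(p))  # vanishes exactly on the nonresidues
print(gcd(f, selector))                                  # a degree-1 factor carrying the root 2

# Brute-force cross-check via Euler's criterion:
print([a for a in range(1, p)
       if (a * a - 5 * a + 6) % p == 0 and pow(a, (p - 1) // 2, p) == p - 1])  # [2]
```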

15.
Fix a finite commutative ring R. Let u and v be power series over R, with v(0) = 0. This paper presents an algorithm that computes the first n terms of the composition u(v), given the first n terms of u and v, in n^(1+o(1)) ring operations. The algorithm is very fast in practice when R has small characteristic.
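
For contrast, here is a minimal sketch of the naive Horner-style composition of truncated power series over the hypothetical ring Z/mZ; it is not the paper's n^(1+o(1)) algorithm.

```python
# Minimal sketch: naive composition of truncated power series over Z/mZ via Horner's rule.
def compose_mod(u, v, n, m):
    """Return the first n coefficients of u(v(x)) mod m; requires v[0] == 0 mod m."""
    assert v[0] % m == 0
    result = [0] * n
    for c in reversed(u[:n]):                # Horner: result = result * v + c
        new = [0] * n
        for i, a in enumerate(result):
            if a:
                for j, b in enumerate(v[:n - i]):
                    new[i + j] = (new[i + j] + a * b) % m
        new[0] = (new[0] + c) % m
        result = new
    return result

# u(x) = 1 + x + x^2, v(x) = x + x^2 over Z/7Z: u(v) = 1 + x + 2x^2 + 2x^3 + ...
print(compose_mod([1, 1, 1], [0, 1, 1], 4, 7))  # [1, 1, 2, 2]
```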

16.
Diagnosis of reliability is an important topic for interconnection networks. Under the classical PMC model, Dahbura and Masson [5] proposed a polynomial-time algorithm with time complexity O(N^2.5) to identify all faulty nodes in an N-node network. This paper addresses the fault diagnosis of so-called bijective connection (BC) graphs, which include hypercubes, twisted cubes, locally twisted cubes, crossed cubes, and Möbius cubes. Utilizing a helpful structure proposed by Hsu and Tan [20], called the extending star by Lin et al. [24], and noting the existence of a structured Hamiltonian path within any BC graph, we present a fast diagnostic algorithm that identifies all faulty nodes in O(N) time, where N = 2^n (n ≥ 4) is the total number of nodes in the n-dimensional BC graph. As a result, this algorithm is significantly superior to the Dahbura–Masson algorithm when applied to BC graphs.

17.
We modify the concept of LLL-reduction of lattice bases in the sense of Lenstra, Lenstra and Lovász (Factoring polynomials with rational coefficients, Math. Ann. 261 (1982) 515–534) towards a faster reduction algorithm. We organize LLL-reduction in segments of the basis. Our SLLL-bases approximate the successive minima of the lattice in nearly the same way as LLL-bases. For integer lattices of dimension n given by a basis of length 2^O(n), SLLL-reduction runs in O(n^(5+ε)) bit operations for every ε > 0, compared to O(n^(7+ε)) for the original LLL and to O(n^(6+ε)) for the LLL algorithms of Schnorr (A more efficient algorithm for lattice basis reduction, Journal of Algorithms 9 (1988) 47–62) and Storjohann (Faster Algorithms for Integer Lattice Basis Reduction, TR 249, Swiss Federal Institute of Technology, ETH-Zürich, Department of Computer Science, Zurich, Switzerland, July 1996). We present an even faster algorithm for SLLL-reduction via iterated subsegments running in O(n³ log n) arithmetic steps. Householder reflections are shown to provide better accuracy than Gram–Schmidt for orthogonalizing LLL-bases in floating point arithmetic.

18.
We introduce a GPU-based parallel vertex substitution (pVS) algorithm for the p-median problem using the CUDA architecture by NVIDIA. pVS is developed based on the best-profit search algorithm, an implementation of vertex substitution (VS) that is shown to produce reliable solutions for p-median problems. In our approach, each candidate solution in the entire search space is allocated to a separate thread, rather than dividing the search space into parallel subsets. This strategy maximizes the usage of the GPU's parallel architecture and results in a significant speedup and robust solution quality. Computationally, pVS reduces the worst-case complexity from sequential VS's O(p · n²) to O(p · (n − p)) on each thread by parallelizing computational tasks in the GPU implementation. We tested the performance of pVS on two sets of numerous test cases (including 40 network instances from OR-lib) and compared the results against a CPU-based sequential VS implementation. Our results show that pVS achieved a speed gain ranging from 10 to 57 times over the traditional VS in all test network instances.
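
For reference, here is a hedged sequential sketch of plain vertex substitution (a best-improvement swap search) on a small synthetic distance matrix; the GPU parallelization described in the paper is not reproduced here.

```python
# Minimal sketch: sequential vertex substitution for the p-median problem.
import numpy as np

def pmedian_cost(dist, medians):
    """Total distance from every vertex to its nearest chosen median."""
    return dist[:, medians].min(axis=1).sum()

def vertex_substitution(dist, medians):
    """Repeatedly swap a median with a non-median vertex while the cost decreases."""
    medians = list(medians)
    improved = True
    while improved:
        improved = False
        best = pmedian_cost(dist, medians)
        for i in range(len(medians)):
            for v in range(dist.shape[0]):
                if v in medians:
                    continue
                trial = medians[:i] + [v] + medians[i + 1:]
                cost = pmedian_cost(dist, trial)
                if cost < best:
                    best, medians, improved = cost, trial, True
    return medians, best

rng = np.random.default_rng(3)
points = rng.random((30, 2))                                     # synthetic instance
dist = np.linalg.norm(points[:, None] - points[None, :], axis=2)
print(vertex_substitution(dist, [0, 1, 2]))                      # p = 3
```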

19.
Variation in the foliar chemistry of humid tropical forests is poorly understood, and airborne imaging spectroscopy could provide useful information at leaf and canopy scales. However, variation in canopy structure affects our ability to estimate foliar properties from airborne spectrometer data, yet these structural effects remain poorly quantified. Using leaf spectral (400–2500 nm) and chemical data collected from 162 Australian tropical forest species, along with partial least squares (PLS) analysis and canopy radiative transfer modeling, we determined the strength of the relationship between canopy reflectance and foliar properties under conditions of varying canopy structure. At the leaf level, chlorophylls, carotenoids and specific leaf area (SLA) were highly correlated with leaf spectral reflectance (r = 0.90–0.91). Foliar nutrients and water were also well represented by the leaf spectra (r = 0.79–0.85). When the leaf spectra were incorporated into the canopy radiative transfer simulations with an idealistic leaf area index (LAI) of 5.0, correlations between canopy reflectance spectra and leaf properties increased in strength by 4–18%. The effects of random LAI variation (LAI = 3.0–6.5) on the retrieval of leaf properties remained minimal, particularly for pigments and SLA (r = 0.92–0.93). In contrast, correlations between leaf nitrogen (N) and canopy reflectance estimates decreased from r = 0.87 at constant LAI = 5 to r = 0.65 with randomly varying LAI = 3.0–6.5. Progressive increases in the structural variability among simulated tree crowns had relatively little effect on pigment, SLA and water predictions. However, N and phosphorus (P) were more sensitive to canopy structural variability. Our modeling results suggest that multiple leaf chemicals and SLA can be estimated from leaf and canopy reflectance spectroscopy, and that the high-LAI canopies found in tropical forests enhance the signal via multiple scattering. Finally, the two factors we found to most negatively impact leaf chemical predictions from canopy reflectance were variation in LAI and viewing geometry, which can be managed with new airborne technologies and analytical methods.

20.
We develop a theory of Gröbner bases over Galois rings, following the usual formulation for Gröbner bases over finite fields. Our treatment includes a division algorithm, a characterization of Gröbner bases, and an extension of Buchberger's algorithm. One application is the problem of decoding alternant codes over Galois rings. To this end we consider the module M = {(a, b) : aS ≡ b mod x^r} of all solutions to the so-called key equation for alternant codes, where S is a syndrome polynomial. In decoding, a particular solution (Σ, Ω) ∈ M is sought satisfying certain conditions, and such a solution can be found in a Gröbner basis of M. Applying techniques introduced in the first part of this paper, we give an algorithm which returns the required solution.
