Similar documents
Found 20 similar documents (search time: 213 ms)
1.
《Computers & Structures》1986,22(3):475-478
Starting from the Lagrange equations, the generalized algebraic eigenproblem is obtained. The notions of the statical compression measure density matrix and of the coefficient α of the statical compression measure density (α-N) of the bar finite element are introduced. On the basis of these formulae, the limit load P of two steel supports of 110 kV electric lines has been calculated using the author's own CAD system, SLEN. After solving the generalized algebraic eigenproblem, the lowest eigenvalue satisfied α1 < 1 in all cases, with an average value of about 0.7. The calculated limit load P was then about 30% lower than the real limit load Po. To bring the calculated limit load (P) into technical conformity with the one measured at the trial station (Po), the following is proposed:
  (1) assumption of the statical compression measure density coefficient α ≠ 1,
  (2) assumption of the relation α = α1,
  (3) calculation of the coefficient α1 by a statistical identification process.

2.
Dong Qiu  Lan Shu 《Information Sciences》2008,178(18):3595-3604
This paper generalizes a classical result about the space of bounded closed sets with the Hausdorff metric, and derives the completeness of CB(X) from the completeness of the underlying metric space X, where CB(X) is the class of fuzzy sets with nonempty bounded closed α-cut sets, equipped with the supremum metric d, which takes the supremum of the Hausdorff distances between the corresponding α-cut sets. In addition, some common fixed point theorems for fuzzy mappings are proved, and two examples are given to illustrate the validity of the main results in fixed point theory.
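The supremum metric above is built from the ordinary Hausdorff distance between α-cut sets. A minimal sketch of the Hausdorff distance for finite subsets of the real line (the function name and the restriction to finite sets are illustrative assumptions, not taken from the paper):

```python
def hausdorff(A, B):
    """Hausdorff distance between two finite nonempty subsets of the reals:
    the larger of the two directed distances max_{a in A} min_{b in B} |a - b|."""
    def directed(S, T):
        return max(min(abs(s - t) for t in T) for s in S)
    return max(directed(A, B), directed(B, A))

# The supremum metric d of the paper would then take the supremum over alpha
# of hausdorff(alpha-cut of u, alpha-cut of v) for two fuzzy sets u, v.
d = hausdorff({0.0, 1.0}, {0.0, 2.0})
```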

3.
J.D. Stigter  K.J. Keesman 《Automatica》2004,40(8):1459-1464
The paper presents an optimal parametric sensitivity controller for estimation of a set of parameters in an experiment. The method is demonstrated for a fed-batch bioreactor case study involving optimal estimation of the half-saturation constant KS and the parameter combination μmaxX/Y, in which μmax is the maximum specific growth rate, X is the biomass concentration, and Y the yield coefficient. The resulting parametric sensitivity controller for the parameter KS is utilized in two sequential experiments using a ‘bang-bang-singular’ control strategy. An optimal solution for the weighted sum of squared sensitivities of both parameters is compared with the individual cases where only one specific parametric output sensitivity is controlled. The parametric uncertainty is handled in a completely deterministic way, so as to arrive at a control law that maximizes the parametric output sensitivity.

4.
Let X be a part of an image to be analysed. Given two arbitrary points x and y of X, let us define the number dX(x, y) as follows: dX(x, y) is the lower bound of the lengths of the arcs in X ending at the points x and y, if such arcs exist, and +∞ if not. The function dX is an X-intrinsic distance function, called the ‘geodesic distance’. Note that if x and y belong to two disjoint connected components of X, then dX(x, y) = +∞. In other words, dX is an appropriate distance function for dealing with connectivity problems. In the metric space (X, dX), all the classical morphological transformations (dilation, erosion, skeletonization, etc.) can be defined. The geodesic distance dX also provides rigorous definitions of topological transformations, which can be performed by automatic image analysers with the help of parallel iterative algorithms. All these notions are illustrated by several examples (definition of the length of a fibre and of an effective length factor; automatic detection of cells having at least one nucleus or having one single nucleus; definitions of the geodesic center and of the ends of an object without a hole; etc.). The corresponding algorithms are described.
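On a discrete binary image, the geodesic distance can be sketched as a shortest-path search restricted to the set X. The following is a minimal illustration; 4-connectivity and unit-length steps are simplifying assumptions of this sketch, not the paper's continuous definition:

```python
from collections import deque

def geodesic_distance(mask, start, goal):
    """Geodesic (within-set) distance between two pixels of a binary mask,
    computed by breadth-first search over 4-connected paths inside the set.
    Returns float('inf') when start and goal lie in different connected
    components, matching d_X(x, y) = +infinity in the text."""
    rows, cols = len(mask), len(mask[0])
    dist = {start: 0}
    q = deque([start])
    while q:
        r, c = q.popleft()
        if (r, c) == goal:
            return dist[(r, c)]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and mask[nr][nc] and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                q.append((nr, nc))
    return float('inf')

# An L-shaped set: the geodesic distance between the ends of the L
# exceeds their straight-line (Euclidean) distance.
X = [[1, 0, 0],
     [1, 0, 0],
     [1, 1, 1]]
```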

5.
We study the properties of possible static, spherically symmetric configurations in k-essence theories with Lagrangian functions of the form F(X), where X = φ,α φ^,α. A no-go theorem has been proved, claiming that a possible black-hole-like Killing horizon of finite radius cannot exist if the function F(X) is required to have a finite derivative dF/dX. Two exact solutions are obtained for special cases of k-essence: one for F(X) = F0 X^1/3, another for F(X) = F0 |X|^1/2 − 2Λ, where F0 and Λ are constants. Both solutions contain horizons, are not asymptotically flat, and provide illustrations for the no-go theorem. The first solution may be interpreted as describing a black hole in an asymptotically singular space-time, while in the second solution two horizons of infinite area are connected by a wormhole.

6.
We present first-principles enthalpies of formation and lattice parameters of iron, nickel, chromium and niobium alloys. Some of these results have been partially used in a recent assessment of the Fe-Ni-Cr-Nb quaternary phase diagram. Emphasis has been put on the fcc (A1) and bcc (A2) unary structures; the X3Y binary phases D022, L12, D03 and D0a; the X2Y Laves phases C14 (MgZn2), C15 (MgCu2) and C36 (MgNi2); the X7Y6 phase D85 (μ); and the X8Y4Z18 ternary phase D8b (σ). Their properties were computed with state-of-the-art DFT methods (PBE functional and PAW pseudopotentials). A comparison with experimental and theoretical data is also provided.

7.
Composite sampling may be used in industrial or environmental settings for the purpose of quality monitoring and regulation, particularly if the cost of testing samples is high relative to the cost of collecting samples. In such settings, it is often of interest to estimate the proportion of individual sampling units in the population that are above or below a given threshold value, C. We consider estimation of a proportion of the form p=P(X>C) from composite sample data, assuming that X follows a three-parameter gamma distribution. The gamma distribution is useful for modeling skewed data, which arise in many applications, and adding a shift parameter to the usual two-parameter gamma distribution also allows the analyst to model a minimum or baseline level of the response. We propose an estimator of p that is based on maximum likelihood estimates of the parameters α, β, and γ, and an associated variance estimator based on the observed information matrix. Theoretical properties of the estimator are briefly discussed, and simulation results are given to assess the performance of the estimator. We illustrate the proposed estimator using an example of composite sample data from the meat products industry.
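As a concrete sketch of the quantity being estimated, the following computes p = P(X > C) for a shifted gamma distribution. To keep the example dependency-free it is restricted to integer shape (the Erlang special case); that restriction, and the parameter names, are simplifications of mine rather than anything from the paper, whose estimator plugs in maximum likelihood estimates of α, β and γ:

```python
import math

def gamma3_exceedance(C, shape, scale, shift=0.0):
    """P(X > C) for a shifted gamma distribution with integer shape
    (Erlang case).  shape, scale, shift play the roles of alpha, beta,
    gamma in the abstract.  A general-shape implementation would use the
    regularized upper incomplete gamma function instead of this sum."""
    if C <= shift:
        return 1.0
    y = (C - shift) / scale
    # Erlang survival function: exp(-y) * sum_{k=0}^{shape-1} y^k / k!
    return math.exp(-y) * sum(y**k / math.factorial(k) for k in range(int(shape)))

# shape = 1 reduces to an exponential: P(X > C) = exp(-(C - shift)/scale)
p = gamma3_exceedance(2.0, shape=1, scale=2.0)
```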

8.
Let R be a commutative ring and let n ≥ 1. We study Γ(s), the generating function, and Ann(s), the ideal of characteristic polynomials, of an n-dimensional sequence s over R. We express f(X1,…,Xn) · Γ(s)(X1^−1,…,Xn^−1) as a partitioned sum. That is, we give (i) a 2^n-fold "border" partition and (ii) an explicit expression for the product as a 2^n-fold sum; the support of each summand is contained in precisely one member of the partition. A key summand is β0(f, s), the "border polynomial" of f and s, which is divisible by X1 ⋯ Xn. We say that s is eventually rectilinear if the elimination ideals Ann(s) ∩ R[Xi] contain an fi(Xi) for 1 ≤ i ≤ n. In this case, we show that Ann(s) is the ideal quotient (∑_{i=1}^n (fi) : β0(f, s)/(X1 ⋯ Xn)). When R and R[[X1, X2,…, Xn]] are factorial domains (e.g. R a principal ideal domain or F[X1,…, Xn]), we compute the monic generator γi of Ann(s) ∩ R[Xi] from known fi ∈ Ann(s) ∩ R[Xi] or from a finite number of 1-dimensional linear recurring sequences over R. Over a field F this gives an O(∑_{i=1}^n (δγi)^3) algorithm to compute an F-basis for Ann(s).

9.
Accurate detection of heavy metal-induced stress on the growth of crops is essential for the agricultural ecological environment and food security. This study explores singularity parameters as indicators for assessing a crop's Zn stress level by applying wavelet analysis to hyperspectral reflectance. The field experiment was conducted in Changchun City, Jilin Province, China. Hyperspectral and biochemistry data were collected from four crops growing in Zn-contaminated soils: rice, maize, soybean and cabbage. We applied a wavelet transform to the hyperspectral reflectance (350-1300 nm) and explored three categories of singularity parameters as indicators of crop Zn stress: the singularity range (SR), the singularity amplitude (SA) and the Lipschitz exponent (α). The results indicated that (i) the wavelet coefficients of the fifth decomposition level, obtained with the Daubechies 5 (db5) mother wavelet, proved successful for identifying crop Zn stress, and under Zn stress the SR was concentrated in the region around 550-850 nm of the spectral signal; (ii) the SR remained stable, while SA and α varied across the growth stages of the crops; (iii) the SR, SA and α differed among the four crop species, and the SA increased with the SR across species; (iv) α had a strong non-linear relationship with the Zn concentration (R²: 0.7601-0.9451), and the SA had a strong linear relationship with the Zn concentration (R²: 0.5141-0.8281). Singularity parameters can thus serve as indicators of a crop's Zn stress level and offer a quantitative analysis of the singularity of the spectral signal. The wavelet transform technique is shown to be very promising for detecting heavy metal stress in crops.

10.
E. P. Glinert  E. Katz 《Computing》1979,23(4):381-391
We describe an algorithm which enables us to compute the homology of Ω(X1 ∨ X2) in terms of the homologies of ΩX1 and ΩX2 (where ΩX is the loop space of X). A computer program implementing this algorithm is then presented.

11.
Cluster randomization trials are increasingly popular among healthcare researchers. Intact groups (called ‘clusters’) of subjects are randomized to receive different interventions, and all subjects within a cluster receive the same intervention. In cluster randomized trials, a cluster is the unit of randomization, and a subject is the unit of analysis. Variation in cluster sizes can affect the sample size estimate or the power of the study. [Guittet, L., Ravaud, P., Giraudeau, B., 2006. Planning a cluster randomized trial with unequal cluster sizes: Practical issues involving continuous outcomes. BMC Medical Research Methodology 6 (17), 1-15] investigated the impact of an imbalance in cluster size on the power of trials with continuous outcomes through simulations. In this paper, we examine the impact of cluster size variation and intracluster correlation on the power of the study for binary outcomes through simulations. Because the sample size formula for cluster randomization trials is based on a large sample approximation, we evaluate the performance of the sample size formula with small sample sizes through simulation. Simulation study findings show that the sample size formula (mp) accounting for unequal cluster sizes yields empirical powers closer to the nominal power than the sample size formula (ma) for the average cluster size method. The differences in sample size estimates and empirical powers between ma and mp get smaller as the imbalance in cluster sizes gets smaller.
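The inflation of the required sample size caused by clustering and by cluster size imbalance is usually summarized by a design effect. The adjustment below, which inflates the effective mean cluster size by the squared coefficient of variation of the cluster sizes, is a standard textbook formula used here for illustration; it is an assumption of this sketch, not the exact mp/ma formulas compared in the paper:

```python
def design_effect(mean_size, icc, cv=0.0):
    """Variance inflation factor for a cluster randomized trial.

    For equal clusters (cv = 0) this is the classical 1 + (m - 1)*rho;
    with unequal cluster sizes a common adjustment inflates the
    effective mean size by (1 + cv^2), where cv is the coefficient of
    variation of the cluster sizes and icc is the intracluster
    correlation rho."""
    return 1.0 + ((cv**2 + 1.0) * mean_size - 1.0) * icc

# Unequal cluster sizes always require more subjects than equal ones:
equal = design_effect(mean_size=20, icc=0.05)            # 1.95
unequal = design_effect(mean_size=20, icc=0.05, cv=0.5)  # 2.20
```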

12.
In this paper we give some properties of interval operators F which guarantee the convergence of the interval sequence {Xk} defined by Xk+1 := F(Xk) ∩ Xk to a unique fixed interval X̂. This interval X̂ encloses the “zero-set” X* of a function strip G(x) := [g(x), ḡ(x)]. For some known interval operators we investigate under which assumptions these properties are valid.
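A classical example of such a contracting iteration is the interval Newton method, where F is the Newton operator and each step intersects with the previous interval exactly as in Xk+1 := F(Xk) ∩ Xk. The sketch below uses a point function rather than a function strip, and the operator shown is the standard Newton operator, not one of the paper's own:

```python
import math

def interval_newton(X, f, dF, tol=1e-12, max_iter=100):
    """Iterate X_{k+1} = N(X_k) ∩ X_k, where N is the interval Newton
    operator N(X) = m - f(m)/dF(X), m the midpoint of X and dF(a, b) an
    interval enclosure of f' on [a, b] that must not contain 0."""
    lo, hi = X
    for _ in range(max_iter):
        m = 0.5 * (lo + hi)
        dlo, dhi = dF(lo, hi)
        fm = f(m)
        # interval quotient f(m) / [dlo, dhi], 0 not in [dlo, dhi]
        q = sorted([fm / dlo, fm / dhi])
        nlo, nhi = m - q[1], m - q[0]
        # intersect the Newton image with the previous interval
        lo, hi = max(lo, nlo), min(hi, nhi)
        if hi - lo < tol:
            break
    # guard against rounding making the final enclosure degenerate
    return (lo, hi) if lo <= hi else (hi, lo)

# Enclose sqrt(2) as the zero of f(x) = x^2 - 2 on the interval [1, 2]:
lo, hi = interval_newton((1.0, 2.0), lambda x: x * x - 2.0,
                         lambda a, b: (2.0 * a, 2.0 * b))
```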

13.
Genome-wide association studies are likely to be conducted on a large scale in the near future. In such studies, searching over hundreds of thousands of markers for the few that are associated with disease raises the multiple-hypothesis testing problem in its severe form. We explore, in a two-stage design, how the use of the false discovery rate (FDR) can alleviate the burden of a prohibitively strict significance level for single-marker tests and still control the number of false positive findings when there is more than one causal variant. The FDR is the expected proportion of false positives among all significant findings. It can be approximated by (1−p0)α/[(1−p0)α+p0(1−β)], where p0 is the proportion of true causal markers, α is the type I error rate and 1−β the power of a two-stage study. When 500,000 SNPs are genotyped in the first stage with a fixed SNP array and the most significant SNPs are genotyped in the second stage with standard but 20 times more expensive high-throughput techniques, up to 20% savings in the minimum genotyping cost are achieved for p0 in the range of 10⁻⁵ to 5×10⁻⁴ and FDR in the range of 0.05 to 0.7, compared to when a Bonferroni-corrected significance level is used. In terms of sample size, the saving is up to 60%. However, these savings come at the cost of more false positive findings.
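The FDR approximation quoted above is simple enough to compute directly. A minimal sketch (the function name and example numbers are illustrative, not values from the paper):

```python
def fdr(p0, alpha, power):
    """Approximate false discovery rate (1-p0)*alpha / [(1-p0)*alpha + p0*(1-beta)].

    p0    : proportion of truly causal markers
    alpha : type I error rate of the two-stage test
    power : power (1 - beta) of the two-stage test
    """
    fp = (1.0 - p0) * alpha   # expected fraction of markers that are false positives
    tp = p0 * power           # expected fraction of markers that are true positives
    return fp / (fp + tp)

# Even with very rare causal markers, a strict alpha keeps the FDR small:
rate = fdr(p0=1e-4, alpha=1e-6, power=0.8)
```

Relaxing α raises the FDR monotonically, which is exactly the trade-off the abstract describes: a looser threshold saves genotyping cost at the price of more false positive findings.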

14.
H. Cornelius  R. Lohner 《Computing》1984,33(3-4):331-347
Given a continuous function f: D → ℝ on a compact interval D ⊆ ℝ, we consider the problem of finding an interval V(f, X) that contains the range of values of f, W(f, X) = {f(x) | x ∈ X}, on a subinterval X ⊆ D. To reach this goal we use methods from interval arithmetic. When V(f, X) is computed by one of the well-known methods from the literature for a sequence {Xn} of intervals with decreasing diameters d(Xn) → 0, then generally the overestimation of W(f, Xn) by V(f, Xn) will decrease at most quadratically with d(Xn). The method presented in this paper, however, allows the computation of V(f, Xn) such that this overestimation decreases with an arbitrary power s > 0 of d(Xn). Theoretically any power s ∈ ℕ is possible; in practice, however, 1 ≤ s ≤ 4 can be reached with little or moderate amount of work, and s = 5 or s = 6 with some more work. A generalization to functions f: ℝⁿ → ℝ is given at the end of the paper.

15.
Traditional technology acceptance model (TAM) studies establish and verify a model of causal relationships between variables by factor analysis or structural equation modeling. However, some technologies are highly complicated and not all respondents understand them thoroughly; certain variables are not compatible with the assumption of independence, and causal relationships cannot be analyzed accurately when large samples are difficult to obtain, leading to mistaken conclusions. This study establishes a TAM through the Decision Making Trial and Evaluation Laboratory (DEMATEL) method, which accounts for interdependence between variables; respondents may understand the technology well yet be unable to express this adequately under the limitations of mass sampling. Score quantification in traditional surveys asks respondents to choose from a limited set of wordings so as to stress the dominant attribution, without considering the fuzziness of human thinking, which results in an imprecise summary. This study therefore adopts the fuzzy DEMATEL method to calculate the causal relationships and the levels of mutual effect, building the technology acceptance model for the Product Life Cycle Management (PLM) system and providing administrators with references for promoting new technology and solving complicated, difficult problems in practice. The adoption of Product Life Cycle Management by the Taiwan optronics manufacturing industry is used to illustrate the application and effect of this approach. The research found that the influence structure obtained with the fuzzy DEMATEL method is similar to the TAM2 model. The major differences are that the subjective standard (X5) did not affect the impression (X8), while the experience (X6) directly affects the purpose of use (X1), and the purpose of use (X3) also affects useful knowledge (X2).

16.
This paper proposes a two-stage feedforward neural network (FFNN) based approach for modeling the fundamental frequency (F0) values of a sequence of syllables. In this study, (i) linguistic constraints represented by positional, contextual and phonological features, (ii) production constraints represented by articulatory features, and (iii) linguistically relevant tilt parameters are proposed for predicting intonation patterns. In the first stage, tilt parameters are predicted using the linguistic and production constraints. In the second stage, F0 values of the syllables are predicted using the tilt parameters predicted in the first stage together with the basic linguistic and production constraints. The prediction performance of the neural network models is evaluated using objective measures such as the average prediction error (μ), standard deviation (σ) and linear correlation coefficient (γX,Y). The prediction accuracy of the proposed two-stage FFNN model is compared with that of other statistical models such as Classification and Regression Tree (CART) and Linear Regression (LR) models. The prediction accuracy of the intonation models is also analyzed by conducting listening tests that evaluate the quality of synthesized speech obtained after incorporating the intonation models into the baseline system. From the evaluation, it is observed that prediction accuracy is better for the two-stage FFNN models than for the other models.

17.
The solution of Schrödinger's equation involves a high number N of independent variables. Furthermore, the restriction to (anti)symmetric functions implies some complications. We propose a sparse-grid approximation which leads to a set of non-orthogonal basis functions. Due to the antisymmetry, scalar products are expressed by sums of N×N determinants. Because of the sparsity of the sparse-grid approximation, these determinants can be reduced from size N×N to a much smaller size K×K. The sums over all permutations reduce to the quantities detK(α1,…,αK) := ∑_{1≤i1,i2,…,iK≤N} det(a_{α,iβ}^{(αβ)})_{α,β=1,…,K} to be determined, where the a_{i,j}^{(αβ)} are certain one-dimensional scalar products involving (sparse-grid) basis functions φαβ. We propose a method to evaluate this expression such that the computational cost grows asymptotically as O(N³) with respect to N for fixed K, while the storage requirements increase only with the factor N². Furthermore, we describe a parallel version (N processors) with full speed-up.

18.
In a graph G = (V, E), a bisection (X, Y) is a partition of V into sets X and Y such that |X| ≤ |Y| ≤ |X| + 1. The size of (X, Y) is the number of edges between X and Y. In the Max Bisection problem we are given a graph G = (V, E) and are required to find a bisection of maximum size. It is not hard to see that ⌈|E|/2⌉ is a tight lower bound on the maximum size of a bisection of G. We study the parameterized complexity of the following parameterized problem, called Max Bisection above Tight Lower Bound (Max-Bisec-ATLB): decide whether a graph G = (V, E) has a bisection of size at least ⌈|E|/2⌉ + k, where k is the parameter. We show that this parameterized problem has a kernel with O(k²) vertices and O(k³) edges, i.e., every instance of Max-Bisec-ATLB is equivalent to an instance of Max-Bisec-ATLB on a graph with at most O(k²) vertices and O(k³) edges.
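For intuition, the tight lower bound ⌈|E|/2⌉ can be checked on small graphs by brute force. Exhaustive enumeration is purely an illustration of the definitions; the paper's contribution is the kernelization, not this search:

```python
from itertools import combinations
from math import ceil

def max_bisection(n, edges):
    """Exhaustively compute the maximum bisection size of a graph on
    vertices 0..n-1.  Enumerating the sets X of size floor(n/2) covers
    all bisections (X, Y), since (X, Y) and (Y, X) cut the same edges."""
    best = 0
    for X in combinations(range(n), n // 2):
        Xs = set(X)
        cut = sum((u in Xs) != (v in Xs) for u, v in edges)
        best = max(best, cut)
    return best

# Triangle: |E| = 3, and the best bisection cuts 2 >= ceil(3/2) edges.
edges = [(0, 1), (1, 2), (0, 2)]
b = max_bisection(3, edges)
```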

19.
Given a Gaussian random walk X with drift, we consider the problem of estimating its first-passage time τA for a given level A from an observation process Y correlated with X. Estimators may be any stopping times τ with respect to the observation process Y. Two cases of the process Y are considered: a noisy version of X, and a process X with delay d. For a given loss function f(x), in both cases we find exact asymptotics of the minimal possible risk E f((τ − τA)/r) as A, d → ∞, where r is a normalizing coefficient. The results are extended to the corresponding continuous-time setting where X and Y are Brownian motions with drift.

20.
Let F1,…,Fs ∈ R[X1,…,Xn] be polynomials of degree at most d, and suppose that F1,…,Fs are represented by a division-free arithmetic circuit of non-scalar complexity (size) L. Let A be the arrangement of R^n defined by F1,…,Fs. For any point x ∈ R^n, we consider the task of determining the signs of the values F1(x),…,Fs(x) (sign condition query) and the task of determining the connected component of A to which x belongs (point location query). By an extremely simple reduction to the well-known case where the polynomials F1,…,Fs are affine linear (i.e., polynomials of degree one), we first show that there exists a database of (possibly enormous) size s^O(L+n) which allows the evaluation of the sign condition query using only (Ln)^O(1) log(s) arithmetic operations. The key point of this paper is the proof that this upper bound is almost optimal. Along the way, we show that the point location query can be evaluated using d^O(n) log(s) arithmetic operations. Based on a different argument, analogous complexity upper bounds are exhibited with respect to the bit model in case F1,…,Fs belong to Z[X1,…,Xn] and satisfy a certain natural genericity condition. Mutatis mutandis, our upper-bound results may be applied to the sparse and dense representations of F1,…,Fs.

