Similar Articles
 20 similar articles found (search time: 31 ms)
1.
Let Ω be a polygonal domain in R^n, τ_h an associated triangulation and u_h the finite element solution of a well-posed second-order elliptic problem on (Ω, τ_h). Let M = {M_i}_{i=1}^{p+q} be the set of nodes which defines the vertices of the triangulation τ_h: for each i, M_i = {x_{il} | 1 ≤ l ≤ n} in R^n. The object of this paper is to provide a computational tool to approximate the best set of positions M̂ of the nodes, and hence the best triangulation τ̂_h, which minimizes the solution error in the natural norm associated with the problem. The main results of this paper are theorems which provide explicit expressions for the partial derivatives of the associated energy functional with respect to the coordinates x_{il}, 1 ≤ l ≤ n, of each of the variable nodes M_i, i = 1, …, p.
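The abstract does not reproduce the explicit derivative formulas, but the idea of differentiating the discrete energy with respect to node positions can be illustrated numerically. Below is a minimal Python sketch for a 1D Poisson model problem; the function names and the finite-difference gradient are illustrative assumptions, not the paper's closed-form expressions.

```python
import numpy as np

def fem_energy(nodes, f=lambda x: 1.0):
    """Solve -u'' = f on (0, 1) with u(0) = u(1) = 0 by linear finite elements
    on the given 1D mesh, and return the energy functional
    J(u_h) = 1/2 a(u_h, u_h) - (f, u_h), which mesh optimization seeks to
    drive down by moving the nodes."""
    n = len(nodes)
    h = np.diff(nodes)
    K = np.zeros((n, n))
    b = np.zeros(n)
    for e in range(n - 1):
        K[e:e + 2, e:e + 2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h[e]
        b[e:e + 2] += f(0.5 * (nodes[e] + nodes[e + 1])) * h[e] / 2.0
    inner = slice(1, n - 1)                     # homogeneous Dirichlet BCs
    u = np.zeros(n)
    u[inner] = np.linalg.solve(K[inner, inner], b[inner])
    return 0.5 * u @ K @ u - b @ u

# Finite-difference approximation of dJ/dx_2, the derivative of the energy
# with respect to one movable interior node (the paper derives such
# derivatives explicitly; here they are only estimated numerically).
nodes = np.array([0.0, 0.2, 0.5, 0.8, 1.0])
eps = 1e-6
moved = nodes.copy()
moved[2] += eps
print((fem_energy(moved) - fem_energy(nodes)) / eps)
```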

2.
It is well recognized that the main factor that hinders the applications of Association Rules (ARs) is the huge number of ARs returned by the mining process. In this paper, we propose an effective solution that presents concise mining results by eliminating the redundancy in the set of ARs. We adopt the concept of δ-tolerance to define the set of δ-Tolerance ARs (δ-TARs), which is a concise representation for the set of ARs. The notion of δ-tolerance is a relaxation on the closure defined on the support of frequent itemsets, thus allowing us to effectively prune the redundant ARs. We devise a set of inference rules, with which we prove that the set of δ-TARs is a non-redundant representation of the ARs. In addition, we prove that the set of ARs derived from the δ-TARs by the inference rules is sound and complete. We also develop a compact tree structure called the δ-TAR tree, which facilitates the efficient generation of the δ-TARs and the derivation of other ARs. Experimental results verify the efficiency of using the δ-TAR tree to generate the δ-TARs and to query the ARs. The set of δ-TARs is shown to be significantly smaller than the state-of-the-art concise representations of ARs. In addition, the approximations of the support and confidence of the ARs derived from the δ-TARs are highly accurate.

3.
Phase equilibria and thermodynamic properties of the KCl–K2CO3–NaCl–Na2CO3 system were analyzed on the basis of the thermodynamic evaluation of the KCl–NaCl, KCl–K2CO3, NaCl–Na2CO3, K2CO3–Na2CO3 and KCl–K2CO3–NaCl–Na2CO3 systems. The Gibbs energies of the individual phases were approximated by two-sublattice models for ionic liquids and crystals. Most of the experimental information was well described by the present set of thermodynamic parameters. The lowest monovariant eutectic temperature in the KCl–NaCl–Na2CO3 system is located at 573 °C, with a composition of X_Na2CO3 = 0.31, X_KCl = 0.35 and X_NaCl = 0.34.

4.
The two-dimensional range minimum query problem is to preprocess a static m by n matrix (two-dimensional array) A of size N = mn, such that subsequent queries, asking for the position of the minimum element in a rectangular range within A, can be answered efficiently. We study the trade-off between the space and query time of the problem. We show that every algorithm that is allowed to access A during the query and uses a data structure of size O(N/c) bits requires Ω(c) query time, for any c with 1 ≤ c ≤ N. This lower bound holds for arrays of any dimension. In particular, for the one-dimensional version of the problem, the lower bound is tight up to a constant factor. In two dimensions, we complement the lower bound with an indexing data structure of size O(N/c) bits which can be preprocessed in O(N) time to support O(c log^2 c) query time. For c = O(1), this is the first O(1) query time algorithm using a data structure of optimal size O(N) bits. For the case where queries cannot probe A, we give a data structure of size O(N·min{m, log n}) bits with O(1) query time, assuming m ≤ n. This leaves a gap to the space lower bound of Ω(N log m) bits for this version of the problem.
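As a rough illustration of the space/query-time trade-off (not the paper's O(c log^2 c) two-dimensional indexing structure), the Python sketch below keeps one minimum position per block of size c in a 1D array, so the index uses about N/c words while a query scans at most two partial blocks plus the block minima in between; the class name and interface are assumptions.

```python
class BlockRMQ:
    """Toy 1D range-minimum index: store the position of the minimum of each
    block of size c.  Space is ~N/c words; a query does O(c + N/c) work.
    This only illustrates the flavor of the trade-off discussed above."""
    def __init__(self, A, c):
        self.A, self.c = A, c
        self.block_min = [min(range(i, min(i + c, len(A))), key=A.__getitem__)
                          for i in range(0, len(A), c)]

    def query(self, lo, hi):                 # position of min of A[lo..hi]
        A, c = self.A, self.c
        best = lo
        i = lo
        while i <= hi:
            if i % c == 0 and i + c - 1 <= hi:   # block fully inside the range
                j = self.block_min[i // c]
                i += c
            else:                                # boundary element
                j = i
                i += 1
            if A[j] < A[best]:
                best = j
        return best

A = [5, 2, 8, 1, 9, 3, 7, 4, 6, 0]
rmq = BlockRMQ(A, c=3)
assert rmq.query(2, 8) == 3                  # A[3] = 1 is the minimum of A[2..8]
print(rmq.query(0, 9), A[rmq.query(0, 9)])
```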

5.
A relatively simple mathematical procedure for the reconstruction of the 3-dimensional (3D) image of the left ventricle (LV) of the heart is presented. The method is based on the assumption that every ray which emanates from the midpoint of the long axis of the 3D body crosses the surface boundary of the ventricle at one and only one point. The coordinates (r_i, φ_i, θ_i) of the data points on, say, the outer boundary (i.e., the epicardium) are calculated in a spherical coordinate system having its origin at the midpoint of the long axis. The problem of defining the coordinates of a prescribed grid point on the boundary is treated as an interpolation problem for the function r = r(φ, θ), defined on the rectangle 0 ≤ φ ≤ 2π, 0 ≤ θ ≤ π, with r_i given at the points (φ_i, θ_i).
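A minimal Python sketch of the underlying idea, assuming SciPy's griddata for the interpolation step and synthetic sample points (the paper's specific interpolation procedure is not reproduced): the single-valued radius assumption lets the boundary be treated as a function r(φ, θ) on the rectangle, interpolated onto a prescribed grid and mapped back to Cartesian coordinates.

```python
import numpy as np
from scipy.interpolate import griddata

# Synthetic "epicardial" samples: each (phi, theta) direction meets the
# boundary exactly once, so the surface is r = r(phi, theta).
rng = np.random.default_rng(0)
phi_i = rng.uniform(0.0, 2 * np.pi, 200)
theta_i = rng.uniform(0.0, np.pi, 200)
r_i = 3.0 + 0.4 * np.cos(2 * phi_i) * np.sin(theta_i)   # made-up LV-like shape

# Prescribed (phi, theta) grid and interpolation of the radius onto it
phi_g, theta_g = np.meshgrid(np.linspace(0, 2 * np.pi, 60),
                             np.linspace(0, np.pi, 30))
r_g = griddata((phi_i, theta_i), r_i, (phi_g, theta_g), method='linear')

# Back to Cartesian coordinates for the reconstructed 3D surface
# (points outside the convex hull of the samples come out as NaN).
x = r_g * np.sin(theta_g) * np.cos(phi_g)
y = r_g * np.sin(theta_g) * np.sin(phi_g)
z = r_g * np.cos(theta_g)
print(np.nanmin(r_g), np.nanmax(r_g))
```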

6.
A new Mn(II) complex, MnL2Cl2 (L = azino-di(5,6-azafluorene)-κ^2-N,N′), was synthesized and utilized as an electrochemical indicator for the determination of hepatitis B virus (HBV) DNA, based on the interaction of MnL2Cl2 with DNA. The electrochemical behavior of the interaction of MnL2Cl2 with salmon sperm DNA was investigated on a glassy carbon electrode (GCE). In the presence of salmon sperm DNA, the peak current of [MnL2]^2+ decreased and the peak potential shifted positively, without the appearance of new peaks. The binding ratio between [MnL2]^2+ and salmon sperm DNA was calculated to be 2:1 and the binding constant was 3.72 × 10^8 mol^2 L^−2. The extent of hybridization was evaluated on the basis of the difference between the signals of [MnL2]^2+ with the probe DNA before and after hybridization with the complementary sequence. Control experiments performed with non-complementary and mismatched sequences demonstrated the good selectivity of the biosensor. With this approach, a sequence of the HBV could be quantified over the range from 1.76 × 10^−8 to 1.07 × 10^−6 mol L^−1, with a linear correlation coefficient of r = 0.9904 and a detection limit of 6.80 × 10^−9 mol L^−1. Additionally, the binding mechanism was preliminarily discussed; the mode of interaction between MnL2Cl2 and DNA was found to be primarily intercalation binding.

7.
Given a parametric polynomial family p(s; Q) := {Σ_{k=0}^{n} a_k(q) s^k : q ∈ Q}, Q ⊂ R^m, the robust root locus of p(s; Q) is defined as the two-dimensional zero set {s ∈ C : p(s; q) = 0 for some q ∈ Q}. In this paper we are concerned with the problem of generating robust root loci for the parametric polynomial family p(s; E) whose polynomial coefficients depend polynomially on the elements of the parameter vector q ∈ E, where E is an m-dimensional ellipsoid. More precisely, we present a computational technique for testing the zero inclusion/exclusion of the value set p(z; E) for a fixed point z in C, and then apply an integer-labelled pivoting procedure to generate the boundary of each subregion of the robust root locus of p(s; E). The proposed zero inclusion/exclusion test algorithm is based on using some simple sufficient conditions for the zero inclusion and exclusion of the value set p(z; E) and subdividing the domain E iteratively. Furthermore, an interval method is incorporated into the algorithm to speed up the zero inclusion/exclusion test by reducing the number of zero inclusion test operations. To illustrate the effectiveness of the proposed algorithm for the generation of the robust root locus, an example is provided.

8.
We consider a variant of the classical Longest Common Subsequence problem called Doubly-Constrained Longest Common Subsequence (DC-LCS). Given two strings s1 and s2 over an alphabet Σ, a set Cs of strings, and a function Co : Σ → N, the DC-LCS problem consists of finding the longest subsequence s of s1 and s2 such that s is a supersequence of all the strings in Cs and such that the number of occurrences in s of each symbol σ ∈ Σ is upper bounded by Co(σ). The DC-LCS problem provides a clear mathematical formulation of a sequence comparison problem in Computational Biology and generalizes two other constrained variants of the LCS problem that have been introduced previously in the literature: the Constrained LCS and the Repetition-Free LCS. We present two results for the DC-LCS problem. First, we illustrate a fixed-parameter algorithm, where the parameter is the length of the solution, which is also applicable to the more specialized problems. Second, we prove a parameterized hardness result for the Constrained LCS problem when the parameters are the number of constraint strings (|Cs|) and the size of the alphabet Σ. This hardness result also implies the parameterized hardness of the DC-LCS problem (with the same parameters) and its NP-hardness when the size of the alphabet is constant.
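The constraints of DC-LCS can be made concrete with a small feasibility checker. The Python sketch below (with an invented toy instance) only verifies that a candidate string satisfies the three conditions; it is not the fixed-parameter algorithm of the paper.

```python
from collections import Counter

def is_subsequence(s, t):
    """Return True if s is a subsequence of t."""
    it = iter(t)
    return all(ch in it for ch in s)

def is_feasible_dclcs(s, s1, s2, Cs, Co):
    """Check the DC-LCS constraints for a candidate string s:
    (1) s is a common subsequence of s1 and s2,
    (2) s is a supersequence of every constraint string in Cs,
    (3) each symbol sigma occurs in s at most Co[sigma] times."""
    common = is_subsequence(s, s1) and is_subsequence(s, s2)
    covers = all(is_subsequence(c, s) for c in Cs)
    capped = all(cnt <= Co.get(sym, 0) for sym, cnt in Counter(s).items())
    return common and covers and capped

# Toy instance (all strings invented for illustration)
s1, s2 = "agcatgctga", "acgtacgtt"
Cs = ["cg"]                                 # s must contain "cg" as a subsequence
Co = {"a": 2, "c": 2, "g": 2, "t": 2}       # occurrence caps
print(is_feasible_dclcs("acgt", s1, s2, Cs, Co))
```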

9.
A highly sensitive hydrazine sensor was developed based on the electrodeposition of gold nanoparticles onto a choline film modified glassy carbon electrode (GNPs/Ch/GCE). The electrochemical experiments showed that the GNPs/Ch film exhibited a distinctly higher activity for the electro-oxidation of hydrazine than GNPs alone, with a 3.4-fold enhancement of the peak current. The kinetic parameters, such as the electron transfer coefficient (α) and the rate of electron exchange (k) for the oxidation of hydrazine, were determined. The diffusion coefficient (D) of hydrazine in solution was also calculated by chronoamperometry. The sensor exhibited two wide linear ranges, 5.0 × 10^−7 to 5.0 × 10^−4 M and 5.0 × 10^−4 to 9.3 × 10^−3 M, with a detection limit of 1.0 × 10^−7 M (S/N = 3). The proposed electrode presented excellent operational and storage stability for the determination of hydrazine. Moreover, the sensor showed outstanding sensitivity, selectivity and reproducibility. All the results indicated a good potential application of this sensor in the detection of hydrazine.

10.
The diameter of a graph is an important factor for communication, as it determines the maximum communication delay between any pair of processors in a network. Graham and Harary [N. Graham, F. Harary, Changing and unchanging the diameter of a hypercube, Discrete Applied Mathematics 37/38 (1992) 265-274] studied how the diameter of hypercubes can be affected by adding and deleting edges. They considered whether the diameter changes or remains unchanged when edges are added or deleted. In this paper, we modify three measures proposed in Graham and Harary (1992) to include the extent of the change of the diameter. Let D^{−k}(G) be the least number of edges whose addition to G decreases the diameter by (at least) k, D^{+0}(G) be the maximum number of edges whose deletion from G does not change the diameter, and D^{+k}(G) be the least number of edges whose deletion from G increases the diameter by (at least) k. In this paper, we find the values of D^{−k}(C_m), D^{−1}(T_{m,n}), D^{−2}(T_{m,n}), D^{+1}(T_{m,n}), and a lower bound for D^{+0}(T_{m,n}), where C_m is a cycle with m vertices and T_{m,n} is a torus of size m by n.
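The D^{−k} measure can be probed by brute force on a tiny graph; the Python sketch below (function names and the exhaustive search are illustrative assumptions, not the paper's derivations) computes the diameter by BFS and looks for the smallest set of added edges that decreases it by at least k.

```python
from itertools import combinations
from collections import deque

def bfs_ecc(adj, s):
    """Eccentricity of vertex s by breadth-first search."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return max(dist.values())

def diameter(edges, n):
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    return max(bfs_ecc(adj, s) for s in range(n))

def D_minus_k(edges, n, k):
    """Least number of added edges that decreases the diameter by at least k
    (brute force over candidate edge sets, feasible only for tiny graphs)."""
    d = diameter(edges, n)
    non_edges = [e for e in combinations(range(n), 2)
                 if e not in edges and tuple(reversed(e)) not in edges]
    for size in range(1, len(non_edges) + 1):
        for extra in combinations(non_edges, size):
            if diameter(edges + list(extra), n) <= d - k:
                return size
    return None

m = 8
cycle = [(i, (i + 1) % m) for i in range(m)]   # C_8, diameter 4
print(D_minus_k(cycle, m, 1))                  # fewest chords that lower the diameter
```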

11.
The well-known Goldbach Conjecture (GC) states that any sufficiently large even number can be represented as a sum of two odd primes. Although not yet proved, it has been checked for integers up to 10^14. Using two stronger versions of the conjecture, we offer a simple and fast method for the recognition of a gray-box group G known to be isomorphic to S_n (or A_n) with known n ≥ 20, i.e. for the construction of an isomorphism from G to S_n (or A_n). Correctness and rigorous worst-case complexity estimates rely heavily on the conjectures, and yield times of O([ρ + ν + μ] n log^2 n) or O([ρ + ν + μ] n log n / log log n), depending on which of the stronger versions of the GC is assumed to hold. Here, ρ is the complexity of generating a uniform random element of G, ν is the complexity of finding the order of a group element in G, and μ is the time necessary for group multiplication in G. A rigorous lower bound and a probabilistic approach to the time complexity of the algorithm are discussed in the Appendix.
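The recognition algorithm itself is not sketched here, but the Goldbach-type check it relies on is easy to state in code. The following Python snippet (the sieve bound and helper names are assumptions) verifies, for small even numbers, that a representation as a sum of two odd primes exists.

```python
def sieve(limit):
    """Set of primes up to limit, by the sieve of Eratosthenes."""
    is_p = bytearray([1]) * (limit + 1)
    is_p[0:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if is_p[i]:
            is_p[i * i :: i] = bytearray(len(is_p[i * i :: i]))
    return {i for i, flag in enumerate(is_p) if flag}

def goldbach_pair(n, primes):
    """Return (p, q) with p + q = n and both p, q odd primes, or None."""
    for p in sorted(primes):
        if p > n // 2:
            break
        if p > 2 and (n - p) in primes:
            return p, n - p
    return None

primes = sieve(10_000)
for n in range(6, 10_001, 2):        # check every even number in a small range
    assert goldbach_pair(n, primes) is not None
print(goldbach_pair(10_000, primes))
```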

12.
International Journal of Computer Mathematics, 2012, 89(10): 1287-1293
A class of numerical methods is proposed for solving general third-order ordinary differential equations directly by collocation at the grid points x = x_{n+j}, j = 0(1)k, and at an off-grid point x = x_{n+u}, where k is the step number of the method and u is an arbitrary rational number in (x_n, x_{n+k}). A predictor of order 2k − 1 is also proposed to cater for y_{n+k} in the main method. Taylor series expansion is employed for the calculation of y_{n+1}, y_{n+2}, y_{n+u} and their higher derivatives. Evaluation of the resulting method at x = x_{n+k} for any value of u in the specified open interval yields a particular discrete scheme as a special case of the method. The efficiency of the method is tested on some general initial value problems of third-order ordinary differential equations.

13.
A compact tubular sensor based on NASICON (sodium super ionic conductor) with a V2O5-doped TiO2 sensing electrode was designed for the detection of SO2. In order to reduce the size of the sensor, a thick film of NASICON was formed on the outer surface of a small Al2O3 tube; furthermore, a thin layer of nanometer-sized V2O5-doped TiO2 was attached on the NASICON as a sensing electrode. This paper investigates the influence of V2O5 doping and sintering temperature on the characteristics of the sensor. The sensor with the 5 wt% V2O5-doped TiO2 electrode sintered at 600 °C exhibited excellent sensing properties to 1–50 ppm SO2 in air at 200–400 °C. The EMF value of the sensor was almost proportional to the logarithm of the SO2 concentration, and the sensitivity (slope) was −78 mV/decade at 300 °C. The sensor also showed good selectivity to SO2 against NO, NO2, CH4, CO, NH3 and CO2. Moreover, the sensor responded quickly to SO2: the 90% response time to 50 ppm SO2 was 10 s, and the recovery time was 35 s. On the basis of XPS analysis of the SO2-adsorbed sensing electrode, a sensing mechanism involving the mixed potential at the sensing electrode was proposed.

14.
In the framework of the general problem of clarifying the stability of the zero solution of the equation x(n) = a_1 x(n − m) − a_2 x(n − k) with delays k and m, some partial problems are solved. An appreciable dependence of the stability on the divisibility of one delay by the other is revealed.
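Reading the equation as a linear difference equation with integer delays (an assumption, since the abstract does not fix the setting), stability of the zero solution can be checked numerically from the roots of the characteristic polynomial, as in this Python sketch; the example coefficients are arbitrary.

```python
import numpy as np

def is_stable(a1, a2, m, k):
    """Zero solution of x(n) = a1*x(n-m) - a2*x(n-k), read as a linear
    difference equation, is asymptotically stable iff all roots of the
    characteristic polynomial z^d - a1*z^(d-m) + a2*z^(d-k), d = max(m, k),
    lie strictly inside the unit circle."""
    d = max(m, k)
    coeffs = np.zeros(d + 1)     # coeffs[i] multiplies z^(d - i)
    coeffs[0] = 1.0              # z^d
    coeffs[m] -= a1              # -a1 * z^(d - m)
    coeffs[k] += a2              # +a2 * z^(d - k)
    return bool(np.all(np.abs(np.roots(coeffs)) < 1.0))

# The abstract notes that stability depends on whether one delay divides the
# other; compare e.g. (m, k) = (2, 4) with (2, 5) for the same coefficients.
print(is_stable(0.6, 0.5, 2, 4), is_stable(0.6, 0.5, 2, 5))
```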

15.
16.
In this paper we analyze the average-case performance of the Modified Harmonic algorithm for on-line bin packing. We first analyze the average-case performance for an arbitrary distribution of item sizes over (0,1]. This analysis is based on the following result. Let f_1 and f_2 be two linear combinations of random variables {N_i}_{i=1}^{k}, where the N_i's have a joint multinomial distribution with Σ_{i=1}^{k} N_i = n. Let E(f_1) ≠ 0 and E(f_2) ≠ 0. Then lim_{n→∞} E(max(f_1, f_2))/n = lim_{n→∞} max(E(f_1), E(f_2))/n. We then consider the special case when the item sizes are uniformly distributed over (0,1]. For specific values of the parameters, the Modified Harmonic algorithm turns out to be better than the other two linear-time on-line algorithms, Next Fit and Harmonic, in both the worst case and the average case. We also obtain optimal values for the parameters of the algorithm from the average-case standpoint. For these values of the parameters, the average-case performance ratio is less than 1.19. This compares well with the performance ratios 1.333… and 1.2865… of the Next Fit algorithm and the Harmonic algorithm, respectively.
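For the uniform case, the average-case behavior of the simpler algorithms is easy to probe by simulation. The Python sketch below implements Next Fit and a plain Harmonic_k (not the Modified Harmonic algorithm analyzed in the paper) and compares the number of bins used against the total-size lower bound on the optimum; the choice k = 6 and the sample sizes are arbitrary.

```python
import random

def next_fit(items):
    bins, space = 0, 0.0
    for x in items:
        if x > space:                # item does not fit: open a new bin
            bins += 1
            space = 1.0
        space -= x
    return bins

def harmonic(items, k=6):
    """Harmonic_k: an item in (1/(j+1), 1/j] goes to a bin reserved for type-j
    items (at most j per bin); items of size <= 1/k share 'small' bins."""
    open_count = [0] * k             # slots left in the open bin of each type
    small_space = 0.0                # room left in the open 'small' bin
    bins = 0
    for x in items:
        j = min(int(1.0 // x), k) if x > 0 else k
        if j >= k:                   # small item, packed next-fit style
            if x > small_space:
                bins += 1
                small_space = 1.0
            small_space -= x
        else:                        # type-j item, j = 1..k-1
            if open_count[j] == 0:
                bins += 1
                open_count[j] = j
            open_count[j] -= 1
    return bins

random.seed(1)
n, trials = 20000, 5
for _ in range(trials):
    items = [random.random() for _ in range(n)]
    opt_lb = sum(items)              # area lower bound on the optimum
    print(round(next_fit(items) / opt_lb, 3), round(harmonic(items) / opt_lb, 3))
```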

17.
Alternating-time temporal logic (ATL) is a logic for reasoning about open computational systems and multi-agent systems. It is well known that ATL model checking is linear in the size of the model. We point out, however, that the size of an ATL model is usually exponential in the number of agents. When the size of models is defined in terms of states and agents rather than transitions, it turns out that the problem is (1) Δ_3^P-complete for concurrent game structures, and (2) Δ_2^P-complete for alternating transition systems. Moreover, for "Positive ATL", which allows negation only at the level of propositions, model checking is (1) Σ_2^P-complete for concurrent game structures, and (2) NP-complete for alternating transition systems. We show a nondeterministic polynomial reduction from checking arbitrary alternating transition systems to checking turn-based transition systems. We also discuss the determinism assumption in alternating transition systems, and show that it can be easily removed. In the second part of the paper, we study the model checking complexity for formulae of ATL with imperfect information (ATL_ir). We show that the problem is Δ_2^P-complete in the number of transitions and the length of the formula (thereby closing a gap in previous work of Schobbens in Electron. Notes Theor. Comput. Sci. 85(2), 2004). Then we take a closer look and use the same fine-structure complexity measure as we did for ATL with perfect information. We get the surprising result that checking formulae of ATL_ir is also Δ_3^P-complete in the general case, and Σ_2^P-complete for "Positive ATL_ir". Thus, model checking agents' abilities for both perfect and imperfect information systems belongs to the same complexity class when a finer-grained analysis is used.

18.
The mean probability of correct classification (Pcr) is calculated over a collection of equiprobable two-class Gaussian problems with a common covariance matrix for each problem. The Bayes minimum-error classification rule, in which the unbiased estimates of the mean vectors and covariance matrices are used in place of the true values, is the classification rule considered. The variation of Pcr with the dimensionality N is investigated for three interesting cases of different complexity. In the first case all the parameters of the class-conditional densities are known. For the second case the common covariance matrix is assumed known and only the mean vectors need to be estimated, while all the parameters need to be estimated in the last case. For these three cases the relationship between Pcr and N is plotted for a specific collection of problems. For the case of finite sample size, peaking of Pcr with N is encountered in most of the cases considered.
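The peaking effect can be reproduced with a small Monte-Carlo experiment. The Python sketch below corresponds roughly to the second case (known identity covariance, estimated means); the decaying mean-separation profile, the sample sizes and the function names are assumptions made for illustration, not the collection of problems used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimated_pcr(dim, n_train=10, n_test=4000, trials=100):
    """Monte-Carlo estimate of the probability of correct classification for
    two equiprobable Gaussian classes with identity covariance, when the class
    means are replaced by sample means from n_train points each.  Extra
    dimensions add little separation but more estimation noise, which is what
    typically produces peaking of Pcr with the dimensionality."""
    mu0 = np.zeros(dim)
    mu1 = 2.0 / np.arange(1, dim + 1)          # decaying separation (assumed)
    correct = 0
    for _ in range(trials):
        m0 = rng.normal(mu0, 1.0, size=(n_train, dim)).mean(axis=0)
        m1 = rng.normal(mu1, 1.0, size=(n_train, dim)).mean(axis=0)
        w = m1 - m0                            # plug-in linear discriminant
        b = -0.5 * w @ (m0 + m1)
        for label, mu in ((0, mu0), (1, mu1)):
            T = rng.normal(mu, 1.0, size=(n_test // 2, dim))
            correct += np.sum((T @ w + b > 0).astype(int) == label)
    return correct / (trials * 2 * (n_test // 2))

for dim in (1, 2, 4, 8, 16, 32, 64):
    print(dim, round(estimated_pcr(dim), 3))
```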

19.
The hypercube is one of the most versatile and efficient interconnection networks (networks for short) so far discovered for parallel computation. Let f denote the number of faulty vertices in an n-cube. This study demonstrates that when f ≤ n − 2, the n-cube contains a fault-free path of length at least 2^n − 2f − 1 (or 2^n − 2f − 2) between two arbitrary vertices of odd (or even) distance. Since an n-cube is a bipartite graph with two partite sets of equal size, the path is the longest possible in the worst case. Furthermore, since the connectivity of an n-cube is n, the n-cube cannot tolerate n − 1 faulty vertices. Hence, our result is optimal.
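The bound can be checked by brute force on a very small instance. The Python sketch below (exhaustive depth-first search, with an arbitrarily chosen faulty vertex) is only a sanity check of the statement, not the constructive argument of the paper.

```python
def longest_path(n, faulty, s, t):
    """Length (number of edges) of a longest fault-free path from s to t in the
    n-cube, by exhaustive depth-first search (feasible only for small n)."""
    alive = set(range(2 ** n)) - set(faulty)
    best = -1

    def dfs(u, visited, length):
        nonlocal best
        if u == t:
            best = max(best, length)
        for i in range(n):
            v = u ^ (1 << i)                 # flip bit i: a hypercube neighbor
            if v in alive and v not in visited:
                visited.add(v)
                dfs(v, visited, length + 1)
                visited.remove(v)

    dfs(s, {s}, 0)
    return best

n, faulty = 4, [0b0110]                      # f = 1 <= n - 2
s, t = 0b0000, 0b0001                        # odd (Hamming) distance
f = len(faulty)
print(longest_path(n, faulty, s, t), 2 ** n - 2 * f - 1)   # bound from the abstract
```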

20.
In a simple multidimensional model we study the possibility of accelerated expansion of a 3-dimensional subspace combined with variation of the effective 4-dimensional constant of gravity within experimental constraints. Multidimensional cosmological solutions with m 2-form fields and l scalar fields are presented. Solutions corresponding to rank-3 Lie algebras are singled out and discussed. Each of the solutions contains two factor spaces: the one-dimensional space M_1 and the Ricci-flat space M_2. A 3-dimensional subspace of M_2 is interpreted as our space. We show that, if at least one of the scalar fields is of phantom nature, there exists a time interval where accelerated expansion of our 3D space is compatible with a small enough variation of the effective gravitational constant G(τ) (τ is the cosmological time). This interval contains the moment τ_0 at which G(τ) has a minimum. Special solutions with three phantom scalar fields are analyzed. It is shown that in the vicinity of τ_0 the time variation of G(τ) decreases in the sequence of Lie algebras A_3, C_3 and B_3 in the family of solutions with asymptotic power-law behavior of the scale factors as τ → ∞. Exact solutions with asymptotically exponential accelerated expansion of the 3D space are also considered.
