Landslides are a major geo-environmental hazard that poses a serious threat to lives and property. Slope failures result from adverse inherent geological conditions triggered by external factors. This paper proposes a new method for predicting the displacement of step-like landslides that accounts for the controlling factors, using the recently proposed extreme learning adaptive neuro-fuzzy inference system (ELANFIS) combined with the empirical mode decomposition (EMD) technique. ELANFIS reduces the computational complexity of conventional ANFIS by incorporating the theoretical idea of extreme learning machines (ELM). Rainfall data and reservoir level elevation data are also integrated into the study. The nonlinear original landslide displacement series, rainfall data, and reservoir level elevation data are first decomposed into a finite number of intrinsic mode functions (IMFs) and one residue. Each decomposed component is then predicted by an appropriate ELANFIS model, and the final prediction is obtained by summing the outputs of all ELANFIS sub-models. The performance of the proposed technique is tested on predictions for the Baishuihe and Shiliushubao landslides. The results show that the ELANFIS-with-EMD model outperforms other methods in terms of generalization performance.
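The decompose-predict-recombine pipeline described in this abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: it assumes the third-party PyEMD package for the decomposition, substitutes a generic kernel regressor for ELANFIS (which has no public reference implementation), and uses the displacement series alone, omitting the rainfall and reservoir-level inputs for brevity.

```python
import numpy as np
from PyEMD import EMD        # assumed third-party EMD implementation
from sklearn.svm import SVR  # stand-in for ELANFIS sub-models

def predict_displacement(series, lags=6, horizon=1):
    """Decompose a displacement series with EMD, fit one model per
    component, and sum the component forecasts (sketch only)."""
    emd = EMD()
    components = emd(series)   # rows sum back to the input (IMFs and residue)
    total = 0.0
    for comp in components:
        # Build lagged input/output pairs for this component.
        X = np.array([comp[i:i + lags]
                      for i in range(len(comp) - lags - horizon + 1)])
        y = comp[lags + horizon - 1:]
        model = SVR().fit(X, y)                       # one sub-model per component
        total += model.predict(comp[-lags:].reshape(1, -1))[0]
    return total              # final prediction = sum of sub-model outputs
```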
We present a Fortran library which can be used to solve large-scale dense linear systems, Ax=b. The library is based on the LU decomposition included in the parallel linear algebra library PLAPACK and on its out-of-core extension POOCLAPACK. The library is complemented with a code which calculates the self-polarization charges and self-energy potential of axially symmetric nanostructures, following an induced charge computation method. Illustrative calculations are provided for hybrid semiconductor–quasi-metal zero-dimensional nanostructures. In these systems, the numerical integration of the self-polarization equations requires using a very fine mesh. This translates into very large and dense linear systems, which we solve for ranks up to 3×10⁵. It is shown that the self-energy potential on the semiconductor–metal interface has important effects on the electronic wavefunction.
Program summary
Program title: HDSS (Huge Dense System Solver)
Catalogue identifier: AEHU_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHU_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 98 889
No. of bytes in distributed program, including test data, etc.: 1 009 622
Distribution format: tar.gz
Programming language: Fortran 90, C
Computer: Parallel architectures: multiprocessors, computer clusters
Operating system: Linux/Unix
Has the code been vectorized or parallelized?: Yes. 4 processors used in the sample tests; tested from 1 to 288 processors
RAM: 2 GB for the sample tests; tested for up to 80 GB
Classification: 7.3
External routines: MPI, BLAS, PLAPACK, POOCLAPACK. PLAPACK and POOCLAPACK are included in the distribution file.
Nature of problem: Huge-scale dense systems of linear equations, Ax=B, beyond standard LAPACK capabilities. Application to calculations of the self-energy potential in dielectrically mismatched semiconductor quantum dots.
Solution method: The linear systems are solved by means of parallelized routines based on the LU factorization, using efficient secondary-storage algorithms when the available main memory is insufficient. The self-energy solver relies on an induced charge computation method. The differential equation is discretized to yield linear systems of equations, which are then solved by calling the HDSS library.
Restrictions: Single precision. For the self-energy solver, axially symmetric systems must be considered.
Running time: About 32 minutes to solve a system with approximately 100 000 equations and more than 6000 right-hand side vectors on a four-node commodity cluster with a total of 32 Intel cores.
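At small scale, the factor-once, solve-many pattern that HDSS parallelizes and moves out-of-core can be illustrated with LAPACK's dense LU via SciPy. This is an in-core analogue for intuition only, not the library's Fortran interface; the sizes are toy values.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve  # LAPACK dense LU (getrf/getrs)

rng = np.random.default_rng(0)
n, nrhs = 1000, 64            # toy sizes; HDSS targets ranks near 3e5 out-of-core
A = rng.standard_normal((n, n)).astype(np.float32)    # single precision, as in HDSS
B = rng.standard_normal((n, nrhs)).astype(np.float32)

lu, piv = lu_factor(A)        # factor A = P L U once (the O(n^3) step)
X = lu_solve((lu, piv), B)    # reuse the factors for all right-hand sides

print(np.allclose(A @ X, B, atol=1e-2))   # loose tolerance for float32
```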
The implementation of product development process management (PDPM) is an effective means of developing products of higher quality in a shorter lead time. This paper argues that product, data, person, and activity are the basic factors in PDPM. Based on a detailed analysis of these factors and their relations in the product development process, all product development activities are treated as tasks, and the management of the product development process is regarded as the management of task execution. A task-decomposition-based product development model is proposed, with methods for constructing the task relation matrix from the layer model and the constraint model that result from task decomposition. An algorithm for constructing a directed task graph is given and used in the management of tasks. Finally, the usage and limitations of the proposed PDPM model are discussed, and further work is proposed.
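A directed task graph derived from a task relation matrix can be sketched as below. The matrix encoding is an assumption (the paper's convention is not reproduced here): relation[i][j] == 1 is read as "task i must precede task j", and the example tasks are hypothetical. Ordering the graph topologically then yields a valid execution sequence for task management.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

def build_task_graph(relation):
    """Build a directed task graph from a task relation matrix.
    Maps each task to the set of tasks that must precede it."""
    n = len(relation)
    return {j: {i for i in range(n) if relation[i][j]} for j in range(n)}

# Hypothetical 4-task example: 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3
relation = [[0, 1, 1, 0],
            [0, 0, 0, 1],
            [0, 0, 0, 1],
            [0, 0, 0, 0]]
order = list(TopologicalSorter(build_task_graph(relation)).static_order())
print(order)   # one valid execution order, e.g. [0, 1, 2, 3]
```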
Many important science and engineering applications, such as regulating the temperature distribution over a semiconductor wafer and controlling the noise from a photocopy machine, require interpreting distributed data and designing decentralized controllers for spatially distributed systems. Developing effective computational techniques for representing and reasoning about these systems, which are usually modeled with partial differential equations (PDEs), is one of the major challenge problems for qualitative and spatial reasoning research.
This paper introduces a novel approach to decentralized control design, influence-based model decomposition, and applies it in the context of thermal regulation. Influence-based model decomposition uses a decentralized model, called an influence graph, as a key data abstraction representing influences of controls on distributed physical fields. It serves as the basis for novel algorithms for control placement and parameter design for distributed systems with large numbers of coupled variables. These algorithms exploit physical knowledge of locality, linear superposability, and continuity, encapsulated in influence graphs representing dependencies of field nodes on control nodes. The control placement design algorithms utilize influence graphs to decompose a problem domain so as to decouple the resulting regions. The decentralized control parameter optimization algorithms utilize influence graphs to efficiently evaluate thermal fields and to explicitly trade off computation, communication, and control quality. By leveraging the physical knowledge encapsulated in influence graphs, these control design algorithms are more efficient than standard techniques, and produce designs explainable in terms of problem structures.
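The linear-superposability property that these algorithms exploit can be illustrated with a small sketch: if each control contributes a known influence field, the total field at every node is the weighted sum of those contributions. The dense-matrix encoding of an influence graph and the example numbers below are hypothetical, chosen only to show the field-evaluation step.

```python
import numpy as np

def evaluate_field(influence, u, ambient=0.0):
    """Evaluate a steady-state thermal field by linear superposition.
    influence[i, j] holds the temperature rise at field node i per unit
    power of control j (a hypothetical matrix encoding of an influence
    graph); u is the vector of control powers."""
    return ambient + influence @ u

# Hypothetical example: 5 field nodes, 2 heaters.
G = np.array([[0.9, 0.1],
              [0.6, 0.2],
              [0.3, 0.3],
              [0.2, 0.6],
              [0.1, 0.9]])
u = np.array([10.0, 5.0])                 # control powers
print(evaluate_field(G, u, ambient=20.0)) # temperature at each field node
```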
Necessary and sufficient conditions for the matrix equation X + AᵀX⁻²A = I to have a real symmetric positive definite solution X are derived. Based on these conditions, some properties of the matrix A as well as relations between the solution X and A are derived.
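When a positive definite solution exists, one elementary way to approximate it numerically is the naive fixed-point iteration X ← I − AᵀX⁻²A starting from X = I. This is a textbook scheme from the literature on this equation, not the paper's own construction, and it converges only under conditions such as those the paper derives (roughly, ‖A‖ small enough).

```python
import numpy as np

def solve_fixed_point(A, tol=1e-10, max_iter=500):
    """Approximate a symmetric positive definite solution of
    X + A^T X^{-2} A = I by iterating X <- I - A^T X^{-2} A from X = I
    (a simple illustrative scheme; convergence is not guaranteed)."""
    n = A.shape[0]
    I = np.eye(n)
    X = I.copy()
    for _ in range(max_iter):
        X_inv = np.linalg.inv(X)
        X_new = I - A.T @ X_inv @ X_inv @ A   # X^{-2} = X^{-1} X^{-1}
        if np.linalg.norm(X_new - X) < tol:
            return X_new
        X = X_new
    raise RuntimeError("no convergence; a positive definite solution "
                       "may not exist for this A")

# Hypothetical small example with ||A|| small enough to converge.
A = 0.2 * np.array([[1.0, 0.3], [0.0, 1.0]])
X = solve_fixed_point(A)
print(np.allclose(X + A.T @ np.linalg.inv(X @ X) @ A, np.eye(2)))  # True
```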
The main issues in SVD-based image watermarking are the false-positive problem, robustness, and transparency. The first can be overcome by embedding the principal components of the watermark into the host image; the latter two depend on how large a quantity (i.e., the scaling factor) of the principal components is embedded. In existing methods the scaling factor is a fixed value; in fact, it is image-dependent: different watermarks need different scaling factors even when they are embedded in the same host image. In this paper, two methods are proposed to improve reliability and robustness. To improve reliability, the first method embeds the principal components of the watermark into the host image in the discrete cosine transform (DCT) domain, and the second method embeds them in the discrete wavelet transform (DWT) domain. To improve robustness, particle swarm optimization (PSO) is used to find suitable scaling factors. The experimental results demonstrate that the proposed methods outperform the existing ones.
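The DCT-domain variant of the embedding step can be sketched as follows. This is a simplified illustration, not the paper's exact algorithm: it embeds the principal components (U·S of the watermark's SVD) additively into a low-frequency block of a single global DCT, with a fixed scaling factor alpha standing in for the per-image value that PSO would select; all data below are hypothetical.

```python
import numpy as np
from scipy.fft import dctn, idctn   # 2-D DCT for the embedding domain

def embed(host, watermark, alpha):
    """Embed the watermark's principal components in the DCT domain
    (sketch; alpha would be tuned per image by PSO in the paper)."""
    U, s, Vt = np.linalg.svd(watermark, full_matrices=False)
    pc = U @ np.diag(s)                # principal components U*S
    C = dctn(host, norm="ortho")
    h, w = pc.shape
    C[:h, :w] += alpha * pc            # additive embedding, low-frequency block
    return idctn(C, norm="ortho"), Vt  # Vt is kept as the extraction key

# Hypothetical toy data: 64x64 host, 16x16 binary watermark.
rng = np.random.default_rng(1)
host = rng.uniform(0, 255, (64, 64))
wm = rng.integers(0, 2, (16, 16)).astype(float)
marked, key = embed(host, wm, alpha=0.05)
print(np.abs(marked - host).max())     # distortion grows with alpha
```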