Similar Documents
20 similar documents found (search time: 15 ms)
1.
Boundary value problems in two or more variables characterized by partial differential equations can be solved by direct use of the multidimensional Laplace transform. The general theory for obtaining solutions with this technique is developed in this paper through theorems on the Laplace transform in n dimensions, with examples presented for each theorem. Once the basic theorems are established, many useful transform pairs in n variables can be derived. The technique is illustrated by the solution of an electrostatic potential problem.
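The central object in the entry above, a multidimensional Laplace transform, can be sketched numerically for the two-variable case. The quadrature routine, truncation bound, and test function below are illustrative choices, not taken from the paper:

```python
import math

def double_laplace(f, s, p, upper=20.0, n=400):
    """Approximate the two-dimensional Laplace transform
    F(s, p) = int_0^inf int_0^inf f(x, t) * exp(-s*x - p*t) dx dt
    by composite Simpson's rule on the truncated square [0, upper]^2."""
    h = upper / n

    def w(i):  # Simpson weights: 1, 4, 2, 4, ..., 4, 1 (n must be even)
        if i == 0 or i == n:
            return 1.0
        return 4.0 if i % 2 else 2.0

    total = 0.0
    for i in range(n + 1):
        x = i * h
        for j in range(n + 1):
            t = j * h
            total += w(i) * w(j) * f(x, t) * math.exp(-s * x - p * t)
    return total * (h / 3.0) ** 2

# For f(x, t) = exp(-x - t) the transform factorizes:
# F(s, p) = 1 / ((s + 1) * (p + 1)), so F(1, 1) = 0.25.
F = double_laplace(lambda x, t: math.exp(-x - t), s=1.0, p=1.0)
```

For a separable test function like this one, the 2-D transform is the product of two 1-D transforms, which gives a convenient closed-form check on the quadrature.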

2.
Algorithms for solving linear PDEs implemented in modern computer algebra systems are usually limited to equations with two independent variables. In this paper, we propose a generalization of the theory of Laplace transformations to second-order partial differential operators in ℝ³ (and, more generally, ℝⁿ) with a principal symbol decomposable into the product of two factors linear with respect to derivatives. We consider two algorithms of generalized Laplace transformations and describe classes of operators in ℝ³ to which these algorithms are applicable. We correct a mistake in [8] and show that Dini-type transformations are in fact generalized Laplace transformations for operators with coefficients in a skew (noncommutative) Ore field. Keywords: computer algebra, partial differential equations, algorithms for solution.

3.
The general transient linear elastodynamic problem under conditions of plane stress or plane strain is numerically solved by a special finite element method combined with numerical Laplace transform. A rectangular finite element with eight degrees of freedom is constructed on the basis of the governing equations of motion in the Laplace transformed domain. Thus the problem is formulated and numerically solved in the transformed domain and the time domain response is obtained by a numerical inversion of the transformed solution. Viscoelastic material behavior is easily taken into account by invoking the correspondence principle. The method appears to have certain advantages over conventional finite element techniques.

4.
5.
A recursive algorithm is developed for computing the inverse Laplace transform and for solving linear and nonlinear state equations using block-pulse functions. The relationship between the solution of the continuous-time state equation using block-pulse functions and that of the equivalent discrete-time state equation obtained with the trapezoidal rule is investigated. A complete computer program is presented for solving linear and nonlinear state equations using block-pulse functions.
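The connection noted above between the block-pulse solution of a state equation and its trapezoidal-rule discrete-time equivalent can be seen in a minimal scalar sketch; the test problem and step size are arbitrary choices for illustration:

```python
import math

# Scalar test problem x'(t) = a*x(t), x(0) = 1, exact solution exp(a*t).
a, h, steps = -1.0, 0.01, 100  # integrate over [0, 1]

# Expanding the solution in block-pulse functions of width h and integrating
# the state equation over each interval yields the recursion
#   x_{i+1} = (1 + a*h/2) / (1 - a*h/2) * x_i,
# which is exactly the trapezoidal-rule (Tustin) discrete-time equivalent.
x_bp = 1.0
for _ in range(steps):
    x_bp = (1 + a * h / 2) / (1 - a * h / 2) * x_bp

exact = math.exp(a * 1.0)
```

The recursion's growth factor agrees with exp(a*h) to second order in h, so the global error is O(h²), consistent with the trapezoidal rule.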

6.
The time dependence of temperatures as solutions of transient heat conduction problems may be obtained by different numerical techniques; three procedures are presented here. Step-by-step methods based on (i) finite elements and (ii) finite differences in time are briefly reviewed, and (iii) the application of the numerical Laplace transform is extensively discussed, together with its introduction into a finite element program. The accuracy and convergence of the numerical results are discussed, and a practical engineering problem is solved for which the computer expenses are compared.

7.
A boundary element method (BEM) for the two-dimensional analysis of structures with stationary cracks subjected to dynamic loads is presented. The difficulties in modelling the structures with cracks by BEM are solved by using two different equations for coincident points on the crack surfaces. The equations are the displacement and the traction boundary integral equations. This method of analysis requires discretization of the boundary and the crack surfaces only. The time-dependent solutions are obtained by the Laplace transform method, which is used to solve several examples. The influence of the number of boundary elements and the number of Laplace parameters is investigated and a comparison with other reported solutions is shown.
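The abstract does not say which numerical inversion it uses to return from the Laplace domain to the time domain; one common choice for this step is the Gaver-Stehfest algorithm, sketched here on a transform with a known inverse:

```python
import math

def stehfest_invert(F, t, N=12):
    """Gaver-Stehfest numerical inversion of a Laplace transform F(s) at
    time t > 0. N must be even; the method works well for smooth f(t)."""
    ln2 = math.log(2.0)
    half = N // 2
    total = 0.0
    for k in range(1, N + 1):
        v = 0.0  # Stehfest weight V_k
        for j in range((k + 1) // 2, min(k, half) + 1):
            v += (j ** half * math.factorial(2 * j)
                  / (math.factorial(half - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        total += (-1) ** (k + half) * v * F(k * ln2 / t)
    return total * ln2 / t

# F(s) = 1/(s + 1) is the transform of f(t) = exp(-t).
f1 = stehfest_invert(lambda s: 1.0 / (s + 1.0), t=1.0)
```

The method only evaluates F(s) at real s, which keeps it simple, but the alternating weights grow quickly with N, so N is usually kept below about 18 in double precision.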

8.

In this paper, a different cryptographic method based on a power series transform is introduced, and a new encryption algorithm is produced. The extended Laplace transform of the exponential function is used to encode the plaintext, and the key is generated by applying modular arithmetic to the coefficients obtained in the transformation. ASCII codes are used to hide the mathematically generated keys in order to strengthen the encryption, and text steganography makes the password harder to break. The encryption is further reinforced by image steganography: to hide the presence of the ciphertext, it is embedded in another open text by a steganographic method, and this text is in turn buried in an image. For decryption, the inverse of the power series transform can be applied directly. Experimental results are obtained from a simulation of the proposed method, and it is concluded that the proposed method can be used in crypto machines.
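The paper's exact algorithm is not reproduced here; the toy sketch below only illustrates the general idea of series-transform encryption. Each letter is scaled by the factorial factor that the Laplace transform L{t^i} = i!/s^(i+1) attaches to the i-th series coefficient, then split into a mod-26 ciphertext digit and a quotient key:

```python
import math

def encrypt(text):
    """Toy series-transform cipher: scale the i-th letter (0-25) by (i+1)!,
    then split the result into ciphertext = q mod 26 and key = q // 26."""
    cipher, key = [], []
    for i, ch in enumerate(text):
        q = (ord(ch) - ord('a')) * math.factorial(i + 1)
        cipher.append(q % 26)
        key.append(q // 26)
    return cipher, key

def decrypt(cipher, key):
    """Rebuild q = c + 26*k, then divide out the factorial factor."""
    out = []
    for i, (c, k) in enumerate(zip(cipher, key)):
        q = c + 26 * k
        out.append(chr(q // math.factorial(i + 1) + ord('a')))
    return ''.join(out)

cipher, key = encrypt("hello")
```

Note that this sketch offers no real security (the key carries most of the information); it only demonstrates the mechanics of coefficient scaling plus modular splitting described in the abstract.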


9.
10.
A new fourth-order semi-discrete central-upwind difference method for solving hyperbolic conservation laws (and systems thereof) is proposed. The spatial derivative terms are discretized with a fourth-order CWENO (central weighted essentially non-oscillatory) reconstruction, so the new method attains higher resolution along with higher accuracy. The numerical viscosity produced by this method is smaller than that of staggered central schemes, and since the numerical viscosity is independent of the time step, the time step can be taken as small as stability requires.

11.
Traditional term weighting schemes in text categorization, such as TF-IDF, exploit only the statistical information of terms in documents. In this paper, we instead propose a novel term weighting scheme that exploits the semantics of categories and indexing terms. Specifically, the semantics of categories are represented by the senses of terms appearing in the category labels, together with their interpretation in WordNet, and the weight of a term is correlated with its semantic similarity to a category. Experimental results on three commonly used data sets show that the proposed approach outperforms TF-IDF when the amount of training data is small or the content of documents is focused on well-defined categories. In addition, the proposed approach compares favorably with two previous studies.
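The TF-IDF baseline that the proposed semantic scheme is compared against can be computed in a few lines; the corpus below is a made-up example:

```python
import math
from collections import Counter

def tfidf(docs):
    """Classic TF-IDF: weight(t, d) = tf(t, d) * log(N / df(t)),
    where tf is the raw count in document d and df the number of
    documents containing t."""
    N = len(docs)
    df = Counter()                      # document frequency of each term
    for d in docs:
        df.update(set(d))
    return [{t: tf * math.log(N / df[t]) for t, tf in Counter(d).items()}
            for d in docs]

docs = [["cat", "sat", "mat"], ["cat", "cat", "dog"], ["dog", "barks"]]
W = tfidf(docs)
```

A term appearing in every document gets weight zero (log 1 = 0), which is exactly the "statistics only" behavior the paper argues is insufficient when categories have well-defined semantics.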

12.
Neural Computing and Applications - Term weighting is a well-known preprocessing step in text classification that assigns appropriate weights to each term in all documents to enhance the...

13.
Term weighting is a strategy that assigns weights to terms to improve the performance of sentiment analysis and other text mining tasks. In this paper, we propose a supervised term weighting scheme based on two basic factors: the importance of a term in a document (ITD) and the importance of a term for expressing sentiment (ITS). For ITD, we explore three definitions based on term frequency. Then, seven statistical functions are employed to learn the ITS of each term from training documents with category labels. Compared with previous unsupervised term weighting schemes originating from information retrieval, our scheme can make full use of the available labeling information to assign appropriate weights to terms. We have experimentally evaluated the proposed method against a state-of-the-art method; the results show that our method outperforms it and produces the best accuracy on two of the three data sets.

14.
A hybrid quasi-Newton/particle-swarm algorithm for solving systems of nonlinear equations  (total citations: 3; self-citations: 2; citations by others: 3)
Combining the advantages of the particle swarm optimization algorithm and the quasi-Newton method, a hybrid algorithm for solving systems of nonlinear equations is proposed. The hybrid algorithm fully exploits the population-based global search of particle swarm optimization and the fine local search of the quasi-Newton method, while overcoming both the drop in search efficiency of particle swarm optimization in later iterations and the sensitivity of the quasi-Newton method to the initial point. Numerical experiments show that the designed hybrid algorithm has excellent stability as well as high convergence speed and accuracy.
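A minimal sketch of such a hybrid, with plain PSO as the global phase and a full Newton step with the analytic Jacobian standing in for the paper's quasi-Newton update; the 2-by-2 test system and all tuning constants are hypothetical choices:

```python
import math
import random

# Hypothetical test system: f1 = x^2 + y^2 - 1 = 0, f2 = x - y = 0.
def residual(p):
    x, y = p
    return (x * x + y * y - 1.0, x - y)

def cost(p):
    f1, f2 = residual(p)
    return f1 * f1 + f2 * f2

def pso(n=20, iters=60, lo=-2.0, hi=2.0):
    """Global phase: plain particle swarm search for a low-cost start point."""
    random.seed(0)
    pos = [[random.uniform(lo, hi), random.uniform(lo, hi)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=cost)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
                if cost(pos[i]) < cost(gbest):
                    gbest = pos[i][:]
    return gbest

def newton_refine(p, steps=20):
    """Local phase: Newton iteration with the analytic Jacobian
    J = [[2x, 2y], [1, -1]], solved for the update by Cramer's rule."""
    x, y = p
    for _ in range(steps):
        f1, f2 = residual((x, y))
        det = -2.0 * x - 2.0 * y
        if abs(det) < 1e-12:        # singular Jacobian; give up refining
            break
        dx = (f1 + 2.0 * y * f2) / det
        dy = (f1 - 2.0 * x * f2) / det
        x, y = x + dx, y + dy
    return x, y

x, y = newton_refine(pso())
```

The division of labor mirrors the abstract: the swarm supplies a good starting point so the local method's sensitivity to initialization does not matter, and the local method supplies the fast final convergence the swarm lacks.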

15.
Physical data layout is a crucial factor in the performance of queries and updates in large data warehouses. Data layout enhances and complements other performance features such as materialized views and dynamic caching of aggregated results. Prior work has identified that the multidimensional nature of large data warehouses imposes natural restrictions on the query workload. A method based on a “uniform” query class approach has been proposed for data clustering and shown to be optimal. However, we believe that realistic query workloads will exhibit data access skew. For instance, if time is a dimension in the data model, then more queries are likely to focus on the most recent time interval. The query class approach does not adequately model the possibility of multidimensional data access skew. We propose the affinity graph model for capturing workload characteristics in the presence of access skew and describe an efficient algorithm for physical data layout. Our proposed algorithm considers declustering and load balancing issues which are inherent to the multidisk data layout problem. We demonstrate the validity of this approach experimentally.

16.
We propose an enhanced concurrency control algorithm that maximizes the concurrency of multidimensional index structures. The factors that deteriorate the concurrency of index structures are node splits and minimum bounding region (MBR) updates. The properties of our concurrency control algorithm are as follows. First, to increase concurrency by avoiding lock coupling during MBR updates, we propose the PLC (partial lock coupling) technique. Second, a new MBR update method is proposed that allows searchers to access nodes where MBR updates are being performed. Finally, our algorithm holds exclusive latches not for the whole split time but only during the physical node split, which occupies a small part of the whole split process. For performance evaluation, we implement the proposed concurrency control algorithm and one of the existing link-technique-based algorithms on MIDAS-III, the storage system of the BADA-IV DBMS. We show through various experiments that our proposed algorithm outperforms the existing algorithm in terms of throughput and response time. We also propose a recovery protocol for the proposed concurrency control algorithm, designed to ensure high concurrency and fast recovery.

17.
A clustering procedure called HICAP (HIstogram Cluster Analysis Procedure) was developed to perform an unsupervised classification of multidimensional image data. The clustering approach used in HICAP is based upon an algorithm described by Narendra and Goldberg to classify four-dimensional Landsat Multispectral Scanner data. HICAP incorporates two major modifications to the scheme by Narendra and Goldberg. The first modification is that HICAP is generalized to process up to 32-bit data with an arbitrary number of dimensions. The second modification is that HICAP uses more efficient algorithms to implement the clustering approach described by Narendra and Goldberg,(1) so the HICAP classification requires less computation, although it is otherwise identical to the original classification. The computational savings afforded by HICAP increase with the number of dimensions in the data.

18.
Multimedia Tools and Applications - A novel class-dependent joint weighting method is proposed to mine the key skeletal joints for human action recognition. Existing deep learning methods or those...

19.
Computers & Fluids, 2002, 31(4-7): 639-661
Dissipative compact schemes are constructed for multidimensional hyperbolic problems. High-order accuracy is not obtained for each space derivative, but for the whole residual, which avoids any linear algebra. Numerical dissipation is also residual based, i.e. constructed from derivatives of the residual only, which provides simplicity and robustness. High accuracy and efficiency are checked on 2-D and 3-D model problems. Various applications to the compressible Euler equations without and with shock waves are presented.

20.
A contour-based scheme for near-lossless shape coding is proposed, aiming at high coding efficiency. For a given shape image, object contours are first extracted and then thinned to a perfect single-pixel width. Next, they are transformed into a chain-based representation and divided into chain segments based on link directions. Third, two fundamental coding modes are designed to encode the different types of chain segments, where the spatial correlations within object contours are analyzed and exploited to improve the coding efficiency as much as possible. Finally, a fast and efficient mode selection method is introduced to select, for each chain segment, whichever of the two modes produces the shorter code. Experimental results show that the proposed scheme is considerably more efficient than existing techniques.
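The chain-based representation mentioned above can be illustrated with the classic 8-directional Freeman chain code; the direction numbering and the square contour below are illustrative assumptions, not the paper's exact encoding:

```python
# 8-connected Freeman directions: 0 = E, 1 = NE, 2 = N, ..., 7 = SE
# (x grows to the right, y grows upward).
DIRS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

def to_chain(points):
    """Encode a single-pixel-wide contour (a list of 8-adjacent pixels) as a
    Freeman chain code: one 3-bit direction symbol per link instead of a
    coordinate pair per pixel."""
    return [DIRS.index((x1 - x0, y1 - y0))
            for (x0, y0), (x1, y1) in zip(points, points[1:])]

def from_chain(start, code):
    """Decode: walk from the start pixel along the stored directions."""
    pts = [start]
    for c in code:
        dx, dy = DIRS[c]
        pts.append((pts[-1][0] + dx, pts[-1][1] + dy))
    return pts

# A closed 2x2 square contour traced counter-clockwise from the origin.
square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1), (0, 0)]
chain = to_chain(square)
```

Runs of identical symbols in the resulting chain correspond to straight contour segments, which is exactly the direction-based segmentation the abstract exploits before choosing a coding mode per segment.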


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号