Similar Documents
20 similar documents found (search time: 15 ms)
1.
Non-isothermal two-phase flow in low-permeable porous media (cited 5 times: 1 self-citation, 4 by others)
In this paper, we consider non-isothermal two-phase flow of two components (air and water) in gaseous and liquid phases in extremely low-permeable porous media through the use of the finite element method (FEM). Interphase mass transfer of the components between any of the phases is evaluated by assuming local thermodynamic equilibrium between the phases. Heat transfer occurs by conduction and multiphase advection. General equations of state for phase changes (the Clausius–Clapeyron equation and Henry's law) as well as multiphase properties of the low-permeable bentonites are implemented in the code. Additionally, we consider the impact of swelling/shrinking processes on porosity and permeability changes. The numerical model is implemented in the context of the simulator RockFlow/RockMech (RF/RM), which is based on object-oriented programming techniques. The finite element formulations are written in terms of dimensionless quantities. This has proved to be advantageous for preconditioning composite system matrices of coupled multi-field problems. Three application examples are presented. The first one examines differences between the Richards approximation and the multicomponent/multiphase approach, and between two numerical coupling schemes. The second example serves as a partial verification against experimental results and demonstrates coherence between different element types. The last example shows simultaneous desaturation and resaturation in one system. The work presented in this paper was partially funded by the Federal Institute of Geosciences and Natural Resources (BGR). We thank Dr. Wallner and Dr. Shao from the BGR for their support of this research. Furthermore, we wish to acknowledge the Deutsche Forschungsgemeinschaft (DFG) for their interest in and support of this work.
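For reference, the phase-change relations named above take the following textbook forms (the paper's exact formulation and notation may differ): the Clausius–Clapeyron equation for the saturation vapour pressure and Henry's law for the dissolution of air in the liquid phase,

$$\frac{dp_{\mathrm{sat}}}{dT} = \frac{h_{lv}\,p_{\mathrm{sat}}}{R_v T^2}, \qquad x_{\mathrm{air}}^{\,l} = \frac{p_{\mathrm{air}}^{\,g}}{K_H(T)},$$

where $h_{lv}$ is the specific latent heat of vaporization, $R_v$ the specific gas constant of water vapour, and $K_H(T)$ the temperature-dependent Henry coefficient.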

2.
This paper reviews the main achievements of the Targeted Project Special Materials for Advanced Technology (TP SMAT, 1989–1993). It describes the analysis of funding provided by TP SMAT and the most important results of TP SMAT with reference to both the scientific publications issued (together with the relative impact factor) and the technology demonstrators produced. Human skills developed and new facilities are also taken into account. It also analyzes a new way of managing a Targeted Project based upon a cross-functional process, where timing and mode of development are merged in a single goal: an innovative electrical car (Zero Impact Car, ZIC). In this process the various topics were combined, with typical concurrent engineering tools, in a completely harmonious way in order to produce a demonstrator of new materials, technologies and integrated design (i.e., design of both structures and microstructures/materials). This new management model is the so-called ZIC paradigm. Author roles: Chairman of the CNR National Committee of Experts for Chemical Sciences; Chairman of the Commission for the feasibility study of the Targeted Project on Special Materials for Advanced Technology II; Director of the Targeted Project on Special Materials for Advanced Technology I; Management staff for the Targeted Project on Special Materials for Advanced Technology I.

3.
Effective Homology is an algebraic-topological method based on the computational concept of chain homotopy equivalence on a cell complex. Using this algebraic data structure, Effective Homology gives answers to some important computability problems in Algebraic Topology. In a discrete context, Effective Homology can be seen as a combinatorial layer given by a forest graph structure spanning every cell of the complex. In this paper, taking a pixel-based 2D binary object as input, we present a logarithmic-time uniform solution for describing a chain homotopy operator $\phi$ for its adjacency graph. This solution is based on Membrane Computing techniques applied to the spanning forest problem, and it can be easily extended to higher dimensions.

4.
We discuss a number of topics in phonon physics that are relevant to the design of low temperature phonon-based dark matter detectors. These topics include the generation of primæval phonons by the nuclear recoil, the processes by which these phonons decay into lower energy phonons, and the ballistic propagation of these excitations to the sensors on the crystal surface.

5.
A knowledge organization system (KOS) can help reveal the deep knowledge structure of a patent document set. Compared to classification code systems, a personalized KOS made up of topics can represent the technology information in a more agile, detailed manner. This paper presents an approach to automatically construct a KOS of patent documents based on term clumping, the Latent Dirichlet Allocation (LDA) model, K-Means clustering and Principal Components Analysis (PCA). Term clumping is adopted to generate a better bag-of-words for topic modeling, and the LDA model is applied to generate raw topics. Then, by iteratively using K-Means clustering and PCA on the document set and topic matrix, we generate new upper-level topics and compute the relationships between topics to construct the KOS. Finally, documents are mapped to the KOS. The nodes of the KOS are topics, represented by terms and their weights, and the leaves are patent documents. We evaluated the approach on a set of Large Aperture Optical Elements (LAOE) patent documents as an empirical study and constructed the LAOE KOS. The method discovered deep semantic relationships between the topics and helped better describe the technology themes of LAOE. Based on the KOS, two types of applications were implemented: the automatic classification of patent documents and the categorical refinement of search results.
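A minimal sketch of the LDA / K-Means / PCA portion of such a pipeline, written with scikit-learn; term clumping is reduced here to plain bag-of-words extraction, and all documents, parameter values and variable names are illustrative rather than taken from the paper.

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation, PCA
from sklearn.cluster import KMeans

docs = [
    "large aperture optical element coating damage threshold testing",
    "laser induced damage of fused silica optics and surface polishing",
    "optical element mounting frame and alignment mechanism design",
    "anti reflection coating deposition process for large optics",
]

vec = CountVectorizer(stop_words="english")        # stands in for the term-clumping step
X = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=6, random_state=0)
doc_topic = lda.fit_transform(X)                   # documents x raw topics (the KOS leaves)
topic_term = lda.components_                       # raw topics x terms (topic nodes with term weights)

# Upper-level topics: compress the topic-term matrix with PCA, then cluster with K-Means
reduced = PCA(n_components=3, random_state=0).fit_transform(topic_term)
upper = KMeans(n_clusters=2, random_state=0, n_init=10).fit_predict(reduced)

print(upper)                                       # upper[i] = upper topic of raw topic i
print(np.argmax(doc_topic, axis=1))                # dominant raw topic of each document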

6.
7.
A minimum distance decoding algorithm for non-binary first order Reed-Muller codes is described. The suggested decoding is based on a generalization of the fast Hadamard transform to the non-binary case. We also propose a fast decoding algorithm for non-binary first order Reed-Muller codes with complexity proportional to the length of the code. This algorithm provides decoding within the limits guaranteed by the minimum distance of the code. Partly supported by the Guastallo Fellowship. This work was presented in part at the 9th International Symposium on Applied Algebra, Algebraic Algorithms and Error-Correcting Codes, New Orleans, USA, October 1991.
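For orientation only, the binary special case illustrates the principle (the paper's contribution is the non-binary generalization): a first order Reed-Muller code RM(1, m) can be minimum-distance decoded with a fast Hadamard transform in O(n log n) operations, as in this sketch.

import numpy as np

def fast_hadamard(v):
    # Unnormalized fast Walsh-Hadamard transform, O(n log n) for n a power of two.
    v = np.asarray(v, dtype=float).copy()
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            a = v[i:i + h].copy()
            b = v[i + h:i + 2 * h].copy()
            v[i:i + h] = a + b
            v[i + h:i + 2 * h] = a - b
        h *= 2
    return v

def decode_rm1(received_bits):
    # Minimum-distance decoding of binary RM(1, m): the Walsh coefficient with the
    # largest magnitude identifies the linear part, and its sign the constant bit.
    y = 1.0 - 2.0 * np.asarray(received_bits)      # map bits 0/1 to +1/-1
    w = fast_hadamard(y)
    k = int(np.argmax(np.abs(w)))
    m = int(np.log2(len(y)))
    const_bit = 0 if w[k] > 0 else 1
    linear_bits = [(k >> j) & 1 for j in range(m)]
    return [const_bit] + linear_bits               # message (a0, a1, ..., am)

print(decode_rm1([0, 1, 0, 1, 0, 1, 0, 1]))        # clean codeword for message (0, 1, 0, 0)
print(decode_rm1([0, 1, 0, 1, 1, 1, 0, 1]))        # same message despite one flipped bit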

8.
9.
We study the problem of the computation of the square-free decomposition for polynomials over fields of positive characteristic. For fields which are explicitly finitely generated over perfect fields, we show how the classical algorithm for characteristic zero can be generalized using multiple derivations. For more general fields of positive characteristic one must make an additional constructive hypothesis in order for the problem to be decidable. We show that Seidenberg's Condition P gives a necessary and sufficient condition on the field K for computing a complete square-free decomposition of polynomials with coefficients in any finite algebraic extension of K. This research was partially supported by C.N.R., M.U.R.S.T., the CEC contract ESPRIT B.R.A. n. 6846 (POSSO) and the EC Science Plan Project "Computational Methods in the Theory of Riemann Surfaces and Algebraic Curves".
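A sketch, in SymPy, of the classical characteristic-zero square-free decomposition (Yun's algorithm) that the paper takes as its starting point; the positive-characteristic case discussed in the abstract needs the additional machinery (multiple derivations, Condition P) and is not reproduced here.

from sympy import symbols, Poly

x = symbols('x')

def yun_squarefree(f):
    # Yun's algorithm in characteristic zero: returns [a1, a2, ...] with f = prod_i ai**i
    # (up to a constant factor).
    f = Poly(f, x)
    fp = f.diff(x)
    g = f.gcd(fp)
    b, c = f.quo(g), fp.quo(g)
    d = c - b.diff(x)
    parts = []
    while b.degree() > 0:
        a = b.gcd(d)
        b, c = b.quo(a), d.quo(a)
        d = c - b.diff(x)
        parts.append(a)
    return parts

# Example: (x - 1) * (x + 2)**2 * x**3 decomposes into [x - 1, x + 2, x].
print([p.as_expr() for p in yun_squarefree((x - 1) * (x + 2)**2 * x**3)])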

10.
Algorithms for solving uniform decision problems for algebraic structures crucially depend on the chosen finite presentations for the structures under consideration. Rewriting techniques have been used very successfully to solve uniform decision problems when the presentations considered involve finite, noetherian, and ()-confluent rewriting systems. Whenever the class of algebraic structures considered is closed under the operation of taking finitely generated substructures, the algorithms for solving the uniform decision problems can be applied to the substructures as well. However, since these algorithms depend on the form of the presentations, this involves the task of constructing a presentation of a certain form for a substructure, given a presentation of this form for the structure itself and a finite set of generating elements for the substructure. This problem, which has received a lot of attention in algebra, is investigated here from an algorithmic point of view. The structures considered are the following two classes of groups, which have been studied extensively before: the polycyclic groups and the context-free groups. Finitely generated context-free groups can be presented by finite, monadic, and -confluent string-rewriting systems. Due to their nice algorithmic properties, these systems provide a way to effectively solve many decision problems for context-free groups. Since finitely generated subgroups of context-free groups are again context-free, they can be presented in the same way. Here we describe a process that, from a finite, monadic, and -confluent string-rewriting system presenting a context-free group G and a finite subset U of G, determines a presentation of this form for the subgroup of G that is generated by U. For finitely presented polycyclic groups we obtain an analogous result when we use finite confluent PCP2-presentations to describe these groups. This work was performed while this author was visiting the Fachbereich Informatik, Universität Kaiserslautern, during his sabbatical in 1991/92.

11.
This paper discusses the possibility of representing scientific development by second-order networks in different modalities. In particular, a specific modality structured by subfield-to-subfield relations is presented. By constructing such co-subfield maps for successive periods of time, we were able to describe the changing subfield relations within the field of chemical engineering. In this way, dynamical processes in the development of a field as a whole can be revealed. Advantages and disadvantages as compared to co-citation and co-word mapping techniques are discussed, and the importance of developing combined techniques is stressed.

12.
This paper describes the development of a novel automatic FEM mesh generation algorithm based on the fuzzy knowledge processing technique. A number of local nodal patterns are stored in a nodal pattern database of the mesh generation system. These nodal patterns are determined a priori based on certain theories or the past experience of experts in FEM analysis. For example, such human experts can determine nodal patterns suitable for stress concentration analyses of cracks, corners, holes and so on. Each nodal pattern possesses a membership function and a procedure for node placement according to this function. In the case of the nodal patterns for stress concentration regions, the membership function utilized in the fuzzy knowledge processing has two meanings, i.e., the closeness of a nodal location to each stress concentration field as well as the nodal density. This is attributed to the fact that a denser nodal pattern is required near a stress concentration field. All a user has to do in a practical mesh generation process is to choose several local nodal patterns and to designate the maximum nodal density of each pattern. After these simple operations by the user, the system automatically places the chosen nodal patterns in the analysis domain and on its boundary, and connects them smoothly by the fuzzy knowledge processing technique. Then triangular or tetrahedral elements are generated by means of the advancing front method. The key issue of the present algorithm is the easy control of complex two- or three-dimensional nodal density distributions by means of the fuzzy knowledge processing technique. To demonstrate the fundamental performance of the present algorithm, a prototype system was constructed in an object-oriented language, Smalltalk-80, on a 32-bit microcomputer, the Macintosh II. Mesh generation for several two- and three-dimensional domains with cracks, holes and junctions is presented as an example.
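A toy sketch of the membership-function idea: each stored pattern blends a high local nodal density into the background density according to how close a point lies to its stress concentrator. The pattern locations, radii and densities below are invented for illustration and are not taken from the paper.

import numpy as np

# Hypothetical local nodal patterns: centre of the stress concentrator,
# peak nodal density, and decay radius of the membership function.
patterns = [
    {"center": np.array([0.0, 0.0]), "d_max": 40.0, "radius": 0.2},   # crack tip
    {"center": np.array([1.0, 0.5]), "d_max": 20.0, "radius": 0.3},   # hole
]
d_background = 5.0   # nodal density far from any concentrator

def membership(p, x):
    # Fuzzy membership: 1 at the concentrator, decaying towards 0 far away.
    r = np.linalg.norm(x - p["center"])
    return np.exp(-(r / p["radius"]) ** 2)

def nodal_density(x):
    # Blend each pattern's peak density into the background via its membership value.
    d = d_background
    for p in patterns:
        mu = membership(p, x)
        d = max(d, mu * p["d_max"] + (1.0 - mu) * d_background)
    return d

print(nodal_density(np.array([0.05, 0.0])))   # dense near the crack tip
print(nodal_density(np.array([2.0, 2.0])))    # background density far away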

13.
Detecting evolution-based anomalies has emerged as an active research topic in many domains, such as social and information networks, bioinformatics, and diverse security applications. However, the majority of research has focused on detecting anomalies using the evolutionary behavior of objects in a network. Real-world networks are omnipresent and heterogeneous in nature, and in these networks multiple types of objects co-evolve together with their attributes. To understand the anomalous co-evolution of multi-typed objects in a heterogeneous information network (HIN), we need an effective technique that can capture the abnormal co-evolution of multi-typed objects. For example, detecting co-evolution-based anomalies in the heterogeneous bibliographic information network (HBIN) can depict the object-oriented semantics better than scrutinizing the co-author or citation network alone. In this paper, we introduce the novel notion of a co-evolutionary anomaly in the HBIN, detect anomalies using co-evolution pattern mining (CPM), and study how multi-typed objects influence each other in their anomalous declaration by following a special type of HIN called a star network. The influence of three pre-defined attributes, namely paper-count, co-author, and venue, on target objects is measured to detect co-evolutionary anomalies in the HBIN. Anomaly scores are calculated for each of 510 target objects, and the individual influence of the attributes is measured for the two top target objects in case studies. It is observed that venue has the most influence on the target objects discussed as case studies; however, for the rest of the anomalies in the list, the most influential attribute can be rather different from venue. Indeed, the CABIN algorithm provides a way to find the most influential attributes in co-evolutionary anomaly detection. Experiments on a bibliographic dataset validate the effectiveness of the model and the dominance of the algorithm. The proposed technique can be applied to various HINs, such as Facebook, Twitter, and Delicious, to detect co-evolutionary anomalies.

14.
Literature retrieval based on citation context (cited 2 times: 0 self-citations, 2 by others)
While the citation context of a reference may provide detailed and direct information about the nature of a citation, few studies have specifically addressed the role of this information in retrieving relevant documents from the literature, primarily due to the lack of full-text databases. In this paper, we design a retrieval system based on full texts in the PubMed Central database. We constructed two modules in the retrieval system. One is a reference retrieval module based on citation contexts. The other is a citation context retrieval module for searching the citation contexts of a specific paper. The results of comparisons show that the reference retrieval module performed better than Google Scholar and the PubMed database in terms of finding proper references based on topic words extracted from citation contexts. It also performed very well in searching for highly cited papers and classic papers. The citation context retrieval module visualizes the topics of citation contexts as tag clouds and classifies citation contexts based on the cue words they contain.
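A minimal sketch of the reference-retrieval idea, representing each candidate reference by the citation contexts that cite it and ranking references against a topic-word query by cosine similarity; the toy contexts and names are illustrative only and do not reproduce the PubMed Central-based system.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Each candidate reference is represented by the concatenation of the
# citing sentences (citation contexts) that point to it.
contexts_per_reference = {
    "ref_A": "gene expression microarray normalization method for noisy data",
    "ref_B": "protein structure prediction with deep neural networks",
    "ref_C": "statistical normalization of microarray gene expression measurements",
}

refs = list(contexts_per_reference)
vec = TfidfVectorizer(stop_words="english")
ctx_matrix = vec.fit_transform(contexts_per_reference[r] for r in refs)

def retrieve(topic_words, top_k=2):
    # Rank references by cosine similarity between the query and their citation contexts.
    q = vec.transform([topic_words])
    scores = cosine_similarity(q, ctx_matrix).ravel()
    return sorted(zip(refs, scores), key=lambda t: t[1], reverse=True)[:top_k]

print(retrieve("microarray normalization"))   # ref_A and ref_C rank highest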

15.
In this paper the kinematics of a weak shock front governed by a hyperbolic system of conservation laws is studied. This is used to develop a method for solving problems involving the propagation of nonlinear unimodal waves. It consists of first solving the nonlinear wave problem by moving along the bicharacteristics of the system and then fitting the shock into this solution field so that it satisfies the necessary jump conditions. The kinematics of the shock leads in a natural way to the definition of shock-rays, which play the same role as the rays in a continuous flow. The special case of a circular cylinder introduced suddenly into a constant streaming flow is studied in detail. The shock fitted in the upstream region propagates with a velocity which is the mean of the velocities of the linear and the nonlinear wave fronts. Downstream, the solution is given by an expansion wave.
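The statement that the fitted shock moves with the mean of the linear and nonlinear front velocities is the familiar weak-shock approximation to the jump condition; for a scalar conservation law $u_t + f(u)_x = 0$ the Rankine–Hugoniot condition gives

$$s = \frac{f(u_R) - f(u_L)}{u_R - u_L} \approx \tfrac{1}{2}\bigl(f'(u_L) + f'(u_R)\bigr),$$

with an error that is second order in the shock strength $|u_R - u_L|$ (the system studied in the paper is treated analogously along its bicharacteristics).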

16.
The ever-evolving nature of research creates a cacophony of new topics incessantly, resulting in an unstable state in every field of research. Researchers are disseminating their work, producing a huge volume of articles. In fact, the spectacular growth in scholarly literature is overwhelmingly widening the choice sets for researchers. Consequently, they face difficulties in identifying a suitable topic of current importance from a plethora of research topics. This remains an ill-defined problem for researchers due to the overload of choices. The problem is even more severe for new researchers due to their lack of experience. Hence, there is a definite need for a system that would help researchers make decisions on appropriate topics. Recommender systems are good options for performing this very task. They have been proven useful for researchers in keeping pace with research dynamics and, at the same time, in overcoming the information overload problem by retrieving useful information from the large information space of scholarly literature. In this article, we present RTRS, a knowledge-based Research Topics Recommender System to assist both novice and experienced researchers in selecting research topics in their chosen field. The core of this system hinges upon bibliometric information from the literature. The system identifies active research topics in a particular area and recommends the top N topics to the target users. The results obtained have proven useful to academic researchers, particularly novices, in making an early decision on research topics.
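One plausible, purely illustrative way to score topic activity from bibliometric counts and return the top N topics: recent publication volume weighted by a simple growth factor. None of the records, weights or thresholds below come from the RTRS paper.

from collections import defaultdict

# Toy bibliometric records: (topic, publication year)
records = [
    ("graph neural networks", 2021), ("graph neural networks", 2022),
    ("graph neural networks", 2022), ("topic modeling", 2015),
    ("topic modeling", 2021), ("quantum error correction", 2022),
]

def recommend_topics(records, current_year=2022, window=3, top_n=2):
    # Score each topic by its recent publication count times a smoothed growth ratio.
    recent = defaultdict(int)
    older = defaultdict(int)
    for topic, year in records:
        if year > current_year - window:
            recent[topic] += 1
        else:
            older[topic] += 1
    scores = {}
    for topic in set(recent) | set(older):
        growth = (recent[topic] + 1) / (older[topic] + 1)
        scores[topic] = recent[topic] * growth
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend_topics(records))   # ['graph neural networks', 'quantum error correction']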

17.
The Operations Research EXperiment Framework for Java (OREX-J) is an object-oriented software framework that helps users design, implement and conduct computational experiments for the analysis of optimization algorithms. As it was designed in a generic way using object-oriented programming and design patterns, it is not limited to a specific class of optimization problems and algorithms. The purpose of the framework is to reduce the amount of manual labor required for conducting and evaluating computational experiments: OREX-J provides a generic, extensible data model for storing detailed data on an experimental design and its results. Those data can include algorithm parameters, test instance generator settings, the instances themselves, run-times, algorithm logs, solution properties, etc. All data are automatically saved in a relational database (MySQL, http://www.mysql.com/) by means of the object-relational mapping library Hibernate (http://www.hibernate.org/). This simplifies the task of analyzing computational results, as even complex analyses can be performed using comparatively simple Structured Query Language (SQL) queries. OREX-J also simplifies the comparison of algorithms developed by different researchers: instead of integrating other researchers' algorithms into proprietary test beds, researchers could use OREX-J as a common experiment framework. This paper describes the architecture and features of OREX-J and exemplifies its usage in a case study. OREX-J has already been used for experiments in three different areas: algorithms and reformulations for mixed-integer programming models for dynamic lot-sizing with substitutions, a simulation-based optimization approach for a stochastic multi-location inventory control model, and an optimization model for software supplier selection and product portfolio planning.

18.
In this paper we report the results of Quantum Monte Carlo (QMC) simulations of a tight-binding, correlated-electron model consisting of two 2D Hubbard sheets hybridized by an intersheet hopping t. As t increases, a ground state with long-range antiferromagnetic order gives way to a spin liquid phase.

19.
In this paper, we develop an analytical model of an order sortation system used in automated distribution centers. In such systems, groups of orders are delivered to a recirculating conveyor system where they are sorted into shipping lanes for final preparation and loaded onto waiting trucks. We develop a model of the sorting process, which incorporates the stochastic elements of these systems, to determine the relative merits of two common categories of sorting strategies found in industry: fixed priority schemes and the next-available rule. Fixed priority schemes include such popular rules as sorting the largest (or smallest) orders first. We show that in systems with little lane blocking, a rule that assigns the next available order to a shipping lane will outperform any fixed priority scheme in terms of sorting time and system throughput, while in systems with significant lane blocking, the sorting rule has little impact.

20.
IEEE Sensors Journal, 2009, 9(3): 297-305
In this paper, we describe the system architecture and prototype measurements of a MEMS gyroscope system with a resolution of 0.025 $^{\circ}$/s/$\sqrt{\mathrm{Hz}}$. The architecture makes extensive use of control loops, which are mostly in the digital domain. For the primary mode, both the amplitude and the resonance frequency are tracked and controlled. The secondary mode readout is based on unconstrained $\Sigma\Delta$ force-feedback, which does not require a compensation filter in the loop and thus allows more beneficial quantization noise shaping than prior designs of the same order. Due to the force-feedback, the gyroscope has ample dynamic range to correct the quadrature error in the digital domain. The largely digital setup also gives a lot of flexibility in characterization and testing, where system identification techniques have been used to characterize the sensors. In this way, a parasitic direct electrical coupling between the actuation and readout of the mass-spring systems was estimated and corrected in the digital domain. Special care is also given to the capacitive readout circuit, which operates in continuous time.

