This research illustrates the potential of text-mining concepts, techniques, and tools to improve the systematic review process. To this end, a review was performed on two online databases (Scopus and ISI Web of Science) covering 2012 to 2019. A total of 9649 studies were identified and analyzed using probabilistic topic modeling procedures within a machine learning approach. The Latent Dirichlet Allocation method chosen for modeling required the following stages: 1) data cleansing, and 2) modeling the data into topics for coherence and perplexity analysis. All research was conducted according to the standards of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses in a fully computerized way. The computational literature review is an integral part of a broader literature review process. The results presented met three criteria: (1) literature review for a research area, (2) analysis and classification of journals, and (3) analysis and classification of academic and individual research teams. The contribution of the article is to demonstrate how the publication network is formed in this particular field of research, and how the content of abstracts can be automatically analyzed to provide a set of research topics for quick understanding and application in future projects.
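The abstract above does not name a specific toolchain; purely as an illustration, the two stages it lists (data cleansing, then LDA modeling with coherence and perplexity diagnostics) might be sketched as follows, assuming the retrieved abstracts are available as plain strings and using the gensim library.

```python
# Illustrative sketch only: the paper does not name its toolchain.
# `abstracts` stands in for the retrieved records; library: gensim.
import re

from gensim import corpora, models
from gensim.models import CoherenceModel

abstracts = [
    "Topic modeling supports systematic literature reviews in software engineering.",
    "Latent Dirichlet Allocation groups abstracts into coherent research topics.",
    "Machine learning methods help automate the screening stage of a review.",
]

# 1) Data cleansing: lowercase, keep alphabetic tokens, drop very short words.
def clean(text):
    return [t for t in re.findall(r"[a-z]+", text.lower()) if len(t) > 3]

texts = [clean(a) for a in abstracts]

# 2) Topic modeling with LDA, then coherence and perplexity diagnostics.
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]

for k in (2, 3, 5):  # candidate numbers of topics
    lda = models.LdaModel(corpus, num_topics=k, id2word=dictionary,
                          passes=10, random_state=0)
    coherence = CoherenceModel(model=lda, texts=texts, dictionary=dictionary,
                               coherence="c_v").get_coherence()
    print(k, round(coherence, 3), round(lda.log_perplexity(corpus), 3))
```

In practice the number of topics would be swept over a wider range and the coherence and perplexity curves compared before fixing the final model.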
The computer industry has evolved very rapidly from single-user computers to computer networks in which users can share both local and remote files. Networks of microcomputers facilitate the integration of all information processing for distributed applications such as database processing and electronic mail. One management application with promising potential for computer networks is distributed simulation. Simulation analysis can support essentially all on-the-job problem-solving and decision-making.
To implement a particular distributed application, computer communication between processors must be considered. Unlike expensive multiprocessor computers, networks of less-expensive microcomputers do not have pre-established communication paths between processors. This paper addresses how this obstacle may be overcome by using communication protocols based on the Open Systems Interconnection (OSI) reference model. Protocol services needed to support a distributed simulation environment will be identified, and their implementation through a prototype will then be investigated and evaluated.
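The prototype and its protocol stack are not detailed in the abstract; as a rough modern analogue only, a minimal transport-level exchange (roughly an OSI layer 4 service) between two simulation processes, with a hypothetical local endpoint and JSON-encoded events, could look like this.

```python
# Illustrative sketch only: the paper's prototype is not described in the abstract.
# A minimal transport-level message exchange (roughly an OSI layer 4 service)
# between two simulation processes; host, port, and message format are hypothetical.
import json
import socket
import threading

HOST, PORT = "127.0.0.1", 5001  # hypothetical local endpoint

srv = socket.create_server((HOST, PORT))  # bound and listening

def handle_one_event():
    # Remote "processor": receive one simulation event and acknowledge it.
    conn, _ = srv.accept()
    with conn:
        event = json.loads(conn.recv(4096).decode())
        conn.sendall(json.dumps({"ack": event["id"]}).encode())

threading.Thread(target=handle_one_event, daemon=True).start()

# Local "processor": send a time-stamped simulation event and wait for the ack.
with socket.create_connection((HOST, PORT)) as cli:
    cli.sendall(json.dumps({"id": 1, "time": 10.5, "event": "arrival"}).encode())
    print(json.loads(cli.recv(4096).decode()))

srv.close()
```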
Aiming to fill a gap in traditional methods of source-code documentation, which focus mainly on API (application programming interface) documentation for other programmers, this article presents a new approach for documenting business requirements by mapping them through a set of annotations. These annotations are interpreted by the GaiaDoc tool, which is specified in this paper and generates documentation in the form of use-case specifications, in a language and format easily understandable by project stakeholders. Along with the GaiaDoc tool proposal, a RUP (rational unified process) based requirements flow is developed to fit the tool's needs and is validated against the CMMI (capability maturity model integration) requirements process areas; a case study applying the proposed methodology is presented before the final considerations.
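GaiaDoc's actual annotation syntax is not reproduced here; the hypothetical sketch below (the decorator name, fields, and rendering are invented for illustration) conveys only the general idea of tagging code with business-requirement metadata and generating stakeholder-readable use-case text from it.

```python
# Hypothetical illustration only: GaiaDoc's real annotations are not shown in
# the abstract. The sketch tags code with business-requirement metadata and
# renders a use-case summary aimed at stakeholders rather than programmers.
USE_CASES = []

def use_case(name, actor, goal):
    """Decorator that records a use-case specification for a function."""
    def wrap(fn):
        USE_CASES.append({"name": name, "actor": actor,
                          "goal": goal, "implemented_by": fn.__name__})
        return fn
    return wrap

@use_case(name="Register order", actor="Sales clerk",
          goal="Persist a customer order for later fulfilment")
def register_order(order):
    ...  # business logic would go here

def render_use_case_docs():
    # Emit stakeholder-readable text rather than API documentation.
    for uc in USE_CASES:
        print(f"Use case: {uc['name']}\n  Actor: {uc['actor']}\n"
              f"  Goal: {uc['goal']}\n  Code: {uc['implemented_by']}()\n")

render_use_case_docs()
```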
Self-adaptive software is capable of evaluating and changing its own behavior, whenever the evaluation shows that the software is not accomplishing what it was intended to do, or when better functionality or performance may be possible. The topic of system adaptivity has been widely studied since the mid-60s and, over the past decade, several application areas and technologies relating to self-adaptivity have assumed greater importance. In all these initiatives, software has become the common element that introduces self-adaptability. Thus, the investigation of systematic software engineering approaches is necessary, in order to develop self-adaptive systems that may ideally be applied across multiple domains. The main goal of this study is to review recent progress on self-adaptivity from the standpoint of computer sciences and cybernetics, based on the analysis of state-of-the-art approaches reported in the literature. This review provides an over-arching, integrated view of computer science and software engineering foundations. Moreover, various methods and techniques currently applied in the design of self-adaptive systems are analyzed, as well as some European research initiatives and projects. Finally, the main bottlenecks for the effective application of self-adaptive technology, as well as a set of key research issues on this topic, are precisely identified, in order to overcome current constraints on the effective application of self-adaptivity in its emerging areas of application.
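As a generic illustration of the self-adaptivity concept described above, and not drawn from the survey itself, a minimal monitor-analyse-adapt loop with a hypothetical latency goal might look like this.

```python
# Generic illustration, not taken from the survey: a minimal
# monitor-analyse-adapt loop in which software evaluates its own behaviour
# and adjusts a parameter when a (hypothetical) performance goal is violated.
import random

GOAL_LATENCY_MS = 100.0   # hypothetical target
workers = 1               # the knob the system is allowed to adapt

def observe_latency(workers):
    # Stand-in for real monitoring: latency falls as workers increase.
    return random.uniform(150, 250) / workers

for step in range(5):
    latency = observe_latency(workers)        # monitor
    if latency > GOAL_LATENCY_MS:             # analyse against the goal
        workers += 1                          # plan/execute: self-adapt
    print(f"step={step} latency={latency:.1f} ms workers={workers}")
```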
Tool wear detection is a key issue for tool condition monitoring, and maximizing useful tool life is frequently tied to the optimization of machining processes. This paper presents two model-based approaches for tool wear monitoring based on neuro-fuzzy techniques. The neuro-fuzzy hybridization used to design the tool wear monitoring system aims to exploit the synergy of neural networks and fuzzy logic, combining human reasoning with learning in a connectionist structure. The turning process, a well-known machining process, is selected for this case study. A four-input (time, cutting forces, vibrations, and acoustic emission signals), single-output (tool wear rate) model is designed and implemented on the basis of three neuro-fuzzy approaches (inductive, transductive, and evolving neuro-fuzzy systems). The tool wear model is then used for monitoring the turning process. The comparative study demonstrates that the transductive neuro-fuzzy model provides better error-based performance indices for detecting tool wear than either the inductive or the evolving neuro-fuzzy model.
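The paper's neuro-fuzzy models are not specified in the abstract; the toy sketch below (synthetic data, two fixed Gaussian rules, least-squares consequents) only illustrates the first-order Takagi-Sugeno structure underlying such four-input, single-output wear models.

```python
# Toy illustration only, not the paper's models: a first-order Takagi-Sugeno
# fuzzy model with two fixed Gaussian rules and least-squares consequents,
# mapping (time, force, vibration, acoustic emission) to a wear-rate estimate.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 4))                 # synthetic, normalised signals
y = X @ np.array([0.3, 0.5, 0.1, 0.1]) + 0.05 * rng.standard_normal(200)

centres = np.array([[0.25] * 4, [0.75] * 4])         # two rules: "low" and "high"
sigma = 0.4

def firing(X):
    # Product of Gaussian memberships over the four inputs, per rule, normalised.
    d = X[:, None, :] - centres[None, :, :]
    w = np.exp(-0.5 * (d / sigma) ** 2).prod(axis=2)
    return w / w.sum(axis=1, keepdims=True)

W = firing(X)                                        # (n_samples, n_rules)
ones = np.ones((len(X), 1))
# Weighted linear consequents: y ~ sum_r w_r (a_r . x + b_r), fitted in one shot.
design = np.hstack([W[:, [r]] * np.hstack([X, ones]) for r in range(2)])
theta, *_ = np.linalg.lstsq(design, y, rcond=None)

print("RMSE:", np.sqrt(np.mean((y - design @ theta) ** 2)))
```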
This short communication analyzes the results recently presented in the paper “On a novel dead time compensator for stable processes with long dead times”, published in the Journal of Process Control. That paper argues that the proposed strategy, called the modified Smith predictor (MSP), gives better performance than the filtered Smith predictor (FSP) dead-time compensator for stable processes with dead time. In fact, MSP has the same structure as FSP; only specific tuning rules for the filters are proposed. Therefore, this work discusses some aspects of the comparative analysis and tuning rules presented in the referred paper, showing that MSP is a particular case of FSP and that, in some particular cases, its tuning rule does not yield a good closed-loop response. Simulation case studies are used to illustrate these aspects.
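Neither the MSP nor the FSP tuning rules are given in the abstract; as background only, a classical Smith predictor for a hypothetical first-order process with dead time can be simulated with the python-control package and a Padé delay approximation, for example as follows.

```python
# Background illustration only (not the MSP or FSP tuning from the papers):
# a classical Smith predictor for a hypothetical first-order process with
# dead time, simulated with python-control and a Pade delay approximation.
import control as ct
import numpy as np

K, tau, L = 1.0, 10.0, 5.0                # hypothetical FOPDT process parameters
G = ct.tf([K], [tau, 1])                  # delay-free part of the process
delay = ct.tf(*ct.pade(L, 5))             # 5th-order Pade approximation of e^{-Ls}
P = G * delay                             # process with dead time

C = ct.tf([tau, 1], [2.0, 0])             # PI controller tuned on the delay-free model
# Smith predictor: the controller effectively "sees" G rather than G*e^{-Ls}.
C_eq = ct.feedback(C, G * (1 - delay))    # equivalent unity-feedback controller
T_cl = ct.feedback(C_eq * P, 1)           # closed-loop setpoint response

t = np.linspace(0, 80, 801)
t, y = ct.step_response(T_cl, t)
print("final value ~", round(float(y[-1]), 3))
```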
We present in this paper an analysis of a semi-Lagrangian second-order Backward Difference Formula combined with the hp-finite element method to calculate the numerical solution of convection-diffusion equations in $\mathbb{R}^2$. Using mesh-dependent norms, we prove that the a priori error estimate has two components: one corresponds to the approximation of the exact solution along the characteristic curves, which is $O(\Delta t^{2}+h^{m+1}(1+\frac{|\log h|}{\Delta t}))$; the second, which is $O(\Delta t^{p}+\|\vec{u}-\vec{u}_{h}\|_{L^{\infty}})$, represents the error committed in the calculation of the characteristic curves. Here, $m$ is the degree of the polynomials in the finite element space, $\vec{u}$ is the velocity vector, $\vec{u}_{h}$ is the finite element approximation of $\vec{u}$, and $p$ denotes the order of the method employed to calculate the characteristic curves. Numerical examples support the validity of our estimates.
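Assembled from the two components quoted above, a schematic form of the overall a priori bound, with generic constants and the paper's mesh-dependent norm, would read:

```latex
% Schematic assembly of the two quoted error components; constants C_1, C_2
% are generic and the norm is the mesh-dependent norm used in the paper.
\[
  \|u - u_h\| \;\le\;
  C_1\left(\Delta t^{2} + h^{m+1}\Bigl(1 + \frac{|\log h|}{\Delta t}\Bigr)\right)
  + C_2\left(\Delta t^{p} + \|\vec{u} - \vec{u}_h\|_{L^{\infty}}\right),
\]
% where the first term bounds the approximation along the characteristic
% curves and the second the error in computing the characteristics themselves.
```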
The paper describes a parallel implementation of a neural algorithm performing vector quantization for very low bit-rate video compression on toroidal-mesh multiprocessor systems. The neural model considered is a plastic version of the Neural Gas algorithm, whose features are suitable for implementations on toroidal mesh topologies. The architecture adopted, and the data-allocation strategy, enhance the method's scaling properties and remarkable efficiency. The parallel approach is supported by a theoretical analysis of the efficiency of the overall structure. Experimental results on a significant testbed and the fit between predicted and measured values confirm the validity of the parallel approach.
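The parallel toroidal-mesh implementation itself is not described in the abstract; the serial sketch below shows only the core Neural Gas update used for codebook design in vector quantization, on synthetic stand-in data.

```python
# Illustrative serial sketch only, not the paper's parallel toroidal-mesh
# implementation: the core Neural Gas update used to train a codebook for
# vector quantization, on synthetic stand-in data.
import numpy as np

rng = np.random.default_rng(0)
blocks = rng.uniform(0, 1, size=(2000, 16))     # stand-in for 4x4 image blocks
codebook = rng.uniform(0, 1, size=(32, 16))     # 32 code vectors

n_steps = 5000
eps_i, eps_f = 0.5, 0.01                        # learning-rate schedule
lam_i, lam_f = 10.0, 0.1                        # neighbourhood-range schedule

for t in range(n_steps):
    frac = t / n_steps
    eps = eps_i * (eps_f / eps_i) ** frac
    lam = lam_i * (lam_f / lam_i) ** frac
    x = blocks[rng.integers(len(blocks))]
    # Rank code vectors by distance to x; closer ranks receive larger updates.
    d = np.linalg.norm(codebook - x, axis=1)
    ranks = np.argsort(np.argsort(d))
    codebook += eps * np.exp(-ranks / lam)[:, None] * (x - codebook)

# Quantisation distortion after training.
d_all = np.linalg.norm(blocks[:, None, :] - codebook[None, :, :], axis=2)
print("mean distortion:", d_all.min(axis=1).mean())
```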
The purpose of this paper is to show how the results of an optimisation model can be integrated with the decisions made within a simulation model to schedule back-end operations in a semiconductor assembly and test facility. The problem is defined by a set of resources that includes machines and tooling, process plans for each product, and the following four hierarchical objectives: minimise the weighted sum of key device shortages, maximise weighted throughput, minimise the number of machines used, and minimise the makespan for a given set of lots in queue. A mixed-integer programming model is proposed and first solved with a greedy randomised adaptive search procedure (GRASP). The results associated with the prescribed facility configuration are then fed to the simulation model written in AutoSched AP. However, due to the inadequacy of the options built into AutoSched, three new rules were created: the first two are designed to capture the machine set-up profiles provided by the GRASP, and the third to prioritise the processing of hot lots containing key devices. The computational analysis showed that incorporating the set-up from the GRASP into the dynamic operations of the simulation greatly improved its performance with respect to the four objectives.
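The paper's MIP and GRASP formulations are not given in the abstract; the toy sketch below (random processing times and a hypothetical machine count) illustrates the generic GRASP pattern of greedy randomised construction followed by local search for a lot-to-machine assignment.

```python
# Toy illustration only, not the paper's MIP/GRASP for assembly and test:
# a generic GRASP (greedy randomised construction + local search) that
# assigns lots to machines so as to reduce the makespan.
import random

random.seed(0)
lots = [random.randint(1, 9) for _ in range(20)]   # hypothetical processing times
n_machines = 4

def makespan(assign):
    loads = [0] * n_machines
    for lot, m in zip(lots, assign):
        loads[m] += lot
    return max(loads)

def construct(alpha=0.5):
    # Greedy randomised construction: place each lot (longest first) on a
    # machine drawn from a restricted candidate list of least-loaded machines.
    loads = [0] * n_machines
    assign = [None] * len(lots)
    for i in sorted(range(len(lots)), key=lambda i: -lots[i]):
        ranked = sorted(range(n_machines), key=loads.__getitem__)
        rcl = ranked[: max(1, int(alpha * n_machines))]
        m = random.choice(rcl)
        loads[m] += lots[i]
        assign[i] = m
    return assign

def local_search(assign):
    # Keep moving single lots between machines while the makespan improves.
    improved = True
    while improved:
        improved = False
        best = makespan(assign)
        for i in range(len(lots)):
            for m in range(n_machines):
                if m != assign[i]:
                    old, assign[i] = assign[i], m
                    if makespan(assign) < best:
                        best, improved = makespan(assign), True
                    else:
                        assign[i] = old
    return assign

best = min((local_search(construct()) for _ in range(30)), key=makespan)
print("GRASP makespan:", makespan(best))
```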