Similar Literature
20 similar documents found
1.

In the literature, it has been reported that preconditioned stationary iterative methods using certain types of upper triangular matrices as preconditioners converge faster than the corresponding basic iterative methods. In this paper, a new preconditioned iterative method for the numerical solution of linear systems is introduced, and a convergence analysis is given for both the proposed method and an existing one. Numerical examples are also provided that show the effectiveness of both methods.
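As a rough illustration of the idea (not the paper's specific method), the sketch below compares plain Jacobi iteration with Jacobi applied to a system scaled by a hypothetical upper-triangular-type preconditioner P = I + S, where S is built from the strictly upper triangular part of A; the paper's actual preconditioner and convergence conditions may differ.

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=10_000):
    """Basic Jacobi iteration for Ax = b; returns the solution and the iteration count."""
    D = np.diag(A)
    R = A - np.diagflat(D)
    x = np.zeros_like(b)
    for k in range(1, max_iter + 1):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new, k
        x = x_new
    return x, max_iter

# Example M-matrix system.
A = np.array([[ 4.0, -1.0, -1.0],
              [-1.0,  4.0, -1.0],
              [-1.0, -1.0,  4.0]])
b = np.array([1.0, 2.0, 3.0])

# Hypothetical upper-triangular-type preconditioner P = I + S, where S uses the
# (diagonally scaled) strictly upper part of -A; the paper's exact choice may differ.
S = -np.triu(A, k=1) / np.diag(A)[:, None]
P = np.eye(len(b)) + S

x_plain, it_plain = jacobi(A, b)
x_prec,  it_prec  = jacobi(P @ A, P @ b)
print(it_plain, it_prec)              # preconditioning typically reduces the iteration count
print(np.allclose(x_plain, x_prec))   # both iterations converge to the same solution
```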

2.
ABSTRACT

This paper addresses the problem of 2D sound source localization using multiple microphone arrays in an outdoor environment. Two main issues arise in such localization. First, since localization performance depends on a variety of parameters, there is little guidance on how to design the system. We therefore perform a thorough analysis of localization accuracy under different simulation conditions, and the resulting characteristics lead to a discussion of the limitations and applicability of the system. Second, multiple simultaneous sound sources must be distinguished, which is directly related to the appearance of outliers in the localization process. To address this, an outlier removal method is proposed that takes the properties of the observed sounds into consideration. A VR-based visualization method for the obtained results is also introduced. As the application scenario, we selected bird song analysis, which provides a challenging environment in terms of constantly changing signal-to-noise ratio and relative sensor-to-target position. A prototype system has been built using the proposed method. Several simulation results are presented, followed by a discussion of the issues, leading to system design guidelines that ensure predictable performance.
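A minimal sketch of the underlying geometry, under assumed details not given in the abstract: each array contributes a direction-of-arrival (DOA) bearing, the source is located by least-squares intersection of the bearing lines, and a simple leave-one-out residual test stands in for the paper's property-based outlier removal.

```python
import numpy as np

def locate(arrays, bearings):
    """Least-squares 2D source position from array positions and DOA bearings (radians).
    Each bearing t defines a line through its array: sin(t)*(x - px) - cos(t)*(y - py) = 0."""
    A = np.array([[np.sin(t), -np.cos(t)] for t in bearings])
    b = np.array([np.sin(t) * px - np.cos(t) * py
                  for (px, py), t in zip(arrays, bearings)])
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol

def residual(arrays, bearings, pos):
    """Sum of point-to-line distances of the estimate to all bearing lines."""
    return sum(abs(np.sin(t) * (pos[0] - px) - np.cos(t) * (pos[1] - py))
               for (px, py), t in zip(arrays, bearings))

def robust_locate(arrays, bearings):
    """Leave-one-out outlier rejection: drop the single bearing whose removal gives the
    most self-consistent fit (a simple stand-in for the paper's property-based method)."""
    best, best_res = None, np.inf
    for drop in range(len(bearings)):
        keep = [i for i in range(len(bearings)) if i != drop]
        est = locate([arrays[i] for i in keep], [bearings[i] for i in keep])
        res = residual([arrays[i] for i in keep], [bearings[i] for i in keep], est)
        if res < best_res:
            best, best_res = est, res
    return best

# Four hypothetical arrays observing a source at (5, 3); one bearing is corrupted,
# e.g. by a second simultaneous source.
arrays = [(0.0, 0.0), (10.0, 0.0), (0.0, 8.0), (10.0, 8.0)]
src = np.array([5.0, 3.0])
bearings = [np.arctan2(src[1] - y, src[0] - x) for x, y in arrays]
bearings[2] += 0.6
print(robust_locate(arrays, bearings))   # close to (5, 3)
```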

3.
Context: Identifying suitable components during the software design phase is an important way to obtain more maintainable software. Many methods, including graph partitioning, clustering-based, CRUD-based, and FCA-based methods, have been proposed to identify components at an early stage of software design. However, most of these methods use classical clustering techniques, which rely on expert judgment. Objective: In this paper, we propose a novel method for component identification, called SBLCI (Search-Based Logical Component Identification), which is based on a genetic algorithm (GA) and follows an iterative scheme to obtain logical components. Method: SBLCI identifies the logical components of a system from its analysis models using a customized GA that takes cohesion and coupling metrics as its fitness function and has four novel guided GA operators based on the cohesive-component concept. In addition, SBLCI follows an iterative scheme: it first identifies high-level components, and in subsequent iterations it identifies low-level sub-components of each component identified in the previous iterations. Results: We evaluated the effectiveness of SBLCI on three real-world cases. The results reveal that SBLCI is a better alternative for identifying logical components and sub-components than existing component identification methods.
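To make the search formulation concrete, here is a hedged sketch of a cohesion-versus-coupling fitness over a class-dependency matrix, optimized by plain random search; the names `fitness` and `W` are illustrative, and SBLCI's actual fitness formula and its four guided GA operators are not reproduced here.

```python
import numpy as np

def fitness(partition, W):
    """Density-based cohesion minus coupling for a component assignment.
    partition: array of component labels, one per class.
    W: symmetric matrix of dependency weights between classes.
    (A simplified stand-in for SBLCI's actual cohesion/coupling fitness.)"""
    same = partition[:, None] == partition[None, :]
    np.fill_diagonal(same, False)
    n_intra = same.sum()
    n_inter = (~same).sum() - len(partition)          # exclude the diagonal
    cohesion = W[same].sum() / n_intra if n_intra else 0.0
    coupling = W[~same].sum() / n_inter if n_inter else 0.0
    return cohesion - coupling

# Toy dependency matrix for 6 classes forming two natural clusters {0,1,2} and {3,4,5}.
W = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.2                               # weak cross-cluster dependency

rng = np.random.default_rng(0)
best, best_f = None, -np.inf
for _ in range(500):                                  # random search in place of the customized GA
    cand = rng.integers(0, 2, size=6)                 # assign each class to one of two components
    f = fitness(cand, W)
    if f > best_f:
        best, best_f = cand, f
print(best, round(best_f, 3))                         # recovers the two clusters (up to label swap)
```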

4.
5.
Context: A common architecture for distributed intelligent systems is the multi-agent system (MAS). Building systems with this architecture has recently been supported by agent-oriented software engineering (AOSE) methodologies. But two questions remain: how do we determine the suitability of a MAS implementation for a particular problem, and can this be determined without AOSE expertise? Objective: Given the relatively small number of software engineers who are AOSE experts, many problems that could be better solved with a MAS are solved using more commonly known but not necessarily suitable development approaches (e.g., object-oriented). The paper aims to empower software engineers who are not necessarily AOSE experts to decide whether or not they should advocate the use of MAS technology for a given project. Method: The paper constructs a systematic framework that identifies key criteria in a problem requirement definition to assess the suitability of a MAS solution. The criteria are identified using an iterative process: features are initially identified from MAS implementations and then validated against related work. This is followed by a statistical analysis of 25 problems characterising previously developed agent-oriented solutions, in order to group the features into key criteria. Results: Factor analysis showed the key criteria to be sufficiently prominent to construct a framework providing a process that identifies these criteria within the requirements. The framework is then evaluated, by non-AOSE experts, for assessing the suitability of a MAS architecture on two real-world problems: an electricity market simulation and a financial accounting system. Conclusion: Our framework provides an objective mechanism that replaces a software engineer's personal inclination to use (or not to use) a MAS. It can supplant current practice, in which the decision to use a MAS architecture for a given problem remains an informal process. It was successfully illustrated on two real-world problems to assess the suitability of a MAS implementation, and it will potentially facilitate the uptake of MAS technology.

6.
Context: Given the increased interest in using visualization techniques (VTs) to help communicate and understand the software architecture (SA) of large-scale complex systems, several VTs and tools have been reported for representing architectural elements (such as architecture design, architectural patterns, and architectural design decisions). However, there has been no attempt to systematically review and classify the VTs and associated tools reported for SA, or how they have been assessed and applied. Objective: This work aimed at systematically reviewing the literature on software architecture visualization to develop a classification of VTs in SA, analyze the level of reported evidence and the use of different VTs for representing SA in different application domains, and identify gaps for future research in the area. Method: We used the systematic literature review (SLR) method of evidence-based software engineering (EBSE), with both manual and automatic search strategies, to review papers on VTs for SA published between 1 February 1999 and 1 July 2011. Results: From the 23,056 initially retrieved articles, we selected 53 papers for data extraction, analysis, and synthesis based on pre-defined inclusion and exclusion criteria. The analysis enabled us to classify the identified VTs into four types based on usage popularity: graph-based, notation-based, matrix-based, and metaphor-based VTs. VTs in SA are mostly used for architecture recovery and architectural evolution activities. We also identified ten purposes of using VTs in SA. The results further revealed that VTs in SA have been applied to a wide range of application domains, among which "graphics software" and "distributed systems" have received the most attention. Conclusion: SA visualization has gained significant importance in understanding and evolving software-intensive systems, yet only a few VTs have been employed in industrial practice. This review enabled us to identify the following areas for further research and improvement: (i) more research is needed on applying visualization techniques in architectural analysis, architectural synthesis, architectural implementation, and architecture reuse activities; (ii) more attention should be paid to objective evaluation methods (e.g., controlled experiments) to provide more convincing evidence for the promised benefits of using VTs in SA; (iii) industrial surveys should be conducted to investigate how software architecture practitioners actually employ VTs in the architecting process and which issues hinder or prevent them from adopting VTs in SA.

7.
Background: The detection and monitoring of respiratory-related illness is an important aspect of pulmonary medicine. Acoustic signals extracted from the human body can be used to detect respiratory pathology accurately. Objectives: The aim of this study is to develop a prototype telemedicine tool for detecting respiratory pathology using computerized respiratory sound analysis. Methods: 120 subjects were included in this study: 40 normal, 40 with continuous lung sounds (20 wheeze and 20 rhonchi), and 40 with discontinuous lung sounds (20 fine crackles and 20 coarse crackles). The respiratory sounds were segmented into respiratory cycles using a fuzzy inference system, and the S-transform was applied to these cycles. Statistical features were extracted from the S-transform matrix; the extracted features were statistically significant (p < 0.05). To classify the respiratory pathology, KNN, SVM, and ELM classifiers were trained on the statistical features obtained from the data. Results: Validation showed that the training classification rate of the ELM classifier with an RBF kernel was higher than that of the SVM and KNN classifiers, and the training time of the ELM classifier was also shorter. The overall mean classification rate of the ELM classifier was 98.52%. Conclusion: The telemedicine software tool was developed using the ELM classifier. The tool performed extraordinarily well in detecting respiratory pathology and has been well validated.
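The pipeline is straightforward to prototype. The sketch below uses a framed-FFT magnitude matrix as a stand-in for the S-transform, extracts the same kind of statistical features (mean, standard deviation, skewness, kurtosis), and classifies synthetic "wheeze-like" versus broadband signals with a small hand-written KNN; the actual study uses real recordings, the S-transform, and an ELM classifier, none of which are reproduced here.

```python
import numpy as np

def tf_features(signal, frame=256, hop=128):
    """Statistical features (mean, std, skewness, kurtosis) of a framed-FFT
    magnitude matrix -- a stand-in for the S-transform used in the paper."""
    frames = np.stack([signal[i:i + frame] * np.hanning(frame)
                       for i in range(0, len(signal) - frame, hop)])
    mag = np.abs(np.fft.rfft(frames, axis=1)).ravel()
    m, s = mag.mean(), mag.std()
    z = (mag - m) / s
    return np.array([m, s, (z ** 3).mean(), (z ** 4).mean()])

def knn_predict(x, X_train, y_train, k=3):
    """Plain k-nearest-neighbour vote in feature space."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    return np.bincount(nearest).argmax()

# Synthetic stand-ins: tonal "wheeze-like" signals vs broadband "normal" sounds.
rng = np.random.default_rng(0)
fs, dur = 4000, 2.0
t = np.arange(int(fs * dur)) / fs
def wheeze(): return np.sin(2 * np.pi * rng.uniform(200, 400) * t) + 0.3 * rng.standard_normal(t.size)
def normal(): return rng.standard_normal(t.size)

X = np.array([tf_features(wheeze()) for _ in range(20)] +
             [tf_features(normal()) for _ in range(20)])
y = np.array([1] * 20 + [0] * 20)
print(knn_predict(tf_features(wheeze()), X, y))   # expected: 1 (wheeze-like)
```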

8.
This paper presents a combination of Reference Attributed Grammars (RAGs) and Circular Attribute Grammars (CAGs). While RAGs allow the direct and easy specification of non-locally dependent information, CAGs allow iterative fixed-point computations to be expressed directly using recursive (circular) equations. We demonstrate how the combined formalism, Circular Reference Attributed Grammars (CRAGs), can take advantage of both strengths, making it possible to express solutions to many problems in a simple way. As an example, we specify and compute the nullable, first, and follow sets used in parser construction, a problem that is highly recursive and normally programmed by hand as an iterative algorithm. We also present a general demand-driven evaluation algorithm for CRAGs together with some optimizations of it. The approach has been implemented, and experimental results include computations on a series of grammars, including that of Java 1.2. We also revisit some of the classical examples of CAGs and show how their solutions are facilitated by CRAGs.
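The nullable/FIRST/FOLLOW computation mentioned above is a textbook circular (fixed-point) problem. The sketch below shows the conventional hand-written iteration that a CRAG specification would replace with declarative circular equations; the grammar is a hypothetical expression grammar, not one taken from the paper.

```python
# Grammar: dict mapping nonterminal -> list of productions (each a tuple of symbols).
# Symbols not appearing as keys are terminals; the empty tuple is epsilon.
GRAMMAR = {
    "E":  [("T", "E'")],
    "E'": [("+", "T", "E'"), ()],
    "T":  [("F", "T'")],
    "T'": [("*", "F", "T'"), ()],
    "F":  [("(", "E", ")"), ("id",)],
}

def fixed_point(grammar):
    """Compute nullable/FIRST/FOLLOW by iterating their (circular) defining equations
    until nothing changes -- the fixed-point semantics that CRAGs express declaratively."""
    nonterms = set(grammar)
    nullable = {a: False for a in nonterms}
    first = {a: set() for a in nonterms}
    follow = {a: set() for a in nonterms}
    changed = True
    while changed:
        changed = False
        for a, prods in grammar.items():
            for prod in prods:
                # nullable(A) holds if every symbol of some production is nullable.
                if all(s in nonterms and nullable[s] for s in prod) and not nullable[a]:
                    nullable[a] = changed = True
                # FIRST(A) accumulates FIRST of the leading nullable prefix.
                for s in prod:
                    new = first[s] if s in nonterms else {s}
                    if not new <= first[a]:
                        first[a] |= new
                        changed = True
                    if not (s in nonterms and nullable[s]):
                        break
                # FOLLOW: for A -> alpha B beta, FIRST(beta) is added to FOLLOW(B);
                # if beta is nullable, FOLLOW(A) is added to FOLLOW(B).
                for i, s in enumerate(prod):
                    if s not in nonterms:
                        continue
                    trailer, all_nullable = set(), True
                    for t in prod[i + 1:]:
                        trailer |= first[t] if t in nonterms else {t}
                        if not (t in nonterms and nullable[t]):
                            all_nullable = False
                            break
                    if all_nullable:
                        trailer |= follow[a]
                    if not trailer <= follow[s]:
                        follow[s] |= trailer
                        changed = True
    return nullable, first, follow

print(fixed_point(GRAMMAR))
```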

9.
It is well known that Muller's method for computing the zeros of continuous functions has order ≈ 1.84 [10] and is not globally convergent. Muller's method is based on the interpolating polynomial built on the last three points of the iterative sequence. In this paper the authors take as nodes of the interpolating polynomial the last two points of the sequence and the midpoint between them. The resulting method has order p = 2 for regular functions, and it leads to a globally convergent algorithm because it uses dichotomic techniques. Many numerical examples are given to show how the proposed code improves on Muller's method.
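A hedged sketch of the idea, assuming a bracketing (dichotomic) safeguard around the quadratic interpolation step; the paper's precise update and node selection may differ.

```python
import numpy as np

def midpoint_muller(f, a, b, tol=1e-12, max_iter=100):
    """Root finding by quadratic interpolation through the current bracket endpoints
    and their midpoint, with a bisection fallback that always keeps a sign-changing
    bracket (a simplified reading of the paper's dichotomic safeguard)."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "need a sign-changing bracket"
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        fm = f(m)
        # Quadratic through (a, fa), (m, fm), (b, fb); take a real root inside (a, b).
        coeffs = np.polyfit([a, m, b], [fa, fm, fb], 2)
        roots = np.roots(coeffs)
        roots = roots[np.isreal(roots)].real
        inside = [r for r in roots if a < r < b]
        x = inside[0] if inside else m            # fall back to plain bisection
        fx = f(x)
        if abs(fx) < tol or b - a < tol:
            return x
        # Keep the sub-interval on which the sign change persists (dichotomy).
        if fa * fx < 0:
            b, fb = x, fx
        else:
            a, fa = x, fx
    return 0.5 * (a + b)

print(midpoint_muller(np.cos, 0.0, 3.0))   # approximately pi / 2
```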

10.
Automatic music composition and sound synthesis is a field of study that is attracting continuously increasing attention. The introduction of evolutionary computation has further boosted research into ways of incorporating human supervision and guidance in the automatic evolution of melodies and sounds. This kind of human–machine interaction belongs to a larger methodological context called interactive evolution (IE). For the automatic creation of art, and especially for music synthesis, user fatigue requires the evolutionary process to produce interesting content that evolves quickly. This paper addresses this issue by presenting an IE system that evolves melodies using genetic programming (GP). A modification of the GP operators is proposed that allows the user to control the randomness of the evolutionary process. Results obtained from subjective tests indicate that the proposed genetic operators drive the evolution toward sounds that users prefer.

11.
International Journal of Computer Mathematics, 2012, 89(11): 1201-1209

In [5] a new iterative method is given for the linear system of equations Au = b, where A is large, sparse, and nonsymmetric, and A^T + A is symmetric positive definite (SPD), i.e., A is positive real. The new iterative method is based on a mixed-type splitting of the matrix A and is called the mixed-type splitting iterative method. The method contains an auxiliary matrix D_1 that is restricted to be symmetric. In this note, the auxiliary matrix is allowed to be more general, and it is shown that, with a proper choice of D_1, the new iterative method is still convergent. It is also shown that, with a special choice of D_1, the new iterative method reduces to the well-known (point) accelerated overrelaxation (AOR) method [1]. Hence, the (point) AOR method applied to a positive real system is convergent under a proper choice of the overrelaxation parameters.
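For reference, a minimal sketch of the classical (point) AOR iteration mentioned above, written from its standard splitting x_{k+1} = (D - rL)^{-1}[((1 - ω)D + (ω - r)L + ωU) x_k + ωb]; the mixed-type splitting method itself and its choice of D_1 are not reproduced here, and the parameter values are illustrative only.

```python
import numpy as np

def aor(A, b, r=0.5, omega=0.9, tol=1e-10, max_iter=10_000):
    """Accelerated overrelaxation (AOR) iteration for Ax = b with A = D - L - U
    (D diagonal, L/U strictly lower/upper parts). r and omega are the acceleration
    and overrelaxation parameters; r = omega reduces the scheme to SOR."""
    D = np.diagflat(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    M = D - r * L
    N = (1 - omega) * D + (omega - r) * L + omega * U
    x = np.zeros_like(b)
    for k in range(1, max_iter + 1):
        x_new = np.linalg.solve(M, N @ x + omega * b)
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new, k
        x = x_new
    return x, max_iter

# Positive-real test matrix (A + A^T is symmetric positive definite), as in the paper's setting.
A = np.array([[ 4.0,  1.0, -1.0],
              [-1.0,  4.0,  1.0],
              [ 1.0, -1.0,  4.0]])
b = np.array([3.0, 2.0, 1.0])
x, iters = aor(A, b)
print(x, iters, np.allclose(A @ x, b))
```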

12.
This paper is concerned with the technique of discrete-time noncausal linear periodically time-varying (LPTV) scaling for robust stability analysis and synthesis. It is defined through the lifting treatment of discrete-time systems and naturally leads to a sort of noncausal operation on signals. In the robust stability analysis of linear time-invariant (LTI) systems, it has been shown that even static noncausal LPTV scaling induces frequency-dependent scaling when interpreted in the lifting-free setting. This paper first discusses in detail different aspects of the effectiveness of noncausal LPTV scaling, with the aim of showing its effectiveness in controller synthesis. More precisely, we study the robust performance controller synthesis problem, in which the controllers are allowed to be LPTV. As in the LTI robust performance controller synthesis problem, we tackle the problem with an iterative method without guaranteed convergence to a globally optimal controller. Despite such a design procedure, the closed-loop H∞ performance is expected to improve as the period of the controller is increased, and we discuss how the frequency-domain properties of noncausal LPTV scaling could contribute to such improvement. We demonstrate with a numerical example that an effective LPTV controller can be designed for a class of uncertainties for which the well-known μ-synthesis fails to produce even a robustly stabilizing controller.

13.
ABSTRACT

The NK model has been widely used to explore aspects of natural evolution and complex systems. This paper introduces a modified form of the NK model for exploring distributed control in complex systems such as organisations, social networks, and collective robotics. Initial results show how varying the size and underlying functional structure of a given system affects the performance of different distributed control structures and decision making, including within dynamically formed structures and structures with differing numbers of control nodes.
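For readers unfamiliar with the underlying model, the sketch below implements a standard NK fitness landscape and a simple hill-climber; the paper's distributed-control modification of the model is not reproduced, and the parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_nk(N, K):
    """Random NK landscape: each locus i has K random neighbours and a lookup table
    assigning a fitness contribution to each of the 2^(K+1) local configurations."""
    neighbours = [rng.choice([j for j in range(N) if j != i], size=K, replace=False)
                  for i in range(N)]
    tables = [rng.random(2 ** (K + 1)) for _ in range(N)]
    def fitness(genome):
        total = 0.0
        for i in range(N):
            bits = [genome[i], *(genome[j] for j in neighbours[i])]
            idx = int("".join(map(str, bits)), 2)   # local configuration as a table index
            total += tables[i][idx]
        return total / N
    return fitness

# Hill-climb on an N=12, K=3 landscape (a stand-in for the paper's modified model).
N, K = 12, 3
f = make_nk(N, K)
genome = rng.integers(0, 2, size=N)
for _ in range(200):
    flip = rng.integers(N)
    cand = genome.copy()
    cand[flip] ^= 1                                  # single-bit mutation
    if f(cand) >= f(genome):
        genome = cand
print(genome, round(f(genome), 3))
```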

14.
Context: In the last decade, software development has been characterized by two major approaches: agile software development, which aims to achieve increased velocity and flexibility during the development process, and user-centered design, which places the goals and needs of the system's end-users at the center of software development in order to deliver software with appropriate usability. Hybrid development models, referred to as user-centered agile software development (UCASD) in this article, propose to combine the merits of both approaches in order to design software that is both useful and usable. Objective: This paper aims to capture the current state of the art in UCASD approaches and to derive generic principles from them. More specifically, we investigate the following research question: which principles constitute a user-centered agile software development approach? Method: We conduct a systematic review of the literature on UCASD. Identified works are analyzed using a coding scheme that differentiates four dimensions of UCASD: process, practices, people/social, and technology. Through subsequent synthesis, we derive generic principles of UCASD. Results: We identified and analyzed 83 relevant publications. The analysis resulted in a comprehensive coding system and five principles for UCASD: (1) separate product discovery and product creation, (2) iterative and incremental design and development, (3) parallel interwoven creation tracks, (4) continuous stakeholder involvement, and (5) artifact-mediated communication. Conclusion: Our paper contributes to the software development body of knowledge by (1) providing a broad overview of existing work in the area of UCASD, (2) deriving an analysis framework (in the form of a coding system) for work in this area that goes beyond former classifications, and (3) identifying generic principles of UCASD and associating them with specific practices and processes.

15.
16.
Recent advances in physics-based sound synthesis have unveiled numerous possibilities for creating new musical instruments. Although research on physics-based sound synthesis has been going on for three decades, its higher computational complexity compared with signal modeling has limited its use in real-time applications. This limitation has motivated research on parallel processing architectures that support the physics-based sound synthesis of musical instruments. In this paper, we present analytical results from a design space exploration of many-core processors for the physics-based sound synthesis of plucked-string instruments, including the acoustic guitar, the classical guitar, and the gayageum, a representative Korean plucked-string instrument. We do so by quantitatively evaluating the significance of the samples-per-processing-element (SPE) ratio, i.e., the amount of sample data directly mapped to each processing element (equivalent to varying the number of processing elements for a fixed sample size), on system performance and efficiency using architectural and workload simulations. The effect of the SPE ratio is difficult to analyze because varying it fundamentally affects both hardware and software design. In addition, the optimal SPE ratio is typically not at either extreme of its range, i.e., one sample per processor or one processor for the entire sample set. This paper illustrates the correlation between a fixed problem sample size, the SPE ratio, and the processing element (PE) architecture for a target implementation in 130-nm CMOS technology. Experimental results indicate that an SPE ratio in the range of 5513 to 2756, equivalent to 48 to 96 PEs for the guitars and 96 to 192 PEs for the gayageum, provides the most efficient operation for synthesizing musical sounds sampled at 44.1 kHz, yielding the highest task throughput per unit area and per unit energy. In addition, the synthesized sounds are very similar to the original sounds, and the selected many-core configurations outperform commercial processor architectures, including DSPs, FPGAs, and GPUs, in terms of area efficiency and energy efficiency.
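As background on what a plucked-string physical model computes per output sample, here is a minimal Karplus–Strong synthesis sketch (a classic, simplified plucked-string model); the paper's waveguide models, the gayageum-specific variant, and the mapping of samples to processing elements are not reproduced here.

```python
import numpy as np

def karplus_strong(freq, duration, sample_rate=44_100, damping=0.996):
    """Karplus-Strong plucked-string synthesis: a delay line excited with a noise burst
    and fed back through an averaging low-pass filter."""
    n_samples = int(duration * sample_rate)
    delay = int(sample_rate / freq)                          # delay-line length sets the pitch
    buf = np.random.default_rng(0).uniform(-1, 1, delay)     # the "pluck": white-noise burst
    out = np.empty(n_samples)
    for i in range(n_samples):
        out[i] = buf[i % delay]
        # Two-point average models string damping / energy loss at the bridge.
        buf[i % delay] = damping * 0.5 * (buf[i % delay] + buf[(i + 1) % delay])
    return out

# One second of a 220 Hz pluck at the 44.1 kHz rate used in the paper.
samples = karplus_strong(220.0, 1.0)
print(samples.shape, float(samples[:100].std()))
```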

17.
This paper addresses the design of robust iterative learning controllers for a class of linear discrete-time systems with norm-bounded parameter uncertainties. An iterative learning algorithm with current-cycle feedback is proposed to achieve both robust convergence and robust stability. The synthesis problem of the proposed iterative learning control (ILC) system is reformulated as a γ-suboptimal H-infinity control problem via the linear fractional transformation (LFT). A sufficient condition for the convergence of the ILC algorithm is presented in terms of linear matrix inequalities (LMIs). Furthermore, the linear transfer operators of the ILC algorithm with high convergence speed are obtained using existing convex optimization techniques. Simulation results demonstrate the effectiveness of the proposed method.
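To make the control structure concrete, the sketch below runs a basic P-type ILC update with a current-cycle feedback term on a scalar first-order plant; the fixed gains are illustrative assumptions, whereas the paper synthesizes the learning operators via the LMI/H-infinity formulation.

```python
import numpy as np

# P-type iterative learning control with current-cycle feedback on a scalar plant
# x(t+1) = a x(t) + b u(t), y(t) = x(t).  Update between trials:
#   u_{k+1}(t) = u_k(t) + L e_k(t+1),   applied input: u_k(t) + C e_k(t).
# The gains L and C below are illustrative; the paper designs them via LMIs.
a, b = 0.8, 1.0
T = 50
y_ref = np.sin(np.linspace(0, np.pi, T + 1))      # desired trajectory
L_gain, C_gain = 0.5, 0.3

u = np.zeros(T)
for k in range(30):                               # trials (iterations)
    x = 0.0
    e = np.zeros(T + 1)
    for t in range(T):
        e[t] = y_ref[t] - x
        x = a * x + b * (u[t] + C_gain * e[t])    # current-cycle feedback correction
    e[T] = y_ref[T] - x
    u = u + L_gain * e[1:]                        # learning update over the whole trial
    if k % 10 == 9:
        print(f"trial {k + 1:2d}: max |e| = {np.max(np.abs(e)):.5f}")   # error shrinks trial by trial
```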

18.
Objective: Image denoising is a difficult problem in image processing; the challenge is to preserve image information while removing as much noise as possible. To address this, we propose iHOSVD, an algorithm that combines non-local similarity with higher-order singular value decomposition (HOSVD) and denoises the image iteratively under a mean-squared-error (MSE) criterion. Method: First, non-local similar-patch clustering and higher-order singular value decomposition are used to construct a data-adaptive 3D transform basis and its transform coefficients. Second, the transform coefficients are thresholded and the inverse 3D transform is applied, achieving non-local collaborative filtering. Finally, since a single denoising pass cannot achieve the desired result, an MSE-optimal iterative scheme is used to denoise the image, and we show that this iteration trades off bias against variance so that the MSE is minimized. Results: Experimental results show that the iHOSVD algorithm both removes noise effectively and preserves texture details well. Conclusion: The proposed iHOSVD denoising algorithm combines the ideas of non-local collaborative filtering and data-adaptive denoising. Comparison experiments with three state-of-the-art denoising algorithms (BM3D, NCSR, and PLOW) show that it not only exhibits strong denoising capability but also performs best at preserving texture details, making it suitable for images with strong texture.
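A compact sketch of the core transform step, assuming an idealised group of similar patches; patch matching, aggregation, and the MSE-driven iteration of the full iHOSVD algorithm are omitted.

```python
import numpy as np

def hosvd_denoise_stack(stack, threshold):
    """Denoise a stack of similar patches (h, w, m) by HOSVD: compute factor matrices
    from the mode unfoldings, hard-threshold the core tensor, and transform back
    (one non-local collaborative-filtering step)."""
    Us = []
    for mode in range(3):
        unfold = np.moveaxis(stack, mode, 0).reshape(stack.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfold, full_matrices=False)
        Us.append(U)
    core = np.einsum('abc,ai,bj,ck->ijk', stack, *Us)   # project onto the HOSVD basis
    core *= np.abs(core) > threshold                    # hard thresholding of coefficients
    return np.einsum('ijk,ai,bj,ck->abc', core, *Us)    # inverse 3D transform

# Toy experiment: 30 noisy copies of one 8x8 patch (an idealised "similar group").
rng = np.random.default_rng(0)
clean = np.outer(np.hanning(8), np.hanning(8))
sigma = 0.2
noisy = clean[:, :, None] + sigma * rng.standard_normal((8, 8, 30))
denoised = hosvd_denoise_stack(noisy, threshold=2.7 * sigma)
err_before = np.mean((noisy - clean[:, :, None]) ** 2)
err_after = np.mean((denoised - clean[:, :, None]) ** 2)
print(f"MSE before: {err_before:.4f}  after: {err_after:.4f}")
```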

19.
The notion of rational transduction is a valuable tool for comparing the structures of different languages, in particular context-free languages. The explanation for this is a powerful property of rational transductions with respect to certain iterative pairs [8] and systems of iterative pairs (a notion we define in this paper) in a context-free language. (Intuitively, systems of iterative pairs describe combinations of simultaneous iterative pairs in a context-free language.) This property is the so-called Transfer Theorem, which states: let A and B be two context-free languages and let T be a rational transduction such that T(B) = A; if A has a strict system of iterative pairs σ, then B has a strict system of iterative pairs σ′ of the same type as σ. (This theorem was proved in [8] for iterative pairs; we prove it here for systems of iterative pairs.) The theorem means that any combination of certain iterative pairs in the image language of a rational transduction must appear, in a similar way, in the source language. The main result of this paper is obtained by using this Transfer Theorem: a characterization of context-free generators, i.e., generators of the rational cone or, equivalently [10], of the full AFL of context-free languages.

20.
ABSTRACT

This paper introduces a new electronic system for dropping courses that does not rely on the cumbersome paper-based process traditionally used by educational institutions. The new sub-system of the On-Demand University Services platform (ODUS Plus) eliminates the hierarchical and inefficient paper process, which causes institutional delays. Using both quantitative and qualitative techniques, as well as a correlational methodology, it is shown that withdrawal approval requests have been streamlined, thus improving the system's ability to process such requests. The study outlines the efficacy of ODUS Plus for course withdrawals as used by college and university administrators. The findings suggest that implementing such a system has dual value: it improves efficiency, and it functions as a source of knowledge regarding students' academic preferences.
