Similar Documents
20 similar documents found (search time: 15 ms)
1.
《Applied Soft Computing》2007,7(2):612-625
Digital mammography is one of the most suitable methods for early detection of breast cancer. It uses digital mammograms to find suspicious areas containing benign and malignant microcalcifications. However, it is very difficult to distinguish benign from malignant microcalcifications, which is reflected in the high percentage of unnecessary biopsies performed and in the many deaths caused by late detection or misdiagnosis. A computer-based feature selection and classification system can provide radiologists with a second opinion in the assessment of microcalcifications. This paper proposes a neural-genetic algorithm for feature selection to classify microcalcification patterns in digital mammograms. It aims to develop a step-wise algorithm that finds the best feature set and a suitable neural architecture for microcalcification classification. The results show that the proposed algorithm finds an appropriate feature subset, which also produces a high classification rate.

2.
Integer multiplication, as one of the basic arithmetic functions, has been the focus of several complexity-theoretical investigations. Ordered binary decision diagrams (OBDDs) are one of the most common dynamic data structures for Boolean functions. Among their many areas of application are verification, model checking, computer-aided design, relational algebra, and symbolic graph algorithms. In this paper it is shown that the OBDD complexity of the most significant bit of integer multiplication is exponential, answering an open question posed by Wegener (2000) [18].

3.
Integer multiplication, as one of the basic arithmetic functions, has been the focus of several complexity-theoretical investigations, and ordered binary decision diagrams (OBDDs) are one of the most common dynamic data structures for Boolean functions. Only in 2008 was the question whether the deterministic OBDD complexity of the most significant bit of integer multiplication is exponential answered affirmatively. Since probabilistic methods have proved useful in almost all areas of computer science, one may ask whether randomization can help to represent the most significant bit of integer multiplication in smaller size. Here, it is proved that the randomized OBDD complexity is also exponential.

4.
Envelopes, which capture the overall variation of the signal amplitude, are essential intermediates in many signal decomposition methods. The phenomena of undershoot and overshoot introduce errors into envelope estimation and produce unexpected signal decomposition components. In this paper, an accurate envelope estimation method, called the empirical optimal envelope (EOE), is proposed and applied to the local mean decomposition (LMD). First, an envelope-distance indicator is defined to describe the features of the ideal envelope. Using this indicator, an iterative algorithm is designed to approximate the tangency points, which are then interpolated, instead of the extreme points, to realize the EOE. Then, the EOE is integrated with the LMD, and two interpolation functions, the cubic spline and the piecewise cubic Hermite interpolating polynomial, are combined to improve the efficiency and convergence of signal decomposition. Finally, the proposed method is verified on simulated and real signals.
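The role of interpolation in envelope estimation can be illustrated with a minimal sketch (this is the classical extrema-interpolation approach that the EOE improves upon, not the EOE algorithm itself; signal and function names are illustrative). A cubic spline through the maxima can overshoot between knots, while the shape-preserving PCHIP cannot, which is precisely the kind of error the tangency-point iteration targets:

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

def extrema_envelope(t, x, kind="spline"):
    """Upper envelope by interpolating the local maxima of x(t)."""
    # indices of interior local maxima, padded with the endpoints
    idx = [i for i in range(1, len(x) - 1) if x[i] >= x[i - 1] and x[i] >= x[i + 1]]
    idx = [0] + idx + [len(x) - 1]
    interp = CubicSpline if kind == "spline" else PchipInterpolator
    return interp(t[idx], x[idx])(t)

# Amplitude-modulated test signal
t = np.linspace(0.0, 1.0, 400)
x = np.sin(2 * np.pi * 5 * t) * (1.0 + 0.3 * np.sin(2 * np.pi * t))
env_spline = extrema_envelope(t, x, "spline")   # may overshoot between maxima
env_pchip = extrema_envelope(t, x, "pchip")     # shape-preserving, no overshoot
```

Both envelopes pass exactly through the maxima; they differ only between them, which is where overshoot-driven decomposition artifacts arise.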

5.
In this note, we investigate different concepts of nonlinear identifiability in the generic sense, working in the linear algebraic framework. Necessary and sufficient conditions are found for geometrical identifiability, algebraic identifiability, and identifiability with known initial conditions, and the relationships between the different concepts are characterized. Constructive procedures are worked out for both generic geometrical and algebraic identifiability of nonlinear systems. As an application of the theory developed, we study the identifiability properties of a four-dimensional model of HIV/AIDS. The questions answered in this study include the minimal number of measurements of the variables needed for a complete determination of all parameters and the best period of time over which to make such measurements. This information will be useful in formulating guidelines for clinical practice.

6.
Projection of transmembrane helices using a Uniform B-spline Algorithm is a tool for visualizing interactions between helices in membrane proteins. It allows the user to generate projections of 3D helices, whatever their deviations from a canonical helix might be. Combined with adapted coloring schemes, it facilitates the comprehension of helix-helix interactions. Examples of transmembrane proteins were chosen to illustrate the advantages of this method. In the glycophorin A dimer we can easily appreciate the structural features behind homodimerisation. Using the structure of the fumarate reductase, we analyze the contact surfaces inside a helical bundle, and with structures from a molecular dynamics simulation we see how modifications in structure and electrostatics relate to their interaction. We propose this tool as an aid to the visualization and analysis of transmembrane helix surfaces and properties.

7.
Hidden Markov models (HMMs) are often used for biological sequence annotation. Each sequence feature is represented by a collection of states with the same label. In annotating a new sequence, we seek the sequence of labels that has the highest probability. Computing this most probable annotation was shown to be NP-hard by Lyngsø and Pedersen [R.B. Lyngsø, C.N.S. Pedersen, The consensus string problem and the complexity of comparing hidden Markov models, J. Comput. System Sci. 65 (3) (2002) 545–569]. We improve their result by showing that the problem is NP-hard even for a specific HMM, and we present efficient algorithms to compute the most probable annotation for a large class of HMMs, including abstractions of models previously used for transmembrane protein topology prediction and coding region detection. We also present a small experiment showing that the maximum probability annotation is more accurate than the labeling that results from simpler heuristics.
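The "simpler heuristic" alluded to above can be sketched concretely (a minimal illustration with made-up model parameters, not the paper's algorithm): compute the single most probable state path with Viterbi and read off its labels. The most probable annotation instead sums probability over all state paths sharing a label sequence, which is what makes the problem hard in general.

```python
import math

def viterbi_labels(obs, states, start, trans, emit, label):
    """Most probable STATE path, projected to labels (the simple heuristic)."""
    V = [{s: math.log(start[s] * emit[s][obs[0]]) for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            prev = max(states, key=lambda p: V[-1][p] + math.log(trans[p][s]))
            col[s] = V[-1][prev] + math.log(trans[prev][s] * emit[s][o])
            ptr[s] = prev
        V.append(col)
        back.append(ptr)
    s = max(states, key=lambda q: V[-1][q])
    path = [s]
    for ptr in reversed(back):
        s = ptr[s]
        path.append(s)
    path.reverse()
    return [label[q] for q in path]

# Toy two-state model: I = "inside" (emits mostly 'a'), M = "membrane" ('b').
states = ["I", "M"]
start = {"I": 0.9, "M": 0.1}
trans = {"I": {"I": 0.8, "M": 0.2}, "M": {"I": 0.2, "M": 0.8}}
emit = {"I": {"a": 0.9, "b": 0.1}, "M": {"a": 0.1, "b": 0.9}}
label = {"I": "i", "M": "m"}
annotation = viterbi_labels(list("aabba"), states, start, trans, emit, label)
# With several states sharing one label, this path-based labeling can differ
# from (and be less accurate than) the maximum probability annotation.
```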

8.
This paper applies a multiobjective goal programming (GP) model to define the profile of the most profitable insurers, focusing on 14 firm-decision variables and considering different scenarios resulting from exogenous changes in interest rates and GDP-per-capita growth. We use a detailed database of Spanish non-life insurers over the period 2003–2012, taking into account two dimensions of insurers' results: underwriting results and investment results. A prior econometric analysis identifies the relevant relations among the variables, and a GP model is then formulated on the basis of the relationships obtained. The model is tested in a robust environment, allowing changes in the coefficients of the objective functions and covering several scenarios regarding crisis/noncrisis situations and changes in interest rates. We find that having the stock organizational form, being an unaffiliated single company, and maintaining low levels of investment risk, leverage, and regulatory solvency are recommended for result optimization. Growth and reinsurance utilization are not advisable for optimizing results, whereas size should be emphasized even more in periods of instability and when interest rates increase. The results also show that the optimal level of the diversification/specialization strategy depends on economic conditions: more specialization is advisable as negative changes in interest rates increase, yet the optimal values of the diversification variable are higher in crisis scenarios than in the corresponding noncrisis scenarios, suggesting that diversification creates value in crisis. Further sensitivity analyses confirm the soundness of these conclusions.

9.
The efficient implementation of a neural-network-based strategy for the online adaptive control of complex dynamical systems composed of several interconnected (possibly nonlinear) subsystems hinges on the rapid convergence of the training scheme used for learning the system dynamics. For example, to achieve satisfactory control of a multijointed robotic manipulator during high-speed trajectory tracking tasks, the highly nonlinear and coupled dynamics, together with parameter variations, necessitate fast updating of the control actions. To meet this requirement, a multilayer neural network structure that includes dynamical nodes in the hidden layer is proposed, and a supervised learning scheme employing a simple distributed updating rule is used for online identification and decentralized adaptive control. Important characteristics of the resulting control scheme are discussed, and a quantitative evaluation of its performance on the above illustrative example is given.

10.
11.
A logical design that describes the overall structure of proteins, together with a more detailed design describing secondary and some supersecondary structures, has been constructed using the computer-aided software engineering (CASE) tool Auto-mate. Auto-mate embodies the philosophy of the Structured Systems Analysis and Design Method (SSADM), which enables the logical design of computer systems. Our design will facilitate the building of large information systems, such as databases and knowledge bases in the field of protein structure, by deriving system requirements from our logical model before producing the final physical system. In addition, the study has highlighted the ease of employing SSADM as a formalism for transferring concepts from an expert into a design for a knowledge-based system that can be implemented on a computer (the knowledge-engineering exercise). We demonstrate how SSADM techniques may be extended to model the constituent Prolog rules, which facilitates the integration of the logical system design model with the derived knowledge-based system.

12.
《Automatica》1987,23(4):497-507
The place of parameter-bounding algorithms in identification methodology is discussed. Such algorithms provide a radical alternative to the computation of parameter point estimates and covariances: instead of a p.d.f., or a mean and covariance, for the noise and prior parameter estimates, they require bounds in an essentially deterministic model formulation. The paper is a preliminary review of some standard tasks, namely experiment design, testing for outliers, toleranced prediction, and worst-case control design, in the context of parameter-bounding identification. The computational requirements of these tasks are examined, and the limitations of existing parameter-bounding algorithms are noted.

13.
This paper presents several algorithms for projecting points so as to give the most uniform distribution. Given n points in the plane and an integer b, the problem is to find an optimal angle of b equally spaced parallel lines such that the points are distributed most uniformly over the buckets (regions bounded by two consecutive lines). An algorithm is known only in the tight case, in which the two extreme lines are the supporting lines of the point set; it requires O(bn^2 log n) time and O(n^2 + bn) space to find an optimal solution. In this paper we improve the algorithm in both time and space, based on a duality transformation. Two linear-space algorithms are presented. One runs in O(n^2 + K log n + bn) time, where K is the number of intersections in the transformed plane; K is shown to be O(n^2 + bn) based on a new counting scheme. The other algorithm is advantageous if b < n. It performs a simplex range search in each slab to enumerate all the lines that intersect bucket lines, and runs in O(b^0.610 n^1.695 + K log n) time. It is also shown that the problem can be solved in polynomial time even in the relaxed case. Its one-dimensional analogue is especially related to the design of an optimal hash function for a static set of keys. This work was supported in part by a Grant-in-Aid for Scientific Research of the Ministry of Education, Science, and Culture of Japan.
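The objective being optimized can be made concrete with a minimal sketch of the evaluation step for a single angle (illustrative code only; the paper's contribution is efficiently searching over candidate angles via duality, which this sketch does not attempt):

```python
import math

def bucket_counts(points, theta, b):
    """Project points onto direction theta and count them in b equal-width
    buckets between the two extreme projections (the 'tight' case, where
    the outermost lines support the point set)."""
    proj = [x * math.cos(theta) + y * math.sin(theta) for x, y in points]
    lo, hi = min(proj), max(proj)
    width = (hi - lo) / b or 1.0          # guard against all-equal projections
    counts = [0] * b
    for p in proj:
        counts[min(int((p - lo) / width), b - 1)] += 1
    return counts

def spread(counts):
    """Crude uniformity measure: 0 means perfectly even buckets."""
    return max(counts) - min(counts)

# Four collinear points are spread perfectly evenly when projected onto
# the x-axis (theta = 0).
pts = [(0, 0), (1, 0), (2, 0), (3, 0)]
```

An optimal angle minimizes a non-uniformity measure such as `spread` over all angles; the duality transformation lets the algorithms enumerate only the angles at which some bucket count can change.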

14.
An identification algorithm is presented for a pollution source distributed along a river. The water quality is described by a pair of first-order partial differential equations or by a parabolic equation. After transformation, the problem results in a Fredholm integral equation of the first kind, which is ill-posed in the sense of Hadamard: the solution does not depend continuously on the observed data. Applying the regularization method proposed by Tikhonov (1963) yields a stable algorithm for the identification problem. Simulation examples illustrate the applicability of the method to environmental control systems based on convective or dispersive phenomena.
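The stabilizing effect of Tikhonov regularization on a discretized first-kind Fredholm equation can be sketched as follows (a toy problem with a made-up Gaussian smoothing kernel, not the river-pollution model itself):

```python
import numpy as np

def tikhonov_solve(K, g, alpha):
    """Zeroth-order Tikhonov: minimize ||K f - g||^2 + alpha ||f||^2
    via the normal equations (K^T K + alpha I) f = K^T g."""
    n = K.shape[1]
    return np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ g)

# Discretized Fredholm problem of the first kind: a smoothing kernel K
# blurs the source profile f_true; the observed data g carry a little noise.
n = 50
s = np.linspace(0.0, 1.0, n)
K = np.exp(-50.0 * (s[:, None] - s[None, :]) ** 2)
K /= K.sum(axis=1, keepdims=True)        # row-normalized smoothing kernel
f_true = np.sin(np.pi * s)
rng = np.random.default_rng(0)
g = K @ f_true + 1e-4 * rng.standard_normal(n)

# The penalty alpha trades data fit against solution norm; without it,
# tiny singular values of K amplify the noise arbitrarily.
f_reg = tikhonov_solve(K, g, alpha=1e-4)
```

Choosing `alpha` (e.g. by the discrepancy principle) is itself a nontrivial step; the value here is purely illustrative.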

15.
Hong Shen 《Acta Informatica》1999,36(5):405-424
For a connected, undirected, and weighted graph G = (V,E), the problem of finding the k most vital edges of G with respect to the minimum spanning tree is to find the k edges of G whose removal causes the greatest weight increase in the minimum spanning tree of the remaining graph. This problem is known to be NP-hard for arbitrary k. In this paper, we first describe a simple exact algorithm for this problem, based on the approach of edge replacement in the minimum spanning tree of G. Next we present polynomial-time randomized algorithms that produce optimal and approximate solutions to this problem. For and , our algorithm producing an optimal solution has a time complexity of O(mn) with probability of success at least , which is 0.90 for and asymptotically 1 when k goes to infinity. The algorithm producing an approximate solution runs in time with probability of success at least , which is 0.998 for , and produces a solution within a factor of 2 of the optimal one. Finally we show that both of our randomized algorithms can be easily parallelized. On a CREW PRAM, the first algorithm runs in O(n) time using processors, and the second algorithm runs in time using mn/log n processors and hence is in RNC. Received: 30 October 1995 / 5 November 1998
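The problem definition can be made concrete with a brute-force sketch (illustrative only: it enumerates all k-subsets, which is exponential in k, consistent with the NP-hardness; the paper's algorithms are far more efficient):

```python
from itertools import combinations

def mst_weight(n, edges, banned=frozenset()):
    """Kruskal's algorithm; edges are (weight, u, v) tuples.
    Returns the MST weight, or None if the remaining graph is disconnected."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    total, used = 0, 0
    for w, u, v in sorted(e for e in edges if e not in banned):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            total += w
            used += 1
    return total if used == n - 1 else None

def most_vital_edges(n, edges, k=1):
    """Brute force: the k-subset whose removal maximizes the new MST weight,
    among subsets that keep the graph connected."""
    base = mst_weight(n, edges)
    best, best_set = base, ()
    for S in combinations(edges, k):
        w = mst_weight(n, edges, frozenset(S))
        if w is not None and w > best:
            best, best_set = w, S
    return best_set, best - base

# Small example: removing the weight-10 edge forces the expensive
# weight-20 replacement into the MST.
edges = [(1, 0, 1), (2, 1, 2), (3, 0, 2), (10, 2, 3), (20, 1, 3)]
```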

16.
A key problem in optimal input design is that the solution depends on the very system parameters that are to be identified. In this contribution we provide formal results on the convergence and asymptotic optimality of an adaptive input design method based on the certainty equivalence principle: at each time step an optimal input design problem is solved exactly using the current parameter estimate, and one sample of this input is applied to the system. The results apply to stable ARX systems with the input restricted to be generated by white noise filtered through a finite impulse response filter, or by a binary signal obtained from the latter by a static nonlinearity.

17.
To address the large amount of noise in protein-protein interaction (PPI) networks and the limited accuracy of existing essential-protein identification methods, a method based on united centrality and modularity (UCM) is proposed for identifying essential proteins. First, protein topological data and biological data are integrated to construct a multi-attribute network, reducing the influence of noise in the PPI network. Second, based on the topological and biological characteristics of essential proteins, an algorithm is proposed for mining dense, highly co-expressed key modules; it extracts highly reliable key modules from the multi-attribute network so as to reinforce, from multiple perspectives, the importance of essential proteins within modules. Finally, the centrality and modularity characteristics of proteins are integrated into an essentiality measurement scheme (essential integration strategy, EIS) to improve the accuracy of identifying highly essential proteins. The UCM method was validated on the DIP dataset; experimental results show that, compared with ten other essential-protein identification methods, it achieves better identification performance and identifies more essential proteins.
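The centrality side of such methods can be sketched with the simplest topological score (a minimal illustration with made-up protein names; UCM combines centrality with module-level and biological evidence, which this sketch omits):

```python
def degree_centrality(ppi):
    """Normalized degree centrality on a PPI network given as
    {protein: set of interacting proteins}."""
    n = len(ppi)
    return {p: len(nbrs) / (n - 1) for p, nbrs in ppi.items()}

# Tiny illustrative network: p1 is a hub interacting with all others.
ppi = {
    "p1": {"p2", "p3", "p4"},
    "p2": {"p1"},
    "p3": {"p1", "p4"},
    "p4": {"p1", "p3"},
}
scores = degree_centrality(ppi)
top = max(scores, key=scores.get)   # highest-centrality candidate
```

Hub proteins tend to score highly under purely topological measures, but noise in PPI data is exactly why module and co-expression information is folded in.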

18.
Load balancing a distributed/parallel system consists of allocating work (load) to its processors so that each processes approximately the same amount of work, or an amount proportional to its computational power. In this paper, we present a new distributed algorithm that implements the Most to Least Loaded (M2LL) policy. This policy selects pairs of processors that will exchange load, taking into account currently broken edges as well as the current load distribution in the system. The M2LL policy fixes the pairs of neighboring processors by selecting, with priority, the most loaded and the least loaded processor of each neighborhood. Our first and main result is that the M2LL distributed implementation terminates after at most (n/2)·d_t iterations, where n and d_t are respectively the number of nodes and the degree of the system at time t. We then present a performance comparison between Generalized Adaptive Exchange (GAE), which uses M2LL, and the Relaxed First Order Scheme (RFOS), two load balancing algorithms for dynamic networks in which only link failures are considered. The comparison is carried out on a dedicated test bed that we designed and implemented for this purpose. Our second important result is that, although it generates more communication, the GAE algorithm with the M2LL policy balances the system load faster than RFOS. In addition, GAE with M2LL achieves a more stable balanced state than RFOS and scales well.
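The pairing rule at the heart of M2LL can be sketched centrally (an illustrative sequential sketch with made-up node names; the paper's algorithm is distributed and copes with edges breaking over time):

```python
def m2ll_pairs(load, neighbors):
    """One round of Most-to-Least-Loaded pairing: repeatedly match the most
    loaded unmatched node with its least loaded unmatched neighbor.
    `neighbors` lists only the currently working edges."""
    matched, pairs = set(), []
    for u in sorted(load, key=load.get, reverse=True):
        if u in matched:
            continue
        cand = [v for v in neighbors[u] if v not in matched and load[v] < load[u]]
        if cand:
            v = min(cand, key=load.get)
            pairs.append((u, v))
            matched.update((u, v))
    return pairs

# Four nodes on a ring: the heaviest node "a" pairs with its lightest
# neighbor "d", then "c" pairs with "b".
load = {"a": 10, "b": 2, "c": 7, "d": 1}
ring = {"a": ["b", "d"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c", "a"]}
```

Each selected pair then exchanges load along its edge; a broken edge simply disappears from the neighbor lists, which is how the policy accommodates dynamic networks.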

19.
Models are frequently used in support of environmental management. The outcomes of these models, however, rarely show a perfect resemblance to real-world system behavior. This is due to uncertainties introduced while abstracting information about the system for inclusion in the model. To provide decision makers with realistic information about model outcomes, uncertainty analysis is indispensable. Because of the multiplicity of frameworks available for uncertainty analysis, the outcomes of such analyses are rarely comparable. In this paper a method for the structured identification and classification of uncertainties in the application of environmental models is presented. We adapted an existing uncertainty framework to enhance objectivity in the uncertainty identification process. Two case studies demonstrate how it helps to obtain an overview of the unique uncertainties encountered in a model. The presented method improves the comparability of uncertainty analyses across model studies and leads to a coherent overview of the uncertainties affecting model outcomes.

20.
