Similar Documents
20 similar documents found (search time: 125 ms)
1.
Abstract The knapsack problem is well known to be NP-complete. Due to its importance in cryptosystems and in number theory, much effort has been made over the past two decades to find techniques that could lead to practical algorithms with reasonable running time. This paper proposes a new parallel algorithm for the knapsack problem in which the optimal merging algorithm is adopted. The proposed algorithm is based on an EREW-SIMD machine with shared memory. It is proved that the algorithm is optimal and is the first memory-conflict-free algorithm for the knapsack problem. Performance comparisons show that it improves on previous research.
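For context, the sequential dynamic program that parallel knapsack schemes build on can be sketched as follows. This is a minimal serial 0/1 knapsack solver, not the paper's EREW-SIMD optimal-merging algorithm; all names and data are illustrative.

```python
def knapsack(values, weights, capacity):
    """Classic 0/1 knapsack dynamic program: O(n * capacity) time."""
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # Traverse capacities downward so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

print(knapsack([60, 100, 120], [10, 20, 30], 50))  # 220
```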

2.
The Web comprises voluminous rich learning content. The volume of ever-growing learning resources, however, leads to the problem of information overload. A large number of irrelevant search results generated by search engines based on keyword-matching techniques further aggravates the problem. A learner in such a scenario needs semantically matched learning resources as search results. Keeping in view the volume of content and the significance of semantic knowledge, our paper proposes a multi-threaded semantic focused crawler (SFC) specially designed and implemented to crawl the WWW for educational learning content. The proposed SFC utilizes domain ontology to expand a topic term and a set of seed URLs to initiate the crawl. The results obtained by multiple iterations of the crawl on various topics are shown and compared with the results obtained by executing an open-source crawler on a similar dataset. The results are evaluated using Semantic Similarity, a vector space model based metric, and the harvest ratio.
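The core of any focused crawler is a frontier ordered by topical relevance. A minimal sketch of that idea, using bag-of-words cosine similarity as a stand-in for the paper's ontology-based Semantic Similarity metric (URLs and page texts are made up for illustration):

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity between bag-of-words term vectors."""
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Frontier ordered by relevance to the (ontology-expanded) topic term.
topic = "machine learning tutorial course"
pages = {
    "http://example.org/ml-course": "machine learning course with tutorial notes",
    "http://example.org/recipes": "cooking recipes and kitchen tips",
}
frontier = sorted(pages, key=lambda u: cosine_similarity(topic, pages[u]), reverse=True)
print(frontier[0])  # the on-topic URL is crawled first
```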

3.
Life science research aims to continuously improve the quality and standard of human life. One of the major challenges in this area is to maintain food safety and security. A number of image processing techniques have been used to investigate the quality of food products. In this paper, we propose a new algorithm to effectively segment connected grains so that each of them can be inspected in a later processing stage. One family of existing segmentation methods is based on the idea of watersheding, and it has shown promising results in practice. However, due to the over-segmentation issue, this technique performs poorly in various settings, such as inhomogeneous backgrounds and connected targets. To solve this problem, we present a combination of two classical techniques. In the first step, a mean shift filter is used to eliminate the inhomogeneous background, where entropy serves as the convergence criterion. Secondly, a color gradient algorithm is used to detect the most significant edges, and a marker-based watershed transform is applied to segment cluttered objects out of the previous processing stages. The proposed framework is capable of balancing execution time, usability, efficiency, and segmentation outcome in analyzing ring die pellets. The experimental results demonstrate that the proposed approach is effective and robust.
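The entropy-based stopping rule mentioned above can be illustrated with Shannon entropy over a grayscale histogram: the iterative filter stops once the image's entropy settles. A toy sketch (the mean shift filter itself is not reproduced; pixel lists stand in for images):

```python
import math
from collections import Counter

def shannon_entropy(pixels):
    """Shannon entropy (bits) of a grayscale intensity histogram."""
    hist = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in hist.values())

def converged(prev_img, cur_img, tol=1e-3):
    """Stop the iterative filter once the entropy change falls below tol."""
    return abs(shannon_entropy(prev_img) - shannon_entropy(cur_img)) < tol

print(shannon_entropy([0, 0, 255, 255]))  # 1.0 for a two-level 50/50 image
```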

4.
One view of finding a personalized solution of reduct in an information system is grounded on the viewpoint that an attribute order can serve as a kind of semantic representation of user requirements. Thus the problem of finding personalized solutions can be transformed into computing the reduct on an attribute order. The second attribute theorem describes the relationship between the set of attribute orders and the set of reducts, and can be used to transform the problem of searching for solutions that meet user requirements into the problem of modifying a reduct based on a given attribute order. An algorithm based on the second attribute theorem follows directly, with computation on the discernibility matrix. Its time complexity is O(n^2 × m), where n is the number of objects and m the number of attributes of an information system. This paper presents another effective second attribute algorithm that facilitates the use of the second attribute theorem, with computation on the tree expression of an information system. The time complexity of the new algorithm is linear in n. This algorithm is proved to be equivalent to the algorithm on the discernibility matrix.
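The discernibility matrix the O(n^2 × m) algorithm operates on is straightforward to construct: entry (i, j) is the set of attributes on which objects i and j differ. A sketch on a made-up toy information system (the second attribute theorem and the tree-expression algorithm are not reproduced here):

```python
def discernibility_matrix(objects, attributes):
    """M[i][j] = set of attributes on which objects i and j differ."""
    n = len(objects)
    return [[{a for a in attributes if objects[i][a] != objects[j][a]}
             for j in range(n)] for i in range(n)]

# Toy information system: three objects, two attributes.
objs = [{"color": "red", "size": "big"},
        {"color": "red", "size": "small"},
        {"color": "blue", "size": "big"}]
M = discernibility_matrix(objs, ["color", "size"])
print(M[0][1], M[0][2])  # {'size'} {'color'}
```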

5.
Interest in inverse reinforcement learning (IRL) has recently increased, that is, interest in the problem of recovering the reward function underlying a Markov decision process (MDP) given the dynamics of the system and the behavior of an expert. This paper deals with an incremental approach to online IRL. First, the convergence property of the incremental method for the IRL problem was investigated, and bounds on both the number of mistakes during the learning process and the regret were provided with a detailed proof. Then an online algorithm based on incremental error correction was derived to deal with the IRL problem. The key idea is to add an increment to the current reward estimate each time an action mismatch occurs, so that the estimate approaches a target optimal value. The proposed method was tested in a driving simulation experiment and found to be able to efficiently recover an adequate reward function.
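The "add an increment on each action mismatch" idea resembles a perceptron-style update. A schematic sketch, assuming a reward that is linear in hand-made action features (this is an illustration of the update rule's flavor, not the paper's algorithm or its bounds):

```python
def incremental_irl_update(w, feat_expert, feat_greedy, step=0.1):
    """On an action mismatch, nudge the reward weights toward the
    features of the expert's action and away from the greedy one."""
    return [wi + step * (fe - fg) for wi, fe, fg in zip(w, feat_expert, feat_greedy)]

w = [0.0, 0.0]
# Expert chose an action with features [1, 0]; our estimate preferred [0, 1].
w = incremental_irl_update(w, [1, 0], [0, 1])
print(w)  # [0.1, -0.1]
```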

6.
Abe et al. proposed a methodology for ring signature (RS) design in 2002 and showed how to construct an RS with a mixture of public keys based on factorization and/or discrete logarithms. Their methodology cannot be applied to knowledge signatures (KS) that use the Fiat-Shamir heuristic and cut-and-choose techniques, for instance the Goldreich KS. This paper presents a more general construction of RS from various public keys, provided there exists a secure signature using such a public key and an efficient algorithm to forge the relation to be checked if the challenges in such a signature are known in advance. The paper shows how to construct an RS based on the graph isomorphism problem (GIP). Although it is unknown whether or not GIP is NP-complete, it is not known to be efficiently solvable even in the quantum computation model. Hence, the scheme has a better security basis, and it is plausibly secure against quantum adversaries.
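The relation underlying a GIP-based scheme is easy to verify even though finding it is believed hard: given a candidate permutation, checking that it maps one graph onto another is linear in the edge count. A minimal verification sketch (the signature construction itself is not reproduced; graphs and permutations are illustrative):

```python
def is_isomorphism(perm, edges_a, edges_b):
    """Check that perm maps graph A's edge set exactly onto graph B's."""
    mapped = {frozenset((perm[u], perm[v])) for u, v in edges_a}
    return mapped == {frozenset(e) for e in edges_b}

# Triangle 0-1-2 relabelled by the permutation 0->1, 1->2, 2->0.
A = [(0, 1), (1, 2), (2, 0)]
B = [(1, 2), (2, 0), (0, 1)]
print(is_isomorphism({0: 1, 1: 2, 2: 0}, A, B))  # True
```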

7.
Fault localization techniques were originally proposed to assist in manual debugging, generally by producing a ranked list of suspicious locations. With the increasing popularity of automated program repair, fault localization techniques have been introduced to effectively reduce the search space of automated program repair. Unlike developers, who mainly focus on the rank information, current automated program repair has two strategies for using the fault localization information: the suspiciousness-first algorithm (SFA), based on the suspiciousness accuracy, and the rank-first algorithm (RFA), relying on the rank accuracy. However, although the two different usages are widely adopted by current automated program repair and may lead to different repair results, little is known about the impacts of the two strategies on automated program repair. In this paper we empirically compare the performance of SFA and RFA in the context of automated program repair. Specifically, we implement the two strategies and six well-studied fault localization techniques in four state-of-the-art automated program repair tools, and then use these tools to perform repair experiments on 60 real-world bugs from Defects4J. Our study presents a number of interesting findings: RFA outperforms SFA in 70.02% of cases when measured by the number of candidate patches generated before a valid patch is found (NCP), while SFA performs better in parallel repair and patch diversity; the performance of SFA can be improved by increasing the suspiciousness accuracy of fault localization techniques; finally, we use SimFix, which deploys SFA, to successfully repair four extra Defects4J bugs that cannot be repaired by SimFix originally using RFA. These observations provide a new perspective for future research on the usage and improvement of fault localization in automated program repair.
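The SFA/RFA distinction can be made concrete with any spectrum-based formula. The sketch below uses Ochiai (a commonly studied suspiciousness metric, though the six techniques in the study are not named here) on made-up coverage data: SFA would feed the raw scores to the repair loop, while RFA keeps only the rank order.

```python
import math

def ochiai(failed_cov, passed_cov, total_failed):
    """Ochiai suspiciousness: ef / sqrt(total_failed * (ef + ep))."""
    denom = math.sqrt(total_failed * (failed_cov + passed_cov))
    return failed_cov / denom if denom else 0.0

# (statement, #failing tests covering it, #passing tests covering it)
coverage = [("s1", 2, 0), ("s2", 1, 3), ("s3", 0, 4)]
total_failed = 2
scores = {s: ochiai(f, p, total_failed) for s, f, p in coverage}
# SFA consumes the raw scores; RFA consumes only this ordering.
rank = sorted(scores, key=scores.get, reverse=True)
print(rank[0])  # 's1' -- covered by all failing and no passing tests
```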

8.
Facial Feature Extraction Method Based on Coefficients of Variances   (cited 1 time: 0 self-citations, 1 citation by others)
Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are two popular feature extraction techniques in the statistical pattern recognition field. Due to the small sample size problem, LDA cannot be directly applied to appearance-based face recognition tasks. As a consequence, many LDA-based facial feature extraction techniques have been proposed to deal with this problem. The Nullspace Method is one of the most effective among them. It tries to find a set of discriminant vectors that maximize the between-class scatter in the null space of the within-class scatter matrix. The calculation of its discriminant vectors involves performing singular value decomposition on a high-dimensional matrix, which is generally memory- and time-consuming. Borrowing the key idea of the Nullspace Method and the concept of the coefficient of variation in statistical analysis, we present a novel facial feature extraction method, Discriminant based on Coefficient of Variance (DCV), in this paper. Experimental results on the FERET and AR face image databases demonstrate that DCV is a promising technique in comparison with Eigenfaces, the Nullspace Method, and other state-of-the-art facial feature extraction methods.
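The statistic DCV borrows is simple to compute: the coefficient of variation (standard deviation divided by mean) measures how much a feature varies relative to its scale. A minimal sketch on made-up feature samples (not the DCV method itself, which applies the idea within the null space of the within-class scatter):

```python
import math

def coefficient_of_variation(xs):
    """CV = population standard deviation / mean."""
    m = sum(xs) / len(xs)
    sd = math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))
    return sd / m if m else float("inf")

features = {"f1": [10.0, 10.0, 10.0], "f2": [5.0, 10.0, 15.0]}
ranked = sorted(features, key=lambda k: coefficient_of_variation(features[k]),
                reverse=True)
print(ranked)  # ['f2', 'f1'] -- f2 varies more relative to its mean
```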

9.
Fault-Tolerant Tree-Based Multicasting in Mesh Multicomputers   (cited 1 time: 1 self-citation, 0 citations by others)
We propose a fault-tolerant tree-based multicast algorithm for 2-dimensional (2-D) meshes based on the concept of the extended safety level, a vector associated with each node that captures fault information in the neighborhood. In this approach each destination is reached through a minimum number of hops. In order to minimize the total number of traffic steps, three heuristic strategies are proposed. This approach can be easily implemented by pipelined circuit switching (PCS). A simulation study is conducted to measure the total number of traffic steps under different strategies. Our approach is the first attempt to address the fault-tolerant tree-based multicast problem in 2-D meshes based on limited global information, with a simple model and succinct information.
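In a fault-free 2-D mesh, "a minimum number of hops" means the path length equals the Manhattan distance, as in dimension-ordered (XY) routing. A fault-free baseline sketch for orientation (the extended safety level and the fault-avoiding strategies are not reproduced):

```python
def xy_route(src, dst):
    """Dimension-ordered (XY) route in a 2-D mesh: correct x first, then y.
    Every hop is productive, so path length equals the Manhattan distance."""
    (x, y), path = src, [src]
    while x != dst[0]:
        x += 1 if dst[0] > x else -1
        path.append((x, y))
    while y != dst[1]:
        y += 1 if dst[1] > y else -1
        path.append((x, y))
    return path

p = xy_route((0, 0), (2, 1))
print(len(p) - 1)  # 3 hops = |2-0| + |1-0|
```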

10.
Geometry-based block partitioning (GBP) has been shown to achieve better performance than the tree-structure-based block partitioning (TSBP) of H.264. However, the residual blocks of GBP mode after motion compensation still present some non-vertical/non-horizontal orientations, and the conventional discrete cosine transform (DCT) may generate many high-frequency coefficients. To solve this problem, in this paper we propose a video coding approach using GBP and reordering DCT (RDCT) techniques. In the proposed approach, GBP is first applied to partition the macroblocks. Then, before performing the DCT, a reordering operation is used to adjust the pixel positions of the residual macroblocks based on the partition information. In this way, the partition information of GBP can be used to represent the reordering information of RDCT, and the bitrate can be reduced. Experimental results show that, compared to H.264/AVC, the proposed method achieves on average 6.38% and 5.69% bitrate reductions at low and high bitrates, respectively.

11.
In high-level synthesis of VLSI circuits, good lower bound prediction can efficiently narrow down the large space of possible designs. Previous approaches predict the lower bound by relaxing or even ignoring the precedence constraints of the data flow graph (DFG), which results in an inaccurate lower bound. Loop folding and conditional branches were also not considered. In this paper, a new stepwise refinement algorithm is proposed, which takes the precedence constraints of the DFG into consideration to estimate the lower bound of hardware resources under time constraints. Processing techniques to handle multi-cycle operations, chaining, and pipelining, as well as loop folding and mutual exclusion among conditional branches, are also incorporated in the algorithm. Experimental results show that the algorithm can produce a very tight, close-to-optimal lower bound in reasonable computation time.

12.
This paper addresses the design problem of robust iterative learning controllers for a class of linear discrete-time systems with norm-bounded parameter uncertainties. An iterative learning algorithm with current-cycle feedback is proposed to achieve both robust convergence and robust stability. The synthesis problem of the proposed iterative learning control (ILC) system is reformulated as a γ-suboptimal H-infinity control problem via the linear fractional transformation (LFT). A sufficient condition for the convergence of the ILC algorithm is presented in terms of linear matrix inequalities (LMIs). Furthermore, the linear transfer operators of the ILC algorithm with high convergence speed are obtained by using existing convex optimization techniques. The simulation results demonstrate the effectiveness of the proposed method.
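The basic ILC mechanism the paper makes robust can be seen in a few lines: repeat a task, and after each trial correct the input from the tracking error. A schematic P-type update on a static scalar plant with invented gains (not the paper's H-infinity/LMI synthesis):

```python
def ilc_trial(u, plant_gain=0.8, reference=1.0, learning_gain=0.5):
    """One ILC trial: run the (static, scalar) plant, then update the
    input from the tracking error: u_{k+1} = u_k + L * e_k."""
    y = plant_gain * u
    e = reference - y
    return u + learning_gain * e, abs(e)

u, errors = 0.0, []
for _ in range(30):
    u, e = ilc_trial(u)
    errors.append(e)
print(errors[-1] < 1e-4)  # error contracts by |1 - L*g| = 0.6 each trial
```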

13.
In this paper, a fuzzy Lyapunov approach is presented for stability analysis and state feedback H∞ controller design for T-S fuzzy systems. A new stability condition is obtained by relaxing the ones derived in previous papers. Then, a set of LMI-based sufficient conditions which can guarantee the existence of a state feedback H∞ controller for T-S fuzzy systems is proposed. In comparison with the existing literature, the proposed approach not only provides more relaxed stability conditions but also ensures better H∞ performance. The effectiveness of the proposed approach is shown through two numerical examples. Recommended by Editor Young-Hoon Joo. Xiao-Heng Chang received the B.E. and M.S. degrees from Liaoning Technical University, China, in 1998 and 2004, respectively, and the Ph.D. degree from Northeastern University, China, in 2007. He is currently a Lecturer in the School of Information Science and Engineering, Bohai University, China. His research interests include fuzzy control and robust control as well as their applications. Guang-Hong Yang received the B.S. and M.S. degrees from Northeast University of Technology, China, in 1983 and 1986, respectively, and the Ph.D. degree in Control Engineering from Northeastern University, China (formerly Northeast University of Technology), in 1994. He was a Lecturer/Associate Professor with Northeastern University from 1986 to 1995. He joined Nanyang Technological University in 1996 as a Postdoctoral Fellow. From 2001 to 2005, he was a Research Scientist/Senior Research Scientist with the National University of Singapore. He is currently a Professor at the College of Information Science and Engineering, Northeastern University. His current research interests include fault-tolerant control, fault detection and isolation, non-fragile control systems design, and robust control. Dr. Yang is an Associate Editor for the International Journal of Control, Automation, and Systems (IJCAS), and an Associate Editor of the Conference Editorial Board of the IEEE Control Systems Society.

14.
The slack variable approach is an important technique for tackling the delay-dependent stability problem for systems with time-varying delay. In this paper, a new delay-dependent stability criterion is presented without introducing any slack variable. The technique is based on a simple integral inequality. The result is shown to be equivalent to some existing ones but involves the least number of variables. Thus, redundant selection and computation can be avoided, and the computational burden can be largely reduced. Numerical examples are given to illustrate the effectiveness of the proposed stability conditions. Recommended by Editorial Board member Young Soo Suh under the direction of Editor Jae Weon Choi. The authors would like to thank the Associate Editor and the Reviewers for their very helpful comments and suggestions. This work was supported in part by the Funds for Creative Research Groups of China under Grant 60821063, by the State Key Program of National Natural Science of China under Grant 60534010, by the Funds of National Science of China under Grants 60674021, 60774013, and 60774047, by the National 973 Program of China under Grant No. 2009CB320604, by the Funds of the Ph.D. program of MOE, China under Grant 20060145019, and by the 111 Project B08015. Xun-Lin Zhu received the B.S. degree in Applied Mathematics from the Information Engineering Institute, Zhengzhou, China, in 1986, the M.S. degree in basic mathematics from Zhengzhou University, Zhengzhou, China, in 1989, and the Ph.D. degree in Control Theory and Engineering from Northeastern University, Shenyang, China, in 2008. Currently, he is an Associate Professor at Zhengzhou University of Light Industry, Zhengzhou, China. His research interests include neural networks and networked control systems. Guang-Hong Yang received the B.S. and M.S. degrees from Northeast University of Technology, China, in 1983 and 1986, respectively, and the Ph.D. degree in Control Engineering from Northeastern University, China (formerly Northeast University of Technology), in 1994. He was a Lecturer/Associate Professor with Northeastern University from 1986 to 1995. He joined Nanyang Technological University in 1996 as a Postdoctoral Fellow. From 2001 to 2005, he was a Research Scientist/Senior Research Scientist with the National University of Singapore. He is currently a Professor at the College of Information Science and Engineering, Northeastern University. His current research interests include fault-tolerant control, fault detection and isolation, non-fragile control systems design, and robust control. Dr. Yang is an Associate Editor for the International Journal of Control, Automation, and Systems (IJCAS), and an Associate Editor of the Conference Editorial Board of the IEEE Control Systems Society. Tao Li was born in 1979. He is now pursuing a Ph.D. degree at the Research Institute of Automation, Southeast University, China. His current research interests include time-delay systems, neural networks, robust control, and fault detection and diagnosis. Chong Lin received the B.Sc. and M.Sc. degrees in Applied Mathematics from Northeastern University, China, in 1989 and 1992, respectively, and the Ph.D. in Electrical and Electronic Engineering from Nanyang Technological University, Singapore, in 1999. He was a Research Associate with the University of Hong Kong in 1999. From 2000 to 2006, he was a Research Fellow with the National University of Singapore. He is currently a Professor with the Institute of Complexity Science, Qingdao University, China. His current research interests are mainly in the area of systems analysis and control. Lei Guo was born in 1966. He received the Ph.D. degree in Control Engineering from Southeast University (SEU), PR China, in 1997. From 1999 to 2004, he worked at Hong Kong University, IRCCyN (France), Glasgow University, Loughborough University, and UMIST, UK. He is now a Professor in the School of Instrument Science and Opto-Electronics Engineering, Beihang University. He also holds a Visiting Professor position at the University of Manchester, UK, and an invitation fellowship at Okayama University, Japan. His research interests include robust control, stochastic systems, fault detection, filter design, and nonlinear control with their applications.

15.
The leakage current of CMOS circuits increases dramatically as technology scales down and has become a critical issue for high performance systems. Subthreshold, gate, and reverse-biased junction band-to-band tunneling (BTBT) leakages are considered the three main determinants of total leakage current. Up to now, how to accurately estimate the leakage current of large-scale circuits within endurable time remains unsolved, even though accurate leakage models have been widely discussed. In this paper, the authors first investigate the stack effect of CMOS technology and propose a new simple gate-level leakage current model. Then, a table-lookup-based total leakage current simulator is built according to the model. To validate the simulator, accurate leakage currents are simulated at circuit level using the popular simulator HSPICE for comparison. Some further studies, such as maximum leakage current estimation, minimum leakage current generation, and a high-level average leakage current macromodel, are introduced in detail. Experiments on ISCAS85 and ISCAS89 benchmarks demonstrate that the two proposed leakage current estimation methods are very accurate and efficient.
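The table-lookup idea reduces total leakage estimation to summing pre-characterized per-gate values keyed by gate type and input state. A sketch with entirely hypothetical leakage numbers (real tables come from SPICE characterization, as in the paper's validation flow):

```python
# Hypothetical leakage table (nA) per gate type and input vector. It reflects
# the stack effect qualitatively: a NAND2 with both inputs low stacks two
# off transistors and leaks least. These numbers are illustrative only.
LEAKAGE_NA = {
    ("nand2", "00"): 5.0, ("nand2", "01"): 12.0,
    ("nand2", "10"): 10.0, ("nand2", "11"): 20.0,
    ("inv", "0"): 15.0, ("inv", "1"): 8.0,
}

def circuit_leakage(netlist):
    """Total leakage = sum of table lookups over all gates and input states."""
    return sum(LEAKAGE_NA[(gate, state)] for gate, state in netlist)

print(circuit_leakage([("nand2", "00"), ("inv", "1")]))  # 13.0 nA
```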

16.
It is very important to maintain the level of mean arterial pressure (MAP). MAP control is applied in many clinical situations, including limiting bleeding during cardiac surgery and promoting healing after surgery. This paper presents a fuzzy-controller-based multiple-model adaptive control system for postoperative blood pressure management. The multiple-model adaptive control (MMAC) algorithm is used to identify the patient model, and it is a feasible system identification method even in the presence of large noise. The fuzzy control (FC) method is used to design the controller bank. Each fuzzy controller in the bank is in fact a nonlinear proportional-integral (PI) controller, whose proportional gain and integral gain are adjusted continuously according to the error and the rate of change of error of the plant output, resulting in better dynamic and steady-state control performance than the regular PI controller, especially when a nonlinear process is involved. For demonstration, a nonlinear, pulsatile-flow patient model is used for simulation, and the results show that the adaptive control system can effectively handle changes in the patient's dynamics and provide satisfactory performance in regulating the blood pressure of hypertensive patients.
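The flavor of a PI controller whose gains depend on the error can be shown with a crude two-rule schedule standing in for the fuzzy rule base (thresholds and gains are invented; the paper's fuzzy inference and patient model are not reproduced):

```python
def adaptive_pi_step(error, integral, dt=1.0):
    """One PI step with gains scheduled on |error| -- a crude stand-in for a
    fuzzy rule base (large error -> aggressive proportional action)."""
    kp = 2.0 if abs(error) > 10 else 0.5
    ki = 0.05 if abs(error) > 10 else 0.1
    integral += error * dt
    return kp * error + ki * integral, integral

u, integ = adaptive_pi_step(20.0, 0.0)  # far from setpoint: kp = 2.0 rule fires
print(u)  # 41.0
```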

17.
Bounded Slice-line Grid (BSG) is an elegant representation of block placement, because it is very intuitive and has the advantage of handling various placement constraints. However, BSG has attracted little attention because its evaluation is very time-consuming. This paper proposes a simple algorithm, independent of the BSG size, to evaluate the BSG representation in O(n log log n) time, where n is the number of blocks. In the algorithm, the BSG rooms are first assigned integer coordinates, and then a linear sorting algorithm is applied to the BSG rooms to which blocks are assigned, computing two block sequences from which the block placement can be obtained in O(n log log n) time. As a consequence, the evaluation of the BSG is completed in O(n log log n) time. The proposed algorithm is much faster than the previous graph-based O(n^2) algorithm. The experimental results demonstrate the efficiency of the algorithm.

18.
This paper introduces a new algorithm for mining association rules. The algorithm RP counts the itemsets of different sizes in the same pass of scanning over the database, by dividing the database into m partitions. The total number of passes over the database is only (k + 2m - 2)/m, where k is the largest itemset size; this is much less than k.
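The "count itemsets of different sizes in one pass" idea can be illustrated by enumerating all subsets up to a size bound while each transaction is in memory. A toy sketch (RP's partitioning scheme and pass accounting are not reproduced; the database is made up):

```python
from itertools import combinations
from collections import Counter

def count_itemsets(transactions, max_size):
    """Count itemsets of every size up to max_size in one scan of the data."""
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))
        for k in range(1, max_size + 1):
            for iset in combinations(items, k):
                counts[iset] += 1
    return counts

db = [["a", "b", "c"], ["a", "b"], ["a", "c"]]
c = count_itemsets(db, 2)
print(c[("a", "b")])  # 2 -- support of {a, b}
```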

19.
A non-slicing approach, Corner Block List (CBL), has been presented recently. Since CBL can only represent floorplans without empty rooms, algorithms based on CBL cannot reach the optimum placement. In this paper, an extended corner block list, ECBLλ, is proposed. It can represent non-slicing floorplans including empty rooms. Based on the optimum solution theorem of BSG (bounded-sliceline grid), it is proved that the solution space of ECBLn, where n is the number of blocks, contains the optimum block placement with the minimum area. A placement algorithm based on ECBLλ, whose solution space can be controlled by setting λ, the extending ratio, is implemented. When λ is set to n, the algorithm based on ECBLn is an optimum placement search algorithm. Experiments show that λ has a reasonable constant range for the building block layout problem, so the algorithm can translate an ECBLλ representation to its corresponding placement in O(n) time. Experimental results on MCNC benchmarks show promising performance, with a 7% improvement in wire length and a 2% decrease in dead space over algorithms based on CBL. Meanwhile, compared with other algorithms, the proposed algorithm obtains better results with less runtime.

20.
As the amount of multimedia data increases day by day thanks to cheaper storage devices and an increasing number of information sources, machine learning algorithms are faced with large-sized datasets. When the original data is huge, small sample sizes are preferred for various applications; this is typically the case for multimedia applications. But using a simple random sample may not obtain satisfactory results, because such a sample may not adequately represent the entire data set due to random fluctuations in the sampling process. The difficulty is particularly apparent when small sample sizes are needed. Fortunately, the use of a good sampling set for training can improve the final results significantly. In KDD'03 we proposed EASE, which outputs a sample based on its 'closeness' to the original sample. Reported results show that EASE outperforms simple random sampling (SRS). In this paper we propose EASIER, which extends EASE in two ways. (1) EASE is a halving algorithm, i.e., to achieve the required sample ratio it starts from a suitable initial large sample and iteratively halves it. EASIER, on the other hand, does away with the repeated halving by directly obtaining the required sample ratio in one iteration. (2) EASE was shown to work on the IBM QUEST dataset, which is a categorical count data set. EASIER, in addition, is shown to work on continuous data of image and audio features. We have successfully applied EASIER to image classification and audio event identification applications. Experimental results show that EASIER outperforms SRS significantly. Surong Wang received the B.E. and M.E. degrees from the School of Information Engineering, University of Science and Technology Beijing, China, in 1999 and 2002, respectively. She is currently studying toward the Ph.D. degree at the School of Computer Engineering, Nanyang Technological University, Singapore. Her research interests include multimedia data processing, image processing, and content-based image retrieval. Manoranjan Dash obtained the Ph.D. and M.Sc. (Computer Science) degrees from the School of Computing, National University of Singapore. He has worked in academic and research institutes extensively and has published more than 30 research papers (mostly refereed) in various reputable machine learning and data mining journals, conference proceedings, and books. His research interests include machine learning and data mining, and their applications in bioinformatics, image processing, and GPU programming. Before joining the School of Computer Engineering (SCE), Nanyang Technological University, Singapore, as Assistant Professor, he worked as a postdoctoral fellow at Northwestern University. He is a member of IEEE and ACM. He has served as a program committee member of many conferences, and he is on the editorial board of the "International Journal of Theoretical and Applied Computer Science." Liang-Tien Chia received the B.S. and Ph.D. degrees from Loughborough University, in 1990 and 1994, respectively. He is an Associate Professor in the School of Computer Engineering, Nanyang Technological University, Singapore. He has recently been appointed Head, Division of Computer Communications, and he also holds the position of Director, Centre for Multimedia and Network Technology. His research interests include image/video processing and coding, multimodal data fusion, multimedia adaptation/transmission, and multimedia over the Semantic Web. He has published over 80 research papers.
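The intuition behind preferring a "close" sample over a blind random one can be made concrete with a representativeness score: compare a candidate sample's category frequencies against the population's. This is only an illustrative stand-in for EASE/EASIER's closeness criterion, with made-up data:

```python
from collections import Counter

def distribution_distance(sample, population):
    """L1 distance between category frequencies of a sample and the full data.
    A representative sample scores near 0; a maximally skewed one near 2."""
    ps, pp = Counter(sample), Counter(population)
    cats = set(pp) | set(ps)
    return sum(abs(ps[c] / len(sample) - pp[c] / len(population)) for c in cats)

population = ["a"] * 50 + ["b"] * 50
print(distribution_distance(["a", "a", "b", "b"], population))  # 0.0
print(distribution_distance(["a", "a", "a", "a"], population))  # 1.0
```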
