Search results: 9 items
1.
A considerable number of applications run over IP networks, which increases contention for network resources and ultimately results in congestion. Active queue management (AQM) aims to reduce the serious consequences of congestion in the router buffer and its negative effects on network performance. AQM methods apply different techniques based on congestion indicators such as queue length and average queue length, whereas network performance is evaluated using delay, loss, and throughput. This gap between congestion indicators and network performance measurements leads to a decline in network performance. In this study, delay and loss predictions are used as congestion indicators in a novel stochastic approach to AQM. The proposed method estimates the congestion in the router buffer and then uses these indicators to calculate the dropping probability that governs the router buffer. Experimental results from two sets of experiments show that the proposed method outperforms existing benchmark algorithms, including RED, ERED, and BLUE. For instance, in the first experiment the proposed method ranked third in terms of delay among the compared algorithms, while outperforming the benchmarks in terms of packet loss, packet dropping, and packet retransmission. Overall, the proposed method outperforms the benchmark algorithms because it keeps packet loss low while maintaining a reasonable queuing delay.
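The abstract above hinges on turning congestion indicators into a dropping probability. Below is a minimal Python sketch of that general idea, assuming a simple linear mapping from predicted delay and loss to a drop probability; the thresholds, weights, function names, and the form of the mapping are illustrative assumptions, not the paper's actual model.

```python
import random

# Sketch of an AQM-style drop decision driven by predicted delay and loss.
# Thresholds, weights, and the linear form are assumptions for illustration.
def drop_probability(pred_delay, pred_loss,
                     target_delay=0.010, target_loss=0.01,
                     w_delay=0.5, w_loss=0.5, p_max=0.2):
    """Map predicted delay (seconds) and loss rate to a dropping probability."""
    delay_excess = max(0.0, pred_delay - target_delay) / target_delay
    loss_excess = max(0.0, pred_loss - target_loss) / target_loss
    p = p_max * (w_delay * min(delay_excess, 1.0) + w_loss * min(loss_excess, 1.0))
    return min(p, 1.0)

def on_packet_arrival(pred_delay, pred_loss):
    """Drop the arriving packet probabilistically, as AQM schemes do."""
    return random.random() < drop_probability(pred_delay, pred_loss)

if __name__ == "__main__":
    print(drop_probability(0.025, 0.02))  # congested: non-zero drop probability
    print(drop_probability(0.005, 0.00))  # uncongested: 0.0
```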
2.
Effective task scheduling is essential for obtaining high performance in heterogeneous distributed computing systems (HeDCSs). However, finding an effective task schedule in HeDCSs requires consideration of both the heterogeneity of processors and the high interprocessor communication overhead that results from non-trivial data movement between tasks scheduled on different processors. In this paper, we present a new high-performance scheduling algorithm, called the longest dynamic critical path (LDCP) algorithm, for HeDCSs with a bounded number of processors. The LDCP algorithm is a list-based scheduling algorithm that uses a new attribute to efficiently select tasks for scheduling in HeDCSs. This efficient selection of tasks enables the LDCP algorithm to generate high-quality task schedules in a heterogeneous computing environment. The performance of the LDCP algorithm is compared with that of two of the best existing scheduling algorithms for HeDCSs: HEFT and DLS. The comparison shows that the LDCP algorithm outperforms HEFT and DLS in terms of schedule length and speedup. Moreover, the improvement in performance obtained by the LDCP algorithm over HEFT and DLS increases as the inter-task communication cost increases. The LDCP algorithm therefore provides a practical solution for scheduling parallel applications with high communication costs in HeDCSs.
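For context on how list-based scheduling of this kind works, here is a compact sketch in the HEFT style: rank tasks by a priority attribute, then greedily place each task on the processor that minimizes its earliest finish time. The toy DAG, the costs, and the use of upward rank as the priority are assumptions for illustration; the actual LDCP attribute is different and is not reproduced here.

```python
# Task graph: task -> successors, and per-processor execution costs (2 processors).
succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
cost = {"A": [4, 6], "B": [3, 5], "C": [5, 4], "D": [2, 3]}
comm = 2  # uniform communication cost between tasks on different processors

ranks = {}
def upward_rank(task):
    """Classic HEFT priority: average cost plus the longest path to an exit task."""
    if task not in ranks:
        avg = sum(cost[task]) / len(cost[task])
        ranks[task] = avg + max((comm + upward_rank(s) for s in succ[task]), default=0.0)
    return ranks[task]

order = sorted(cost, key=upward_rank, reverse=True)   # highest priority first
pred = {t: [u for u in succ if t in succ[u]] for t in cost}

proc_ready = [0.0, 0.0]   # when each processor becomes free
finish = {}               # task -> (processor, finish time)
for t in order:
    best = None
    for p in range(2):
        data_ready = max((finish[u][1] + (0 if finish[u][0] == p else comm)
                          for u in pred[t]), default=0.0)
        start = max(proc_ready[p], data_ready)
        est_finish = start + cost[t][p]
        if best is None or est_finish < best[1]:
            best = (p, est_finish)
    finish[t] = best
    proc_ready[best[0]] = best[1]

print(finish)  # task -> (assigned processor, finish time)
```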
3.
Cell image segmentation is a necessary first step in many automated biomedical image-processing procedures, and the area has seen much research. This work adds a new method that automatically extracts cells from microscopic imagery in two phases. Phase 1 uses iterated thresholding to identify and mark foreground objects, or 'blobs', with an overall accuracy above 97%. Phase 2 uses a novel genetic-algorithm-based ellipse detection algorithm to identify cells quickly and reliably. The mechanism as a whole has an accuracy rate above 96% and takes less than one minute (on our specific hardware configuration) to process a microscopic image.
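Phase 1's iterated thresholding can be illustrated with the classic isodata-style iteration, in which the threshold converges to the midpoint of the foreground and background means. This is a sketch under that assumption; the paper's exact procedure may differ.

```python
import numpy as np

def iterated_threshold(image, tol=0.5, max_iter=100):
    """Isodata-style iterated thresholding: repeat until the threshold stabilizes."""
    t = image.mean()
    for _ in range(max_iter):
        fg = image[image > t]
        bg = image[image <= t]
        if fg.size == 0 or bg.size == 0:
            break
        new_t = 0.5 * (fg.mean() + bg.mean())
        if abs(new_t - t) < tol:
            t = new_t
            break
        t = new_t
    return image > t, t   # boolean foreground mask and final threshold

if __name__ == "__main__":
    img = np.random.randint(0, 256, (64, 64)).astype(float)
    mask, t = iterated_threshold(img)
    print("threshold:", t, "foreground pixels:", int(mask.sum()))
```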
4.
We propose an approach to image segmentation that views it as pixel classification using simple features defined over the local neighborhood. We use a support vector machine for pixel classification, making the approach automatically adaptable to a large number of image segmentation applications. Since our approach uses only local information for classification, both training and application of the image segmentor can be done on a distributed computing platform, which makes the approach scalable to images larger than the ones tested. This article describes the methodology in detail and tests its efficacy against five other comparable segmentation methods on two well-known image segmentation databases. We present the results together with analysis that supports the following conclusions: (i) the approach is as effective as, and often better than, the studied competitors; (ii) it suffers from very little overfitting and hence generalizes well to unseen images; (iii) the trained image segmentation program can be run on a distributed computing environment, resulting in linear scalability. The overall message of this paper is that a strong classifier with simple pixel-centered features gives segmentation results as good as or better than some sophisticated competitors, and does so in a computationally scalable fashion.
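A minimal sketch of segmentation as per-pixel classification with an SVM over local-neighborhood features follows. The patch size, the RBF kernel, and the synthetic training data are illustrative assumptions, not the article's configuration.

```python
import numpy as np
from sklearn.svm import SVC

def patch_features(image, size=3):
    """Return one flattened size x size neighborhood per interior pixel."""
    r = size // 2
    h, w = image.shape
    feats, coords = [], []
    for y in range(r, h - r):
        for x in range(r, w - r):
            feats.append(image[y - r:y + r + 1, x - r:x + r + 1].ravel())
            coords.append((y, x))
    return np.array(feats), coords

rng = np.random.default_rng(0)
train_img = rng.random((32, 32))
train_mask = (train_img > 0.5).astype(int)           # toy ground-truth labels

X, coords = patch_features(train_img)
y = np.array([train_mask[c] for c in coords])

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)     # train the pixel classifier

test_img = rng.random((32, 32))
X_test, _ = patch_features(test_img)
pred = clf.predict(X_test)                           # one label per interior pixel
print("foreground pixels predicted:", int(pred.sum()))
```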
5.
Requirements engineering (RE) is among the most valuable and critical processes in software development, and the quality of this process significantly affects the success of a software project. An important step in RE is requirements elicitation, which involves collecting project-related requirements from different sources. Repositories of reusable requirements are typically important sources of an increasing number of reusable software requirements. However, searching such repositories to collect valuable project-related requirements is time-consuming and difficult to perform accurately. Recommender systems have been widely recognized as an effective solution to this problem. Accordingly, this study proposes an effective hybrid content-based collaborative filtering recommendation approach. The proposed approach supports project stakeholders in mitigating the risk of missing requirements during requirements elicitation by identifying related requirements in software requirement repositories. Experimental results on the RALIC dataset demonstrate that the proposed approach considerably outperforms baseline collaborative filtering-based recommendation methods in terms of prediction accuracy and coverage, in addition to mitigating the data sparsity and cold-start item problems.
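The hybrid idea can be sketched as blending item-item collaborative similarity, computed from a stakeholder-by-requirement rating matrix, with content similarity computed from requirement text. The toy data, the TF-IDF representation, the blend weight alpha, and the scoring rule below are assumptions for illustration, not the paper's actual model.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

ratings = np.array([     # rows: stakeholders, cols: requirements (0 = unrated)
    [5, 4, 0, 0],
    [4, 0, 3, 0],
    [0, 5, 4, 2],
])
texts = [
    "user login authentication security",
    "password reset via email",
    "export report as pdf",
    "generate monthly usage report",
]

collab_sim = cosine_similarity(ratings.T)                      # item-item, from ratings
content_sim = cosine_similarity(TfidfVectorizer().fit_transform(texts))

alpha = 0.6                                                    # weight on the collaborative part
hybrid_sim = alpha * collab_sim + (1 - alpha) * content_sim

def recommend(user_idx, top_k=2):
    """Score unrated requirements by similarity to the stakeholder's rated ones."""
    user = ratings[user_idx]
    scores = hybrid_sim @ user                 # aggregate similarity to rated items
    scores[user > 0] = -np.inf                 # exclude already-rated requirements
    return np.argsort(scores)[::-1][:top_k]

print(recommend(0))   # indices of the top requirements for stakeholder 0
```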
6.
A multi-population genetic algorithm for robust and fast ellipse detection (total citations: 2; self-citations: 0; citations by others: 2)
This paper discusses a novel and effective technique for extracting multiple ellipses from an image using a genetic algorithm with multiple populations (MPGA). MPGA evolves a number of subpopulations in parallel, each of which is clustered around an actual or perceived ellipse in the target image. The technique uses both evolution and clustering to direct the search for ellipses, whether full or partial. MPGA is explained in detail and compared with both the widely used randomized Hough transform (RHT) and the sharing genetic algorithm (SGA). In thorough and fair experimental tests using both synthetic and real-world images, MPGA exhibits solid advantages over RHT and SGA in terms of recognition accuracy, even in the presence of noise and/or multiple imperfect ellipses in an image, and speed of computation.
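A GA of this kind needs a fitness that rewards candidate ellipses supported by image edges. The sketch below scores a candidate (cx, cy, a, b, theta) by the fraction of sampled perimeter points that fall near an edge pixel; the sampling density, tolerance, and the fitness definition itself are assumptions, not MPGA's actual operators.

```python
import math

def ellipse_fitness(candidate, edge_points, samples=90, tol=2.0):
    """Fraction of the candidate ellipse's perimeter supported by edge points."""
    cx, cy, a, b, theta = candidate
    hit = 0
    for i in range(samples):
        t = 2 * math.pi * i / samples
        # parametric point on the rotated ellipse
        x = cx + a * math.cos(t) * math.cos(theta) - b * math.sin(t) * math.sin(theta)
        y = cy + a * math.cos(t) * math.sin(theta) + b * math.sin(t) * math.cos(theta)
        if any((x - ex) ** 2 + (y - ey) ** 2 <= tol ** 2 for ex, ey in edge_points):
            hit += 1
    return hit / samples

if __name__ == "__main__":
    # edge points lying on a circle of radius 10 around (50, 50)
    edges = [(50 + 10 * math.cos(t / 10), 50 + 10 * math.sin(t / 10)) for t in range(63)]
    print(ellipse_fitness((50, 50, 10, 10, 0.0), edges))   # close to 1.0
    print(ellipse_fitness((20, 20, 5, 3, 0.0), edges))     # close to 0.0
```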
7.
This paper describes the latest version of a bi-objective multipopulation genetic algorithm (BMPGA) that aims to locate all global and local optima on a real-valued differentiable multimodal landscape. The performance of BMPGA is compared against four multimodal GAs on five multimodal functions. BMPGA is distinguished by its use of two separate but complementary fitness objectives designed to enhance the diversity of the overall population and the exploration of the search space. This is coupled with a multipopulation and clustering scheme that focuses selection within the various sub-populations, resulting in effective identification and retention of the optima of the target functions as well as improved exploitation within promising areas. The results of the empirical comparison provide clear evidence that BMPGA is better than the other GAs in terms of overall effectiveness, applicability, and reliability. The practical value of BMPGA has already been demonstrated in applications to the detection of multiple ellipses and elliptic objects in microscopic imagery.
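The two ideas highlighted above, complementary fitness objectives and selection within clustered sub-populations, can be sketched as follows. The target function, the second "diversity" objective, and the selection rule are illustrative assumptions rather than BMPGA's actual design.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
pop = rng.uniform(-5, 5, size=(60, 1))                 # 60 one-dimensional genomes

f1 = np.sin(3 * pop[:, 0]) + np.cos(5 * pop[:, 0])     # primary fitness (multimodal)
centroid = pop.mean(axis=0)
f2 = np.linalg.norm(pop - centroid, axis=1)            # complementary objective: distance from the crowd

# Cluster the population into sub-populations and select within each cluster.
clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(pop)

survivors = []
for c in range(5):
    idx = np.where(clusters == c)[0]
    survivors.append(idx[np.argmax(f1[idx])])   # best on the primary objective
    survivors.append(idx[np.argmax(f2[idx])])   # best on the complementary objective

print("retained individuals per niche:", sorted(set(int(i) for i in survivors)))
```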
8.
9.
Efficient task scheduling on heterogeneous distributed computing systems (HeDCSs) requires consideration of the heterogeneity of processors and of inter-processor communication. This paper presents a two-phase algorithm, called H2GS, for task scheduling on HeDCSs. The first phase runs a heuristic list-based algorithm, called LDCP, to generate a high-quality schedule. In the second phase, the LDCP-generated schedule is injected into the initial population of a customized genetic algorithm, called GAS, which proceeds to evolve shorter schedules. GAS employs a simple genome composed of a two-dimensional chromosome, and a mapping procedure is developed that maps every possible genome to a valid schedule. Moreover, GAS uses operators customized to the scheduling problem to enable an efficient stochastic search. The performance of each phase of H2GS is compared to that of two leading scheduling algorithms, and H2GS outperforms both. The improvement in performance obtained by H2GS increases as the inter-task communication cost increases.
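The injection step in the second phase amounts to seeding a GA's initial population with the heuristic schedule. A minimal sketch, assuming a two-dimensional chromosome represented as one ordered task list per processor and a placeholder heuristic seed:

```python
import random

TASKS = ["A", "B", "C", "D", "E", "F"]
NUM_PROCS = 2
POP_SIZE = 10

def random_chromosome():
    """Randomly partition the tasks over processors, preserving a random order."""
    order = random.sample(TASKS, len(TASKS))
    chrom = [[] for _ in range(NUM_PROCS)]
    for t in order:
        chrom[random.randrange(NUM_PROCS)].append(t)
    return chrom

# Schedule produced by the first (heuristic) phase; here just a placeholder.
heuristic_seed = [["A", "C", "E"], ["B", "D", "F"]]

# Initial population: the heuristic schedule plus random chromosomes.
population = [heuristic_seed] + [random_chromosome() for _ in range(POP_SIZE - 1)]

print(len(population), population[0])
```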