This paper addresses a fundamental trade-off in dynamic scheduling between the cost of scheduling and the quality of the resulting schedules. The time allocated to scheduling must be controlled explicitly in order to obtain good-quality schedules in reasonable time. As task constraints are relaxed, the algorithms proposed in this paper increase scheduling complexity, optimizing for longer to obtain high-quality schedules. When task constraints are tightened, the algorithms reduce scheduling complexity to limit the adverse effect of long scheduling times on schedule quality. We show that taking scheduling time into account is crucial for honoring the deadlines of scheduled tasks. We investigate the performance of our algorithms in two scheduling models: one that allows idle-time intervals to exist in the schedule and one that does not. The model with idle-time intervals has important implications for dynamic scheduling, which are discussed in the paper. Experimental evaluation shows that our algorithms outperform other candidate algorithms in several parameter configurations.
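The core idea above, that scheduling effort must itself fit within the tasks' available slack, can be sketched as a minimal effort-budgeting rule. The half-slack budget and the uniform per-iteration cost model are illustrative assumptions, not the paper's algorithms.

```python
# Minimal sketch of deadline-aware scheduling effort: the optimizer is
# granted more iterations when task constraints are loose (large slack)
# and fewer when they are tight. The half-slack budget and uniform
# per-iteration cost are illustrative assumptions.

def scheduling_effort(slack, cost_per_iteration, min_iters=1):
    """Return how many optimization iterations fit in the slack budget."""
    budget = slack / 2.0                      # reserve half the slack for scheduling
    return max(min_iters, int(budget // cost_per_iteration))
```

Under this rule, tightening the deadline (shrinking the slack) directly cheapens the scheduling pass, which is the behavior the abstract describes.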
In this paper a new method for handling occlusion in face recognition is presented. The faces are partitioned into blocks and a sequential recognition structure is developed. A spatial attention control strategy over the blocks is then learned using reinforcement learning. The outcome of this learning is a list of blocks sorted by their average importance in the face recognition task. In the recall mode, the sorted blocks are employed sequentially until a confident decision is made. Results of various experiments on the AR face database demonstrate the superior performance of the proposed method, compared with the holistic approach, in the recognition of occluded faces.
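The recall stage described above can be sketched as follows: blocks are consulted in their learned importance order until the running decision is confident enough. The per-block score representation and the confidence threshold are illustrative assumptions, not the paper's exact classifier.

```python
# Sketch of sequential block-based recall: blocks are used in their learned
# importance order; recognition stops early once the average combined score
# of the best identity exceeds a confidence threshold (assumed here).

def sequential_recognition(block_scores, sorted_blocks, threshold=0.9):
    """block_scores: dict block_id -> dict identity -> score in [0, 1]."""
    combined = {}
    for block in sorted_blocks:              # learned order, most important first
        for identity, score in block_scores[block].items():
            combined[identity] = combined.get(identity, 0.0) + score
        used = sorted_blocks.index(block) + 1
        best = max(combined, key=combined.get)
        if combined[best] / used >= threshold:   # confident: stop early
            return best, used
    return max(combined, key=combined.get), len(sorted_blocks)
```

With an unoccluded, discriminative first block the decision is made immediately; occluded or ambiguous blocks force the method to consult further blocks down the sorted list.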
In this paper, a new algorithm for generating more-randomized keys for the symmetric one-time pad (OTP) cipher, based on the linear congruential generator (LCG) method and the idea of genetic algorithms, is proposed. The method, a genetic-based random key generator, produces keys for the OTP method with a high degree of randomness, which strengthens the OTP method against attempts to break the cryptosystem. The algorithm has two parts: first, an initial population is generated by the LCG method; then, genetic operators are applied to generate the subsequent populations. Generating random keys with the presented method requires a seven-parameter key, which increases the security of communication between the transceivers.
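The two-stage structure above can be sketched as follows. The LCG constants, the toy bit-balance fitness, and the GA settings are illustrative assumptions; they are not the paper's exact seven-parameter configuration, and a toy fitness like this is of course no substitute for a real randomness test.

```python
import random

# Illustrative sketch of the two-stage key generator: stage 1 seeds the
# population with an LCG; stage 2 evolves it with genetic operators
# (one-point crossover and byte mutation). All constants are assumptions.

def lcg_bytes(seed, n, a=1103515245, c=12345, m=2**31):
    out, x = [], seed
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x % 256)
    return out

def fitness(key):
    # Toy randomness proxy: bit balance (ones-density close to 0.5 is better).
    ones = sum(bin(b).count("1") for b in key)
    return -abs(ones / (8 * len(key)) - 0.5)

def evolve_key(seed, key_len=16, pop_size=20, generations=30):
    rng = random.Random(seed)
    pop = [lcg_bytes(seed + i, key_len) for i in range(pop_size)]   # stage 1
    for _ in range(generations):                                    # stage 2
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, key_len)
            child = p1[:cut] + p2[cut:]                  # one-point crossover
            if rng.random() < 0.1:                       # mutation
                child[rng.randrange(key_len)] = rng.randrange(256)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```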
In this paper, we propose a new online identification approach for evolving Takagi–Sugeno (TS) fuzzy models. For a TS model, a certain number of neighboring models are defined, and the TS model switches to one of them at each stage of evolving. We define the neighboring models of an in-progress (current) TS model as its fairly evolved versions, which differ from it in just two fuzzy rules. To generate neighboring models for the current model, we apply specially designed split and merge operations. A split operation replaces one fuzzy rule with two rules, while a merge operation combines two fuzzy rules into one. Among the neighboring models, the one with the minimum sum of squared errors over certain time intervals replaces the current model. To reduce the computational load of the proposed evolving TS model, straightforward relations between the outputs of neighboring models and that of the current model are established. Also, to reduce the number of rules, we define and use first-order TS fuzzy models whose generated local linear models can be localized in flexible fuzzy subspaces. To demonstrate the improved performance of the proposed identification approach, the efficiency of the evolving TS model is studied in the prediction of monthly sunspot numbers and the forecasting of daily electrical power consumption. The prediction and modeling results are compared with those of several important existing evolving fuzzy systems.
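The model-selection step above can be sketched generically: among the current model and its split/merge neighbors, keep the one with the minimum sum of squared errors over a recent window. The candidate models are abstracted here as plain predictor callables; the actual split/merge construction of TS rule bases is not reproduced.

```python
# Minimal sketch of the evolving-model selection step: candidates are the
# current model plus its neighboring models (here, plain callables standing
# in for TS models differing in two fuzzy rules); the winner is the one
# with minimum SSE over the given time window.

def sse(model, window):
    """Sum of squared errors of a predictor over (input, target) pairs."""
    return sum((y - model(x)) ** 2 for x, y in window)

def select_model(current, neighbors, window):
    candidates = [current] + list(neighbors)
    return min(candidates, key=lambda m: sse(m, window))
```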
Multimedia Tools and Applications - In this paper, a novel chaos-based dynamic encryption scheme with a permutation-substitution structure is presented. The S-boxes and P-boxes of the scheme are...
In the current paper, we propose a new online search, fault detection, and fault location approach for short faults in network-on-chip (NoC) communication channels. The proposed approach consists of a built-in self-test module and a packet/flit comparison module, embedded in the network adapter and a router, respectively. The approach has three main characteristics. First, the diagnosis and location processes are carried out simultaneously, which minimizes test time. Second, the approach updates the NoC routing tables in a parallel fashion at far lower cost. Third, only insignificant hardware is added to the system. In addition, the high scalability of the approach leads to 100% test coverage, 71.4% capability of detecting faulty channels, and 100% fault location in one round (two phases). The simulation results show that the approach's hardware is optimized compared with previous methodologies.
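The flit-comparison idea above can be sketched in software as follows. Channels are modeled as callables that may corrupt a flit, and a channel is flagged faulty when any received test flit disagrees with what was injected; the BIST wrapper and the actual channel hardware are abstractions here.

```python
# Toy sketch of test-flit comparison for fault detection: known test flits
# are injected on each channel and compared at the receiving side; any
# mismatch marks the channel faulty. Channels are modeled as functions.

def detect_faulty_channels(channels, test_flits):
    """channels: dict name -> callable flit -> flit (possibly corrupting)."""
    faulty = []
    for name, channel in channels.items():
        if any(channel(flit) != flit for flit in test_flits):
            faulty.append(name)
    return faulty
```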
Spectrum-based fault localization (SFL) techniques have shown considerable effectiveness in localizing software faults. They leverage a ranking metric to automatically assign suspiciousness scores to entities in a given faulty program. However, for some programs, the current SFL ranking metrics lose effectiveness. In this paper, we introduce ConsilientSFL, which synthesizes a new ranking metric for a given program from a customized combination of a set of given ranking metrics. ConsilientSFL is significant in that it demonstrates the use of voting systems in a software engineering task. First, several mutated, faulty versions are generated for a program. Then, the mutated versions are executed with the test data. Next, the effectiveness of each existing ranking metric is computed for each mutated version. After that, for each mutated version, the computed metrics are ranked using a preferential voting system. Several top metrics are then chosen based on their ranks across all mutated versions. Finally, the chosen ranking metrics are normalized and synthesized, yielding a new ranking metric. To evaluate ConsilientSFL, we conducted experiments on 27 subject programs from the Code4Bench and Siemens benchmarks. We found that ConsilientSFL outperformed every single ranking metric: on average across all programs, recall, precision, f-measure, and percentage of code inspection were nearly 7, 9, 12, and 5 percentage points larger than with single metrics, respectively. The impact of this work is twofold. First, it mitigates the issue of choosing a proper ranking metric for the faulty program at hand. Second, it helps debuggers find more faults with less time and effort, yielding higher-quality software.
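The metric-selection step above can be sketched with Borda count as one concrete preferential voting system; the abstract does not name the exact voting rule, so Borda is an assumption for illustration. Each mutated version contributes one ranking of the metrics, positions earn points, and the metrics with the highest totals across all versions are kept.

```python
# Sketch of choosing top ranking metrics via a preferential voting system
# (Borda count assumed). Each mutated version supplies one ordering of the
# metrics, best first; points are summed across versions.

def borda_select(rankings, top_k=2):
    """rankings: list of per-version metric orderings, best first."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for position, metric in enumerate(ranking):
            scores[metric] = scores.get(metric, 0) + (n - 1 - position)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```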
Clustering is a popular data analysis and data mining technique. A popular clustering technique is k-means, in which the data is partitioned into K clusters. However, the k-means algorithm depends highly on the initial state and converges to a local optimum. This paper presents a new hybrid evolutionary algorithm to solve the nonlinear partitional clustering problem. The proposed hybrid evolutionary algorithm, called FAPSO-ACO-K, combines FAPSO (fuzzy adaptive particle swarm optimization), ACO (ant colony optimization), and k-means, and can find better cluster partitions. The performance of the proposed algorithm is evaluated on several benchmark data sets. The simulation results show that the proposed algorithm outperforms algorithms such as PSO, ACO, simulated annealing (SA), the combinations PSO-SA, ACO-SA, and PSO-ACO, the genetic algorithm (GA), tabu search (TS), honey bee mating optimization (HBMO), and k-means on the partitional clustering problem.
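For reference, plain k-means, the component whose sensitivity to its initial state motivates the hybrid above, can be sketched as follows; only standard k-means is shown, not the FAPSO/ACO layers.

```python
import math
import random

# Plain Lloyd-style k-means. The random initialization on the first line of
# the loop setup is exactly the weak point the FAPSO-ACO-K hybrid targets:
# a bad initial sample can trap the algorithm in a local optimum.

def kmeans(points, k, iters=100, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)              # random init: the weak point
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                         # assign to nearest center
            i = min(range(k), key=lambda j: math.dist(p, centers[j]))
            clusters[i].append(p)
        new_centers = [
            tuple(sum(c) / len(c) for c in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)     # recompute centroids
        ]
        if new_centers == centers:               # converged (maybe local optimum)
            break
        centers = new_centers
    return centers
```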
Pose retrieval of a rigid object from monocular video sequences or images is addressed. Initially, the object pose is estimated in each image assuming flat depth maps. Shape-from-silhouette is then applied to build a 3-D model (volume), which is used for a new round of pose estimation, this time by a model-based method that gives better estimates. Before repeating this process by building a new volume, the pose estimates are adjusted to reduce error by maximizing a novel quality factor for shape-from-silhouette volume reconstruction. The feedback loop terminates when the pose estimates change little compared with those of the previous iteration. Based on a theoretical study of the proposed system, a test of convergence to a given set of poses is devised. Reliable performance of the system is also demonstrated by several experiments on both synthetic and real image sequences. No model of the object is assumed, and no feature points are detected or tracked, so there is no problematic feature matching or correspondence. Our method can be used for 3-D object tracking in video, 3-D modeling, and volume reconstruction from video.
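The feedback loop above can be sketched at a high level. `pose_estimate`, `build_volume`, and `refine_poses` are placeholders for the paper's flat-depth estimation, shape-from-silhouette reconstruction, and quality-factor adjustment; only the iterate-until-stable structure is shown, with poses simplified to scalars.

```python
# High-level sketch of the estimate/reconstruct/refine feedback loop:
# iterate until the pose estimates change by less than a tolerance
# compared with the previous iteration. The three callables are
# placeholders for the paper's actual components.

def pose_loop(images, pose_estimate, build_volume, refine_poses, tol=1e-3):
    poses = [pose_estimate(img, volume=None) for img in images]   # flat-depth init
    while True:
        volume = build_volume(images, poses)          # shape-from-silhouette
        new_poses = refine_poses(images, volume, poses)
        if max(abs(a - b) for a, b in zip(new_poses, poses)) < tol:
            return new_poses, volume                  # estimates have stabilized
        poses = new_poses
```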