Similar Documents
20 similar documents found (search time: 15 ms)
1.
Reconfigurability is essential for semiconductor manufacturing systems to remain competitive. Reconfigurable systems avoid the costly modifications otherwise required to adapt to changes in products, production, and services. A fully automated, collaborative, integrated, and reconfigurable manufacturing system is cost-effective in the long term and a promising strategy for the semiconductor manufacturing industry. However, computing models that facilitate the design and development of control and management systems in a truly reconfigurable manner are lacking. This paper presents a computing model for reconfigurable systems and controlled manufacturing processes that allows the integration of modern technologies facilitating reconfiguration, such as radio frequency identification (RFID) and reconfigurable field programmable gate arrays (FPGAs). Shop-floor manufacturing activities are modeled as processes from a business perspective. A process-driven formal method that builds on prior research on virtual production lines is proposed for the formation of a reconfigurable cross-facility manufacturing system. The trajectory of the controlled manufacturing systems is optimized for on-demand production services. Reconfigurable process controllers are introduced to support the essential system reconfigurability of future semiconductor manufacturing systems. An implementation of this approach is also presented.

2.
Wearable computers are embedded into the mobile environment of their users. A design challenge for wearable systems is to combine the high performance required for tasks such as video decoding with the low energy consumption required to maximise battery runtimes and with the flexibility demanded by the dynamics of the environment and the applications. In this paper, we demonstrate that reconfigurable hardware technology is able to answer this challenge. We present the concept and the prototype implementation of an autonomous wearable unit with reconfigurable modules (WURM). We discuss experiments that show two uses of reconfigurable hardware in WURM: ASICs-on-demand and adaptive interfaces. Finally, we present an experiment with an operating system layer for WURM.

3.
Portable libraries of highly optimized hardware cores can significantly reduce the development time of reconfigurable computing applications. This paper presents the tradeoffs and challenges in the design of such libraries. A set of library development guidelines is provided and validated with the RCLib case study. RCLib is a set of portable libraries with over 100 cores, targeting a wide range of applications. RCLib portability has been verified on three major high-performance reconfigurable computing architectures: the SRC6, Cray XD1, and SGI RC100. Compared to full-software implementations, applications using RCLib hardware acceleration cores show speedups ranging from one to four orders of magnitude.

4.
Cellular computing architectures represent an important class of computation characterized by simple processing elements, local interconnect, and massive parallelism. These architectures are a good match for many image and video processing applications and can be substantially accelerated with reconfigurable computers. We present a flexible software/hardware framework for the design, implementation, and automatic synthesis of cellular image processing algorithms. The system provides an extremely flexible set of parallel, pipelined, and time-multiplexed components which can be tailored through reconfigurable hardware for particular applications. The most novel aspects of our framework are a highly pipelined architecture for multi-scale cellular image processing and support for several different pattern recognition applications. In this paper, we describe the system in detail and present our performance assessments. The system achieved speed-ups of at least 100× for computationally expensive sub-problems and 10× for end-to-end applications compared with software implementations.
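As a point of reference, a minimal sketch of the kind of local operation such architectures execute: one synchronous cellular update over a 3×3 neighborhood. The averaging rule is an illustrative assumption, not the framework's actual component library.

```python
# Hedged sketch: one synchronous cellular update over a 3x3 neighborhood,
# the kind of local, massively parallel step the framework maps onto
# pipelined hardware. The mean-filter rule is a placeholder assumption.
import numpy as np

def cellular_step(grid):
    g = np.pad(grid.astype(float), 1, mode="edge")   # replicate borders
    h, w = grid.shape
    # each cell becomes the mean of its 3x3 neighborhood (nine shifted views)
    return sum(g[dy:dy + h, dx:dx + w]
               for dy in range(3) for dx in range(3)) / 9.0
```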

5.
Applying automation technology that integrates control, management, and maintenance, an integrated multi-agent-based model of reconfigurable manufacturing systems (RMS) is established. The model combines multi-agent-based RMS reconfiguration, control, and fault-diagnosis models, linking the control, management, and maintenance of the RMS. UML (Unified Modeling Language) activity diagrams of the model are given, and an example demonstrates the model's feasibility.

6.
This paper deals with the design of reconfigurable manufacturing systems (RMSs) based on product specifications and the capabilities of reconfigurable machines. A reconfigurable manufacturing environment includes machines, tools, the system layout, and so on. Moreover, a machine can be reconfigured to meet changing needs in capacity and functionality: the same machine can be modified to perform different tasks depending on the axes of motion offered in each configuration and the availability of tools. The problem is to select candidate reconfigurable machines from an available set, which are then used to manufacture a given product based on its characteristics. The selection considers two objectives: minimizing the total cost (production cost, reconfiguration cost, tool-changing cost, and tool-using cost) and minimizing the total completion time. An adapted version of the non-dominated sorting genetic algorithm (NSGA-II) is proposed to solve the problem. To demonstrate the effectiveness of the proposed approach on the RMS design problem, a numerical example is presented, the results are discussed, and future research is suggested.
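A minimal sketch of the bi-objective evaluation NSGA-II would operate on, assuming a sequential schedule; the `Operation` fields and the dominance test are illustrative assumptions, not the paper's formulation.

```python
# Hedged sketch: the two objectives and the Pareto-dominance test used by
# non-dominated sorting. All field names are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Operation:
    machine: str                 # selected reconfigurable machine
    production_cost: float
    reconfiguration_cost: float
    tool_change_cost: float
    tool_use_cost: float
    processing_time: float

def evaluate(assignment):
    """Return (total_cost, completion_time) for a candidate assignment.
    Completion time as a plain sum assumes purely sequential processing."""
    total_cost = sum(op.production_cost + op.reconfiguration_cost +
                     op.tool_change_cost + op.tool_use_cost
                     for op in assignment)
    completion_time = sum(op.processing_time for op in assignment)
    return total_cost, completion_time

def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (both objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
```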

7.
Abstract. This paper describes the design of a reconfigurable architecture for implementing image processing algorithms. The architecture is a pipeline of small identical processing elements, each containing a programmable logic device (FPGA) and dual-port memories. The system has been adapted to accelerate the computation of differential algorithms. Log-polar vision selectively reduces the amount of data to be processed and simplifies several vision algorithms, making their implementation possible with few hardware resources. The design has been carried through to implementation and employed on an autonomous platform with power-consumption, size, and weight restrictions. Two different vision algorithms have been implemented in the reconfigurable pipeline, and experimental results are shown for both. Received: 30 March 2001 / Accepted: 11 February 2002. This work has been supported by the Ministerio de Ciencia y Tecnología and FEDER under project TIC2001-3546. Correspondence to: J.A. Boluda
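For orientation, a minimal sketch of the log-polar resampling that gives this approach its data reduction: uniform steps in angle and in the logarithm of radius, so the periphery is sampled coarsely and the center finely. Grid sizes and nearest-neighbor sampling are assumptions.

```python
# Hedged sketch: log-polar resampling of a grayscale image. The output
# resolution (n_rho x n_theta) is typically far smaller than the input,
# which is the data reduction the abstract refers to.
import numpy as np

def log_polar(img, n_rho=64, n_theta=64, r_min=1.0):
    h, w = img.shape
    cy, cx = h / 2.0, w / 2.0
    r_max = min(cy, cx)
    rho = np.exp(np.linspace(np.log(r_min), np.log(r_max), n_rho))
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    ys = (cy + rho[:, None] * np.sin(theta)).astype(int).clip(0, h - 1)
    xs = (cx + rho[:, None] * np.cos(theta)).astype(int).clip(0, w - 1)
    return img[ys, xs]    # (n_rho, n_theta) log-polar image
```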

8.
Short-term load forecasting (STLF) is one of the planning strategies adopted in daily power system operation and control. Although many forecasting models have been developed over the years, uncertainties in the load profile significantly degrade their performance. These uncertainties stem mainly from the sensitivity of load demand to varying weather conditions and to consumption patterns across the months and days of the year. The effect of these weather variables on the load consumption pattern is therefore discussed. Based on the literature survey, artificial neural network (ANN) models are found to be an alternative to classical statistical methods in terms of forecasting accuracy. However, handling bulk volumes of historical data while maintaining forecasting accuracy is still a major challenge. Third-generation neural networks such as spike-train models, which are closer to their biological counterparts, have recently emerged as robust models. This paper therefore presents a load forecasting system called SNNSTLF (spiking neural network short-term load forecaster). The proposed model has been tested on the database obtained from the Australian Energy Market Operator (AEMO) website for the state of Victoria.
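Spiking forecasters need the analog load curve converted into spikes before the network sees it. A minimal sketch of one common front end, Poisson rate coding, is below; the encoding choice and parameters are assumptions, not the published SNNSTLF design.

```python
# Hedged sketch: rate-coding a normalized load series into spike trains.
# Higher load => higher firing probability per time step.
import numpy as np

def poisson_spike_train(load, t_window=100, max_rate=0.5, seed=0):
    """load: 1-D sequence scaled to [0, 1]. Returns a (len(load), t_window)
    binary raster whose per-row firing rate tracks the load level."""
    rng = np.random.default_rng(seed)
    load = np.asarray(load, dtype=float)
    rates = np.clip(load, 0.0, 1.0) * max_rate        # spike prob. per step
    return (rng.random((load.size, t_window)) < rates[:, None]).astype(np.uint8)
```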

9.
10.
This paper focuses on the design process for reconfigurable architectures. Our contribution is a new temporal partitioning algorithm based on a mathematical formulation of the temporal partitioning problem. The algorithm optimizes the data transfers required between design partitions as well as the reconfiguration overhead. Results show that our algorithm considerably decreases communication cost and latency compared with other well-known algorithms.
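To make the optimization target concrete, a minimal sketch of the cost a temporal partitioner typically minimizes: data crossing partition boundaries plus a fixed reconfiguration overhead per partition switch. The graph encoding and the linear cost model are assumptions, not the paper's exact objective.

```python
# Hedged sketch: cost of a temporal partitioning of a dataflow graph.
def partition_cost(edges, part, reconfig_overhead):
    """edges: iterable of (u, v, n_bytes) dataflow edges;
    part: dict mapping node -> partition id (0, 1, ...);
    reconfig_overhead: cost of loading one partition onto the device."""
    transfer = sum(w for u, v, w in edges if part[u] != part[v])
    n_partitions = len(set(part.values()))
    return transfer + reconfig_overhead * (n_partitions - 1)
```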

11.
Imitation is a powerful tool for gestural interaction between children and for parents teaching behaviors to children. Furthermore, another's action can serve as a hint for acquiring a new behavior that need not be identical to the original action. The key issue is how to map or represent another's action as a new one in the internal state space. A good instructor teaches an action by understanding the learner's mapping or imitation method. This suggests that a robot can likewise acquire various behaviors through interactive learning based on imitation. This paper proposes structured learning for a partner robot based on an interactive teaching mechanism. The proposed method is composed of a spiking neural network, a self-organizing map, a steady-state genetic algorithm, and softmax action selection. Furthermore, we discuss the interactive learning of a human and a partner robot based on the proposed method through experimental results.
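Of the four components listed, softmax action selection is the most self-contained; a minimal sketch follows. The temperature value is an assumption.

```python
# Hedged sketch: softmax (Boltzmann) action selection. Actions with higher
# estimated value are chosen more often, but exploration never vanishes.
import numpy as np

def softmax_select(q_values, temperature=0.5, rng=None):
    """Sample an action index with probability proportional to exp(Q/T)."""
    if rng is None:
        rng = np.random.default_rng()
    z = np.asarray(q_values, dtype=float) / temperature
    z -= z.max()                          # subtract max for numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return int(rng.choice(len(p), p=p))
```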

12.
To resolve the conflict between the enormous computing resources that deep neural networks consume in pursuit of accuracy and the constrained environments of edge-computing platforms, this work explores building a neural-network tensor processing unit (TPU) from FPGA logic resources. Paired with an ARM CPU, it forms a new edge-computing architecture that not only accelerates deep neural network models and improves accuracy but also markedly reduces power consumption. Under this architecture, the compressed MobileNet-V1 …

13.
A novel neural network model is described that implements context-dependent learning of complex sequences. The model uses leaky integrate-and-fire neurons to extract timing information from its input and modifies its weights using a learning rule with synaptic noise. Learning and recall phases are seamlessly integrated, so the network can gradually shift from learning to predicting its input. Experimental results using data from a real-world problem domain demonstrate that the use of context has three important benefits: (a) it prevents catastrophic interference during the learning of multiple overlapping sequences, (b) it enables the completion of sequences from missing or noisy patterns, and (c) it provides a mechanism to selectively explore the space of learned sequences during free recall.
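A minimal sketch of the neuron type named in the abstract, a discrete-time leaky integrate-and-fire unit; the parameter values are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch: one Euler step of a leaky integrate-and-fire neuron.
# The membrane potential leaks toward rest, integrates input current,
# and emits a spike (then resets) when it crosses threshold.
def lif_step(v, i_in, dt=1.0, tau=20.0, v_rest=0.0, v_th=1.0, v_reset=0.0):
    """Advance the membrane potential one step; return (new_v, spiked)."""
    v = v + dt / tau * (-(v - v_rest) + i_in)   # leaky integration
    if v >= v_th:                               # threshold crossing
        return v_reset, True
    return v, False
```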

14.
A separation method for DNA computing based on concentration control is presented. The concentration control method, developed earlier, has enabled us to use DNA concentrations as input data and as filters to extract target DNA. We have also applied the method to shortest path problems and have shown the potential of concentration control for solving large-scale combinatorial optimization problems. However, it is still quite difficult to separate different DNA strands of the same length and to quantify individual DNA concentrations. To overcome these difficulties, we use DGGE and CDGE in this paper. We demonstrate that the proposed method separates different DNA strands of the same length efficiently, and we solve an actual instance of the shortest path problem.
Masahito Yamamoto, Ph.D.: Associate professor of information engineering at Hokkaido University. He received his Ph.D. from the Graduate School of Engineering, Hokkaido University, in 1996. His current research interests include DNA computing based on laboratory experiments. He is a member of the Operations Research Society of Japan, the Japanese Society for Artificial Intelligence, the Information Processing Society of Japan, etc.
Atsushi Kameda, Ph.D.: Research staff member of the Japan Science and Technology Corporation; he has participated in DNA computing research at Hokkaido University. He received his Ph.D. from Hokkaido University in 2001, majoring in molecular biology. His research theme is the role of polyphosphate in the living body; in related work, he constructed an ATP regeneration system using two enzymes that use polyphosphate as a phosphagen.
Nobuo Matsuura: Master's student in the Division of Systems and Information Engineering, Hokkaido University. His research interests concern DNA computing with concentration control for shortest path problems, as a means of solving optimization problems with biomolecules.
Toshikazu Shiba, Ph.D.: Associate professor of biochemical engineering at Hokkaido University. He received his Ph.D. from Osaka University in 1991, majoring in molecular genetics and biochemistry. His research has progressed from bacterial molecular biology (regulation of gene expression in bacterial cells) to tissue engineering (bone regeneration). Recently, he has become very interested in molecular computation and in applying his biochemical ideas to information technology.
Yumi Kawazoe: Master's student in the Division of Molecular Chemistry, Hokkaido University. Although her major is molecular biology, she is very interested in molecular computation and bioinformatics.
Azuma Ohuchi, Ph.D.: Professor of information engineering at Hokkaido University, Sapporo, Japan. He received his Ph.D. from Hokkaido University in 1976. Since 1995 he has been developing a new field of complex systems engineering, Harmonious Systems Engineering. He has published numerous papers on systems engineering, operations research, and computer science, and currently supervises projects on DNA computing, multi-agent artificial market systems, medical informatics, and autonomous flying objects. He was awarded "The 30th Anniversary Award for Excellent Papers" by the Information Processing Society of Japan. He is a member of the Operations Research Society of Japan, the Japanese Society for Artificial Intelligence, the Information Processing Society of Japan, the Japan Association for Medical Informatics, the IEEE Computer Society, the IEEE Systems, Man, and Cybernetics Society, etc.

15.
On one hand, multiple-object detection approaches of the Hough transform (HT) type and randomized HT (RHT) type have been extended into a general evidence-accumulation framework for problem solving, with five key mechanisms elaborated and several extensions of HT and RHT presented. On the other hand, another framework is proposed to integrate typical multi-learner approaches to problem solving, particularly Gaussian-mixture-based data clustering and local subspace learning, multi-set mixture based object detection and motion estimation, and multi-agent coordinated problem solving. Typical learning algorithms, especially those based on rival penalized competitive learning (RPCL) and Bayesian Ying-Yang (BYY) learning, are summarized from a unified perspective with new extensions. Furthermore, the two frameworks are not only examined each from the perspective of the other, yielding new insights and extensions, but are also unified into a general problem-solving paradigm consisting of five basic mechanisms: acquisition, allocation, amalgamation, admission, and affirmation, or in short the A5 paradigm.
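A minimal sketch of the evidence-accumulation step that distinguishes RHT from classical HT: sample point pairs, map each pair to one parameter hypothesis, and vote. Line detection, the binning resolution, and the skipping of vertical lines are simplifying assumptions.

```python
# Hedged sketch: randomized Hough transform voting for lines y = m*x + b.
# Each random point pair contributes one vote in parameter space.
import numpy as np
from collections import Counter

def rht_lines(points, n_samples=1000, decimals=1, seed=0):
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    votes = Counter()
    for _ in range(n_samples):
        (x1, y1), (x2, y2) = pts[rng.choice(len(pts), 2, replace=False)]
        if x1 == x2:
            continue                      # skip vertical lines for brevity
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        votes[(round(m, decimals), round(b, decimals))] += 1
    return votes.most_common(5)           # strongest line hypotheses
```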

16.
Blasting is an essential task in open-pit mines for rock fragmentation. However, its dangerous side effects need to be accurately estimated and controlled, especially ground vibration as measured by peak particle velocity (PPV). The accuracy of estimating blast-induced PPV can be improved by hybrid artificial intelligence approaches. In this study, a new hybrid model, code-named HKM-CA, was developed based on hierarchical k-means clustering (HKM) and the Cubist algorithm (CA). The HKM clustering technique separates the data according to their characteristics; Cubist models are then trained on the clusters generated by HKM. An empirical technique, benchmark algorithms [random forest (RF), support vector machine (SVM), and classification and regression tree (CART)], and a single CA model were also established as baselines for the HKM-CA model. Root-mean-square error (RMSE), the coefficient of determination (R2), and mean absolute error (MAE) were the key indicators used to evaluate model performance. The results revealed that the proposed HKM-CA model is a powerful tool for improving the accuracy of the CA model: it yielded a superior result, with an RMSE of 0.475, an R2 of 0.995, and an MAE of 0.373, in comparison with the other models. The proposed HKM-CA model has the potential to be used for predicting blast-induced PPV on site to control undesirable effects on the surrounding environment.
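A minimal sketch of the cluster-then-regress idea, assuming plain k-means in place of the hierarchical variant and a gradient-boosting regressor standing in for Cubist (which scikit-learn does not provide); feature contents are assumptions.

```python
# Hedged sketch: cluster records, fit one regressor per cluster, and route
# each test sample to its cluster's model (the HKM-CA pattern, simplified).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import GradientBoostingRegressor

def fit_hkm_ca(X, y, n_clusters=3, seed=0):
    """X: (n, d) float array of blasting features; y: (n,) measured PPV."""
    km = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit(X)
    models = {c: GradientBoostingRegressor(random_state=seed)
                 .fit(X[km.labels_ == c], y[km.labels_ == c])
              for c in range(n_clusters)}
    return km, models

def predict_hkm_ca(km, models, X):
    labels = km.predict(X)
    return np.array([models[c].predict(x.reshape(1, -1))[0]
                     for c, x in zip(labels, X)])
```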

17.
In this paper, we introduce a block AA^T-Lanczos bi-orthogonalization process. Based on this new process, the block bi-conjugate residual (Bl-BCR) method is derived, which generalizes the bi-conjugate residual method. To accelerate convergence, we derive a stabilized and more smoothly converging variant of Bl-BCR using formal matrix-valued orthogonal polynomials. Finally, numerical experiments illustrate the effectiveness of these block methods.
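For context, a minimal sketch of the standard block Krylov subspace such block methods iterate over; this is textbook background under the usual assumptions (A an n×n real matrix, R_0 an n×s block of initial residuals), not the paper's specific derivation.

```latex
% Block Krylov subspace generated by A from the initial block residual R_0:
\mathcal{K}_m(A, R_0) \;=\; \operatorname{span}\left\{ R_0,\; A R_0,\; A^{2} R_0,\; \ldots,\; A^{m-1} R_0 \right\}
```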

18.
As demands for faster data processing and enterprise computing increase, the traditional client/server architecture has gradually been replaced by Grid computing and the peer-to-peer (P2P) model, which can share content and resources over the network. In this paper, a new computing architecture, computing power services (CPS), is applied, using web services and the business process execution language to overcome issues of flexibility, compatibility, and workflow management. CPS is a lightweight, web-services-based computing power-sharing architecture suitable for enterprise computing tasks that can be executed as batch processes within a trusted network. However, a distributed-computing architecture like CPS needs a real-time load-balancing and dispatching mechanism in order to handle computing resources efficiently and properly. Therefore, a fuzzy group-decision-making based adaptive collaboration design for CPS is proposed in this paper to provide real-time computation coordination and quality of service. In this study, the approach has been applied to analyze the robustness of digital watermarks via filter bank selection, improving speedup, stability, and processing time. The scheme increases overall computing performance and remains stable in dynamic environments.
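A minimal sketch of the dispatching decision in its simplest form: score candidate compute nodes by a weighted aggregate of normalized criteria and pick the best. The criteria, weights, and crisp (non-fuzzy) aggregation are illustrative assumptions; the paper's fuzzy group decision model is more elaborate.

```python
# Hedged sketch: weighted-score node selection for dispatching a CPS task.
def best_node(nodes, weights):
    """nodes: {name: {criterion: value in [0, 1], higher is better}};
    weights: {criterion: weight}. Returns the highest-scoring node name."""
    def score(crit):
        return sum(w * crit[k] for k, w in weights.items())
    return max(nodes, key=lambda name: score(nodes[name]))

# Example with assumed criteria: idle capacity, bandwidth, reliability.
nodes = {"n1": {"idle": 0.8, "bw": 0.4, "rel": 0.9},
         "n2": {"idle": 0.5, "bw": 0.9, "rel": 0.7}}
print(best_node(nodes, {"idle": 0.5, "bw": 0.3, "rel": 0.2}))
```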

19.
This paper proposes a high-capacity data hiding scheme for binary images based on block patterns, which can facilitate the authentication and annotation of scanned images. The scheme defines block patterns for 2 × 2 blocks that enforce a specific block-based relationship, embedding a significant amount of data without causing noticeable artifacts. In addition, two matching-pair (MP) methods, internal-adjustment MP and external-adjustment MP, are designed to reduce the changes caused by embedding. Shuffling is applied before embedding to reduce distortion and improve security. Experimental results show that the proposed scheme achieves significantly higher embedding capacity than previous approaches at the same level of embedding distortion. We also analyze the perceptual impact and discuss robustness and security issues.
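To illustrate the block-based idea in its simplest form, a minimal sketch that embeds one bit per 2×2 block by enforcing the block's parity. This parity rule and the fixed flip position are simplifying assumptions standing in for the paper's block patterns and MP adjustments.

```python
# Hedged sketch: one bit per 2x2 binary block via parity of the pixel sum.
import numpy as np

def embed_bit(block, bit):
    """block: 2x2 int array of 0/1 pixels. Flip at most one pixel so the
    block parity (sum mod 2) encodes the bit."""
    block = block.copy()
    if int(block.sum()) % 2 != bit:
        block[0, 0] ^= 1   # a real scheme would pick the least-visible pixel
    return block

def extract_bit(block):
    """Recover the embedded bit from the block parity."""
    return int(block.sum()) % 2
```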

20.

Background

One of the emerging techniques for analyzing DNA microarray data, known as biclustering, is the search for subsets of genes and conditions that are coherently expressed. These subgroups provide clues about the main biological processes. Different approaches to this problem have been proposed. Most of them use the mean squared residue as the quality measure, but relevant and interesting patterns such as shifting or scaling patterns cannot be detected with it. Furthermore, recent papers show that there exist new coherence patterns involved in different kinds of cancers and tumors, such as inverse relationships between genes, which cannot be captured.

Results

The proposed measure, called Spearman's biclustering measure (SBM), estimates the quality of a bicluster based on the non-linear correlation among genes and conditions simultaneously. The search for biclusters is performed using an evolutionary technique called estimation of distribution algorithms, which uses the SBM measure as its fitness function. This approach has been examined from different points of view using artificial and real microarrays. The assessment process involved quality indexes, a set of reference bicluster patterns including new patterns, and a set of statistical tests. The performance has also been examined on real microarrays and compared with different algorithmic approaches such as Bimax, CC, OPSM, Plaid, and xMotifs.
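A minimal sketch of a Spearman-based bicluster score in its simplest gene-wise form: average the absolute rank correlation over gene pairs, so shifting, scaling, and inverted patterns all score highly. This is a simplification; the published SBM also covers conditions and statistical significance.

```python
# Hedged sketch: mean absolute Spearman correlation over gene pairs in a
# bicluster submatrix (rows = genes, columns = conditions). Constant rows
# would yield NaN correlations and are assumed to be filtered out upstream.
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr

def spearman_score(bicluster):
    """bicluster: (genes x conditions) expression submatrix."""
    rhos = [abs(spearmanr(bicluster[i], bicluster[j])[0])
            for i, j in combinations(range(bicluster.shape[0]), 2)]
    return float(np.mean(rhos)) if rhos else 0.0
```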

Conclusions

SBM shows several advantages, such as the ability to recognize more complex coherence patterns such as shifting, scaling, and inversion, and the capability to selectively marginalize genes and conditions depending on their statistical significance.

