Similar Documents
20 similar documents found.
1.
Polymer-based microfabrication technologies have been used extensively in Bio-MEMS, and especially in microfluidic devices, in recent years. In this paper, a novel method for fabricating microstructures on a polymeric material using a hot embossing lithography process is presented. The proposed method uses low-cost materials and procedures compared with previous methods and can be completed in a short time. The master is made from SU-8 on an inexpensive glass substrate patterned by standard lithography. Because the glass substrate used here is more robust than silicon, the embossing pressure can be increased. Master robustness and SU-8-to-glass adhesion are optimized through substrate pretreatments and by tuning the SU-8 baking times and temperatures. Microchannels are replicated on a polymethylmethacrylate (PMMA) stamp, a plexiglass sheet 1 mm thick. The significant embossing parameters, including temperature, pressure, and time, are discussed and their optimum values determined. Microchannels are imprinted with a depth of 50 μm, a minimum width of 15 μm, and an aspect ratio greater than 3. The microchannels are sealed with a PMMA cap using thermal annealing bonding.

2.
Soft UV-NIL was used as a replication technique to replicate sub-100 nm structures. The aim of this work is stamp production and the replication of structures with dimensions smaller than 100 nm in a simple manner. Composite stamps composed of two layers, a thin hard PDMS layer supported by a thick soft PDMS (s-PDMS) layer, are compared with common s-PDMS stamps with regard to resolution, using a Siemens star (starburst pattern) as the test structure. The master is fabricated by electron beam lithography in a 140 nm thick PMMA resist layer. The stamp is molded directly from the structured resist without any additional anti-sticking treatment. The resist thickness therefore determines the aspect ratio, which is 1.5 at the resolution limit. The replication is done in a UV-curing cycloaliphatic epoxy material. The employed test structure provides good comparability, shows the resolution limit at a glance, and integrates a smooth transition from micro- to nanostructures, making it a capable structure for characterizing UV-NIL.

3.
A second-generation proton beam writing (PBW) system has been built at the Centre for Ion Beam Applications at the National University of Singapore for the fabrication of high-aspect-ratio 3D nanolithographic structures. System improvements and several lithographic structures obtained with this facility are presented in this paper. Through accurate alignment of the magnetic quadrupole lenses and the electrostatic scanning system, orthogonal beam scanning has been achieved. The earlier constraint of a limited beam scan area has been overcome by adopting a combination of beam and stage scanning as well as stitching. With these improvements, the smallest Ni structure to date, 65 nm in width, has been fabricated by nickel electroplating on a proton-beam-written PMMA sample in the second-generation PBW facility. Using this improved facility, we have also demonstrated the fabrication of fine lithographic patterns with 19 nm line width and 60 nm spacing in a 100 nm thick negative high-resolution hydrogen silsesquioxane resist. Possible future system improvements leading to finer resolution are discussed briefly.

4.
This paper presents benchmark timings from an optimising Prolog compiler that uses global analysis, targeting a RISC workstation, the MIPS R2030. The results are extremely promising; for example, the infamous naive reverse benchmark runs at 2 mega LIPS. We compare these timings with those for other Prolog implementations running on the same workstation and with published timings for the KCM, a recent piece of special-purpose Prolog hardware. The comparison suggests that global analysis is a fruitful source of information for an optimising Prolog compiler and that the performance of special-purpose Prolog hardware can be at least matched by the code from a compiler using such information. We include some analysis of the sources of the improvement that global analysis yields. An overview of the compiler is given and some implementation issues are discussed. This paper is an extended version of Ref. 15).

5.
The Andorra model is a parallel execution model for logic programs which exploits the dependent and-parallelism and or-parallelism inherent in logic programming. We present a flat subset of a language based on the Andorra model, henceforth called Andorra Prolog, that is intended to subsume both Prolog and the committed-choice languages. Flat Andorra, in addition to don't-know and don't-care nondeterminism, supports control of or-parallel split, synchronisation on variables, and selection of clauses. We show the operational semantics of the language and its applicability in the domain of committed-choice languages. As an example of the expressiveness of the language, we describe a method for communication between objects by time-stamped messages, which is suitable for expressing distributed discrete event simulation applications. This method depends critically on the ability to express don't-know nondeterminism and thus cannot easily be expressed in a committed-choice language.

6.
7.
8.
Inductive logic programming
A new research area, Inductive Logic Programming, is presently emerging. While inheriting various positive characteristics of its parent subjects, Logic Programming and Machine Learning, it is hoped that the new area will overcome many of the limitations of its forebears. The background to present developments within this area is discussed and various goals and aspirations for the growing body of researchers are identified. Inductive Logic Programming needs to be based on sound principles from both Logic and Statistics. On the side of statistical justification of hypotheses, we discuss the possible relationship between Algorithmic Complexity theory and Probably-Approximately-Correct (PAC) Learning. In terms of logic, we provide a unifying framework for Muggleton and Buntine's Inverse Resolution (IR) and Plotkin's Relative Least General Generalisation (RLGG) by rederiving RLGG in terms of IR. This leads to a discussion of the feasibility of extending the RLGG framework to allow for the invention of new predicates, previously discussed only within the context of IR.
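To make the generalisation operation concrete, here is a small worked example of Plotkin's least general generalisation (lgg), the building block of RLGG; the atoms are illustrative and not taken from the abstract above.

```latex
% Illustrative example: the lgg of two atoms replaces each pair of
% disagreeing subterms with a variable, using the same variable for
% the same pair of terms wherever it recurs.
\[
\operatorname{lgg}\bigl(p(a,\, f(a)),\; p(b,\, f(b))\bigr) \;=\; p(X,\, f(X))
\]
```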

9.
This paper proposes a distributed address configuration scheme for a Mobile Ad hoc Network (MANET). An architecture for a MANET, an algorithm for constructing a MANET, and an algorithm for calculating a unique address space for assignment are proposed. In this architecture, a common node holds a unique address space for assignment, and an isolated node can acquire a unique address from a neighboring common node without performing duplicate address detection. In this way, the address configuration task is distributed among common nodes. In this scheme, the control packets used for address configuration are exchanged within a one-hop scope, so scalability is enhanced. This paper also presents an address recovery algorithm that can effectively retrieve the address resources released by failed nodes, and a MANET merging/partitioning algorithm that can ensure a node's address uniqueness. This paper compares the performance parameters of the proposed scheme with those of existing schemes, including strong duplicate address detection and the prime dynamic host configuration protocol; the comparative results show that the address configuration cost of the proposed scheme is lower and its delay is shorter.
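As a rough illustration of how a neighbor can hand out a conflict-free address without duplicate address detection, here is a minimal sketch assuming a binary-split allocation policy; the class and field names are invented for the example and are not from the paper.

```python
# Illustrative sketch (not the paper's exact algorithm): a common node
# owns a contiguous block of free addresses and hands half of it to a
# newly joining (isolated) neighbour, so no duplicate address detection
# is needed -- donated blocks never overlap.

class CommonNode:
    def __init__(self, address, free_block):
        # free_block is a half-open range (lo, hi) of unassigned addresses
        self.address = address
        self.free_lo, self.free_hi = free_block

    def assign_to_neighbour(self):
        """Give the joining node an address plus half of our free block."""
        if self.free_hi - self.free_lo < 2:
            raise RuntimeError("address space exhausted; must borrow or merge")
        mid = (self.free_lo + self.free_hi) // 2
        new_addr = mid                      # first address of the donated half
        donated_block = (mid + 1, self.free_hi)
        self.free_hi = mid                  # keep the lower half for ourselves
        return new_addr, donated_block


# Example: the first node owns the whole space, later nodes join one hop away.
root = CommonNode(address=0, free_block=(1, 256))
addr1, block1 = root.assign_to_neighbour()    # new node gets 128, block (129, 256)
node1 = CommonNode(addr1, block1)
addr2, block2 = node1.assign_to_neighbour()   # next node gets 192, block (193, 256)
print(addr1, block1, addr2, block2)
```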

10.
Multicore processors can provide sufficient computing power and flexibility for complex streaming applications, such as high-definition video processing. To reduce hardware complexity and power consumption, a distributed scratchpad memory architecture is considered instead of a cache memory architecture. However, the distributed design poses new challenges to programming. It is difficult to exploit all available capabilities and achieve maximal throughput, due to the combined complexity of inter-processor communication, synchronization, and workload balancing. In this study, we developed an efficient design flow for parallelizing multimedia applications on a distributed scratchpad memory multicore architecture. An application is first partitioned into streaming components and then mapped onto multicore processors. Various hardware-dependent factors and application-specific characteristics are taken into account in generating efficient task partitions and allocating resources appropriately. To test and verify the proposed design flow, three popular multimedia applications were implemented: a full-HD motion JPEG decoder, an object detector, and a full-HD H.264/AVC decoder. For demonstration purposes, the SONY PlayStation® 3 was selected as the target platform. Simulation results show that, on the PS3, the full-HD motion JPEG decoder with the proposed design flow can decode about 108.9 frames per second (fps) in the 1080p format. The object detection application can perform real-time object detection at 2.84 fps at 1280 × 960 resolution, 11.75 fps at 640 × 480 resolution, and 62.52 fps at 320 × 240 resolution. The full-HD H.264/AVC decoder can achieve nearly 50 fps.
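The partition-then-map step can be pictured with a small sketch; the greedy least-loaded-core heuristic, the stage names, and the cost figures below are assumptions made for illustration, not the paper's actual design flow.

```python
# Illustrative sketch (not the paper's tool chain): greedily map streaming
# components onto cores so that the estimated per-core workload stays balanced.
# Component names and cost figures are made up for the example.

def map_components_to_cores(components, num_cores):
    """components: list of (name, estimated_cost); returns (core -> names, core -> load)."""
    assignment = {core: [] for core in range(num_cores)}
    load = {core: 0.0 for core in range(num_cores)}
    # Longest-processing-time-first heuristic: place heavy components first.
    for name, cost in sorted(components, key=lambda c: c[1], reverse=True):
        core = min(load, key=load.get)      # currently least-loaded core
        assignment[core].append(name)
        load[core] += cost
    return assignment, load


stages = [("parse", 2.0), ("idct", 5.0), ("motion_comp", 6.0),
          ("deblock", 4.0), ("color_convert", 3.0)]
mapping, load = map_components_to_cores(stages, num_cores=3)
print(mapping, load)
```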

11.
The Artificial Reaction Network (ARN) is a connectionist representation inspired by Cell Signalling Networks, belonging to the branch of A-Life known as Artificial Chemistry. Its purpose is to represent chemical circuitry and to explore the computational properties responsible for generating the emergent high-level behaviour associated with cells. In this paper, the computational mechanisms involved in pattern recognition and spatio-temporal pattern generation are examined in robotic control tasks. The results show that the ARN has application in limbed robotic control and shares computational functionality with Artificial Neural Networks. Like spiking neural models, the ARN can combine pattern recognition and complex temporal control functionality in a single network; however, it offers increased flexibility. Furthermore, the results illustrate parallels between emergent neural and cell intelligence.

12.
CARMEL-2 is a high-performance VLSI uniprocessor, tuned for Flat Concurrent Prolog (FCP). CARMEL-2 shows an almost 5-fold speedup over its predecessor, CARMEL-1, and it achieves 2,400 KLIPS executing append. This high execution rate was gained as a result of an optimized design, based on an extensive architecture-oriented execution analysis of FCP and the lessons learned from CARMEL-1. CARMEL-2 is a RISC processor in its character and performance. The instruction set includes only 29 carefully selected instructions. The 10 special instructions, the prudent implementation and pipeline scheme, as well as sophisticated mechanisms such as intelligent dereference, distinguish CARMEL-2 as a RISC processor for FCP.

13.
This article presents two new algorithms for finding the optimal solution of a multi-agent multi-objective reinforcement learning problem. Both algorithms make use of the concepts of modularization and acceleration by a heuristic function, as applied in standard reinforcement learning algorithms, to simplify and speed up the learning process of an agent that learns in a multi-agent multi-objective environment. To verify the performance of the proposed algorithms, we considered a predator-prey environment in which the learning agent plays the role of a prey that must escape the pursuing predator while reaching for food in a fixed location. The results show that combining modularization and acceleration using a heuristic function indeed simplifies and speeds up the learning process in a complex problem, compared with algorithms that do not use acceleration or modularization techniques, such as Q-Learning and Minimax-Q.
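To illustrate the acceleration idea, here is a minimal sketch of Q-learning whose greedy action choice is biased by a heuristic function, in the spirit of heuristically accelerated reinforcement learning; the class name, constants, and update details are assumptions for the example, not the authors' algorithms. In a modular setting, one such learner could be kept per objective (escaping the predator, reaching the food) and their action preferences combined.

```python
# Illustrative sketch (not the authors' exact algorithm): Q-learning whose
# action choice is biased by a heuristic function H(s, a). All names and
# constants here are assumptions made for the example.

import random
from collections import defaultdict

class HeuristicQLearner:
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1, xi=1.0):
        self.Q = defaultdict(float)          # Q[(state, action)]
        self.H = defaultdict(float)          # heuristic bonus H[(state, action)]
        self.actions = actions
        self.alpha, self.gamma, self.epsilon, self.xi = alpha, gamma, epsilon, xi

    def choose(self, state):
        if random.random() < self.epsilon:   # exploration
            return random.choice(self.actions)
        # Exploitation: the heuristic only biases action selection;
        # it is never written into the Q-values themselves.
        return max(self.actions,
                   key=lambda a: self.Q[(state, a)] + self.xi * self.H[(state, a)])

    def update(self, state, action, reward, next_state):
        best_next = max(self.Q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.Q[(state, action)] += self.alpha * (td_target - self.Q[(state, action)])


learner = HeuristicQLearner(actions=["north", "south", "east", "west"])
learner.H[("start", "east")] = 5.0           # heuristic hint: food lies to the east
a = learner.choose("start")
learner.update("start", a, reward=0.0, next_state="corridor")
```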

14.
Cloud computing is a more advanced technology for distributed processing than, e.g., thin clients and grid computing, and is implemented by means of virtualization technology for servers and storage together with advanced network functionality. However, this technology has certain disadvantages, such as monotonous routing for attacks and easily available attack methods and tools. This means that, in the worst case, all network resources and operations are blocked at once. Various studies, such as pattern analyses and network-based access control for infringement response based on Infrastructure as a Service, Platform as a Service, and Software as a Service in cloud computing services, have therefore been conducted recently. This study proposes a method that integrates detection of HTTP GET flooding, one of the Distributed Denial-of-Service attacks, with MapReduce processing for fast attack detection in a cloud computing environment. In addition, experiments on processing time were conducted to compare the performance with pattern detection of the attack features using Snort detection based on HTTP packet patterns and log data from a web server. The experimental results show that the proposed method is better than Snort detection because its processing time is shorter as congestion increases.
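A toy MapReduce-style pass over web-server logs shows the flavour of the approach: the map step emits a count per source IP for GET requests, and the reduce step flags sources above a threshold. The log format, field positions, and threshold below are assumptions for the example, not the paper's implementation.

```python
# Illustrative sketch (not the paper's implementation): a MapReduce-style
# count of HTTP GET requests per source IP over web-server log lines, with a
# simple threshold to flag suspected GET-flooding sources.

from itertools import groupby
from operator import itemgetter

def map_phase(log_lines):
    """Emit (source_ip, 1) for every GET request found in the logs."""
    for line in log_lines:
        if '"GET' in line:              # request method in common/combined log format
            yield line.split()[0], 1    # first field assumed to be the client IP

def reduce_phase(pairs, threshold=1000):
    """Sum counts per IP and flag IPs whose GET count exceeds the threshold."""
    for ip, group in groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0)):
        total = sum(count for _, count in group)
        if total > threshold:
            yield ip, total             # suspected HTTP GET flooding source


logs = ['10.0.0.5 - - [01/Jan/2024] "GET /index.html HTTP/1.1" 200 512'] * 1500
print(list(reduce_phase(map_phase(logs))))   # -> [('10.0.0.5', 1500)]
```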

15.
We consider NP-hard integer-valued multiindex problems of transportation type. We distinguish a subclass of polynomially solvable multiindex problems, namely multiindex problems with decomposition structure. We construct a general scheme for a heuristic method to solve a number of similar NP-hard decompositional multiindex problems. For one implementation of this scheme, we estimate its deviation from the optimum. We illustrate our results with the example of designing a class schedule.
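For intuition about what a heuristic for such problems can look like, here is a simple greedy sketch for a three-index transportation instance; it is not the authors' decomposition-based scheme, it carries no optimality or feasibility guarantee, and the instance is made up.

```python
# Illustrative sketch: a greedy heuristic for a three-index transportation
# problem. Amounts are shipped into the cheapest cell whose three marginal
# capacities are not yet exhausted. Not guaranteed optimal (or even to place
# all supply in general); it only illustrates the heuristic flavour.

import itertools

def greedy_three_index(cost, a, b, c):
    """cost[i][j][k]: unit cost; a, b, c: integer capacities per index."""
    a, b, c = list(a), list(b), list(c)
    plan = {}
    cells = sorted(itertools.product(range(len(a)), range(len(b)), range(len(c))),
                   key=lambda ijk: cost[ijk[0]][ijk[1]][ijk[2]])
    for i, j, k in cells:
        amount = min(a[i], b[j], c[k])   # ship as much as all three margins allow
        if amount > 0:
            plan[(i, j, k)] = amount
            a[i] -= amount
            b[j] -= amount
            c[k] -= amount
    return plan


cost = [[[4, 2], [3, 5]], [[1, 6], [2, 2]]]
print(greedy_three_index(cost, a=[3, 2], b=[2, 3], c=[4, 1]))
```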

16.
Low fertility and rapid out-migration in Romania are consequential for migrants who confront the challenge of providing support to ageing parents. Systematic data allowing examination of intergenerational support are difficult to find for Eastern Europe, a region undergoing demographic and socio-economic transition. Using recently collected data from Romania, this study models monetary and instrumental support from an adult child to an older parent as a function of location of residence and additional covariates. The models assume Romanian families operate within an integrative family framework, wherein support obligations are shared across a family network and support probabilities depend upon characteristics of the provider and of the older parent. Multilevel multinomial models with random intercepts indicate that international migrants are likely to give money; migrants within Romania and those living in the same locality as their parents are unlikely to give money but likely to provide instrumental support. However, the specific probabilities vary depending on whether the child has siblings and where those siblings live. Support is more likely to be provided to rural parents and to parents with functional limitations. The results elucidate the degree to which, and why, support is provided within a rapidly ageing environment.

17.
We present a logic programming language, GCLA (Generalized horn Clause LAnguage), that is based on a generalization of Prolog. This generalization is unusual in that it takes a quite different view of the meaning of a logic program: a "definitional" view rather than the traditional logical view. GCLA has a number of noteworthy properties, for instance hypothetical and non-monotonic reasoning. This makes the implementation of reasoning in knowledge-based systems more direct in GCLA than in Prolog. GCLA is also general enough to incorporate functional programming as a special case. GCLA and its syntax and semantics are described. The use of various language constructs is illustrated with several examples.

18.
With the rapid growth of video surveillance applications, the storage energy consumption of video surveillance has become more noticeable, but existing energy-saving methods for massive storage systems mostly concentrate on data centers dominated by random accesses. Video surveillance storage has an inherent access pattern and requires a special energy-saving approach to save more energy. Semi-RAID, an energy-efficient data layout for video surveillance, is proposed. It adopts a partial-parallelism strategy, which partitions disk data into different groups and implements parallel accesses within each group. Grouping makes it possible to keep only some of the disks working while the rest stay idle, and intra-group parallelism provides the performance guarantee. In addition, a greedy strategy for address allocation is adopted to effectively prolong the idle periods of the disks, and particular cache strategies are used to filter out the small amount of random accesses. The energy-saving efficiency of Semi-RAID is verified with a simulated video surveillance workload consisting of 32 cameras at D1 resolution. The experiments show that Semi-RAID can save 45 % more energy than Hibernator, 80 % more than PARAID, 33 % more than MAID, and 79 % more than eRAID-5, while providing single-disk fault tolerance and meeting performance requirements such as throughput.
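The partial-parallelism idea can be sketched as a block-placement function that fills one disk group at a time and stripes only within that group; the group sizes and the exact mapping below are assumptions for illustration, not Semi-RAID's actual layout.

```python
# Illustrative sketch (not Semi-RAID's actual layout): logical blocks are
# written group by group -- the current group's disks stripe the data in
# parallel while every other group can stay spun down. Numbers are assumptions.

DISKS_PER_GROUP = 4
NUM_GROUPS = 4
BLOCKS_PER_DISK = 1_000_000

BLOCKS_PER_GROUP = DISKS_PER_GROUP * BLOCKS_PER_DISK

def place_block(logical_block):
    """Map a logical block number to (group, disk_in_group, offset_on_disk)."""
    group = logical_block // BLOCKS_PER_GROUP        # greedy: fill groups in order
    within = logical_block % BLOCKS_PER_GROUP
    disk = within % DISKS_PER_GROUP                  # stripe inside the group
    offset = within // DISKS_PER_GROUP
    return group, disk, offset


# Sequential surveillance writes touch only group 0 until it is full,
# so groups 1..3 can remain idle and save energy.
for blk in (0, 1, 2, 3, 4, 4_000_000):
    print(blk, place_block(blk))
```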

19.
The debuggers of Ref. 11) and most of their derivatives are of the meta-interpreter type. The computation of the debugger tracks the computation of the program to be diagnosed at the level of procedure calls. This is adequate if the programmer's intuitive understanding is in terms of procedure calls, as is indeed the case in Prolog. In LDL, however, while the semantics of the language are described in a bottom-up, fixpoint model of computation,8) the actual execution of a program is a complex sequence of low-level procedure calls determined (and optimized) by the compiler. Consequently, a trace of these procedure calls is of little use to the programmer. Further, one cannot "execute" an LDL program as if it were a Prolog program; the program may simply not terminate in its Prolog reading, and several LDL constructs have no obvious Prolog counterparts. We identify the origin of a fault in the LDL program by a top-down, query/subquery approach. The basic debugger, implemented in Prolog, is a shell program between the programmer and the LDL program: it poses queries and uses the results to drive the interaction with the user. It closely resembles the one presented in Ref. 11). The core of a more sophisticated debugger is presented as well. Several concepts are introduced in order to quantify debugging abilities. One is that of a generated interpretation, in which the structureless intended interpretation of Ref. 11) is augmented with causality. Another is the (idealized) concept of a reliable oracle. We show that given an incorrect program and a reliable oracle which uses a generated interpretation, a cause for the fault will be found in finitely many steps. This result carries over to the more sophisticated debugger.
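To convey the top-down query/subquery idea, here is a minimal sketch of fault localisation over a computation tree with an oracle; the data structures, names, and toy program are assumptions for illustration and do not reproduce the paper's debugger.

```python
# Illustrative sketch (not the paper's debugger): top-down fault localisation
# over a computation tree. Each node records a derived query result and its
# sub-queries; an oracle (here, a callable standing in for the programmer)
# says whether a result matches the intended interpretation. The node whose
# result is wrong while all of its sub-results are right is reported as the
# source of the fault.

class Node:
    def __init__(self, query, result, children=()):
        self.query, self.result, self.children = query, result, list(children)

def locate_fault(node, oracle):
    """Return the deepest node that is incorrect but has only correct children."""
    if oracle(node.query, node.result):
        return None                       # this sub-computation is fine
    for child in node.children:
        culprit = locate_fault(child, oracle)
        if culprit is not None:
            return culprit                # fault lies deeper in the tree
    return node                           # wrong result, correct children: fault here


# Toy example: the oracle stub accepts the parent/2 results but rejects the
# grandparent/2 result, so the fault is localised to the grandparent node.
tree = Node("grandparent(ann, X)", "X = bob",
            [Node("parent(ann, Y)", "Y = carl"), Node("parent(carl, X)", "X = bob")])
oracle = lambda query, result: not query.startswith("grandparent")
print(locate_fault(tree, oracle).query)   # -> grandparent(ann, X)
```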

20.
This paper describes two research projects, both involving Latin and Greek lexicography, undertaken at the University of Florence and at the Italian National Research Council respectively. One involves the creation of a dictionary of Justinian's constitutions based on the emperor's legislative lexicon as formed in the Corpus Iuris and elsewhere; the most demanding aspect of this task has been the creation of the Dictionary of the Novellae. The other involves the creation of a Lexicon of the Novellae in the Authenticum version.
