Similar Documents
20 similar documents found (search time: 15 ms)
1.
The automated physical mapping system described here is currently being used to complete physical maps of both the human and the mouse genomes. The original target of 5,000 screenings per day has been surpassed, with about 18,000 screenings now performed daily on a routine basis. Thus far, 3,500 STSs have been mapped, and the remaining 6,500 required for the first low-density STS physical map are likely to be completed very early in 1995. Many of the biochemical and automation techniques used for this physical mapping project may also have applications in other areas such as DNA genotyping and DNA diagnostics. For example, it may become possible to screen large populations for particular genetic diseases (such as cystic fibrosis and a propensity for certain cancers) before any symptoms appear, allowing potential treatments to begin as early as possible.

2.
A software tool for learning about stochastic models
The Symbolic Hierarchical Automated Reliability/Performance Evaluator (SHARPE), a software system that analyzes stochastic models, is discussed. SHARPE allows students to set up and solve a variety of model types, to compare results for different models of the same system, to see how altering system parameters affects measures of effectiveness of the system, and to experiment with modeling techniques, including the use of exact and approximate system or model decomposition. It can also be used to illustrate problems of large state spaces and stiff systems and to provide examples of methods for avoiding these problems. Using SHARPE, one can specify and analyze the following model types separately or in combination: fault trees, reliability block diagrams, reliability graphs, product-form queuing networks, series-parallel acyclic directed graphs, Markov and semi-Markov chains, and generalized stochastic Petri nets.

3.
The Whitehead/MIT Center for Genome Research is engaged in several high-throughput genome-mapping projects, including physical mapping of the human genome and genetic-linkage mapping of the mouse genome. The scale and complexity of the laboratory workflows for these projects make a laboratory information system a necessity. For example, the human physical mapping project has performed over 1.2 million experimental steps to date, many of which involve numerous biochemical assays. The result is a physical map containing over 13 thousand sequence-tagged-site markers, with a near-term goal of 15 thousand markers. These markers will likely be used as a starting point for producing a higher-resolution map, which in turn will provide the raw material for sequencing the human genome. LabBase is a database management system (DBMS) tailored to the needs of laboratory information systems that support such projects. It is designed to make it easy to keep track of laboratory samples, the experimental steps performed on them, and the results of these experiments. LabBase is functionally specialised because it provides special support for the requirements of managing laboratory data over and above what would be provided by a generic, off-the-shelf DBMS.

4.
The original goals of the Human Genome Project (HGP) were: 1) construction of a high-resolution genetic map of the human genome; 2) production of a variety of physical maps of all human chromosomes and of the DNA of selected model organisms; 3) determination of the complete sequence of human DNA and of the DNA of selected model organisms; 4) development of capabilities for collecting, storing, distributing, and analyzing the data produced; and 5) creation of appropriate technologies necessary to achieve these objectives. Here, the authors assert that the most pressing information-infrastructure requirement now facing the HGP is achieving better interoperation among electronic information resources. Other needs may be equally important (better methods to support large-scale sequencing and mapping, for example), but none are as pressing. The problem of interoperability grows exponentially with the data. Efforts to develop distributed information publishing systems are now underway in many locations. If the needs of the genome project are not soon defined and articulated, they will not be addressed by these external projects. De facto standards will emerge, and if these prove inadequate for scientific data publishing, the research community will have little choice but to tolerate this inadequacy indefinitely.

5.
In the tableau approach to large-electrical-network analysis, as well as in structural analysis, the finite-element method, linear programming, etc., a very sparse linear algebraic system Ax = b has to be solved repeatedly. To solve the system efficiently via Gaussian elimination, an optimization problem has to be faced: selecting a pivot strategy that maintains the sparsity of the matrix A. It is also possible to follow a different strategy to fully exploit the sparsity of A, i.e., to transform A into an equivalent but more convenient form. Both of these problems have been studied and partially solved by means of directed graphs associated with A when symmetric permutations on A are allowed. In this paper, a graph-theoretical interpretation is given to nonsymmetric permutations on A, which can be considered a fundamental step towards the solution of the above-mentioned optimization problems. This interpretation is obtained through decomposition theorems on nonsymmetric permutations and correspondence theorems between column (row) permutations and topological operations on a directed graph representing A.
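The abstract discusses pivot selection for sparsity only abstractly; as a concrete point of reference, here is a minimal Python sketch of the classic Markowitz criterion for choosing pivots in sparse Gaussian elimination. It is not taken from the paper, and the dense-matrix representation, function name, and tolerance are illustrative choices only.

```python
# Minimal sketch of Markowitz-style pivot selection for sparse Gaussian
# elimination: at each step pick the pivot minimising (r_i - 1) * (c_j - 1),
# a classic heuristic for limiting fill-in (illustrative, not the paper's method).
import numpy as np

def markowitz_order(A, tol=1e-12):
    """Return a list of (row, col) pivots chosen to help preserve sparsity."""
    A = A.astype(float).copy()
    n = A.shape[0]
    active_rows, active_cols = set(range(n)), set(range(n))
    pivots = []
    for _ in range(n):
        best, best_cost = None, None
        for i in active_rows:
            for j in active_cols:
                if abs(A[i, j]) > tol:
                    r = sum(abs(A[i, k]) > tol for k in active_cols)  # row count
                    c = sum(abs(A[k, j]) > tol for k in active_rows)  # column count
                    cost = (r - 1) * (c - 1)
                    if best_cost is None or cost < best_cost:
                        best, best_cost = (i, j), cost
        if best is None:
            break                               # structurally singular remainder
        i, j = best
        pivots.append(best)
        for k in active_rows - {i}:             # eliminate column j elsewhere
            if abs(A[k, j]) > tol:
                A[k, :] -= A[k, j] / A[i, j] * A[i, :]
        active_rows.remove(i)
        active_cols.remove(j)
    return pivots

A = np.array([[4., 0., 1., 0.],
              [0., 3., 0., 0.],
              [1., 0., 2., 1.],
              [0., 0., 1., 5.]])
print(markowitz_order(A))
```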

6.
UAV photogrammetry without ground control points can substantially improve production efficiency and reduce cost, and it is particularly advantageous in areas that surveyors cannot reach, but the approach also suffers from several problems. First, on-the-job calibration cannot recover accurate camera parameters, and an inaccurate camera principal distance severely degrades the elevation of target points. Second, when the mapping task is carried out in a map projection coordinate system, projection distortion and earth curvature also affect elevation accuracy. This paper therefore analyzes the specific causes of the elevation error, performs camera self-calibration in a geocentric coordinate system using cross (calibration) flight lines, then obtains the exterior orientation elements of the images in a map projection coordinate system without cross flight lines, and finally corrects the elevation errors caused by map projection distortion and earth curvature. Experimental results show that the elevation root-mean-square errors of two data sets were reduced from 0.298 m and 0.374 m to 0.075 m and 0.080 m respectively, i.e., better than 0.1 m in both cases, so the proposed method enables accurate mapping in a map projection coordinate system without ground control points.
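To give a feel for the magnitudes involved, the following back-of-the-envelope Python sketch (not the paper's algorithm) computes the two systematic effects the abstract says must be corrected: the apparent height drop due to earth curvature and the length distortion of a UTM-style transverse Mercator projection, both under a simple spherical-earth assumption.

```python
# Rough illustration of earth-curvature drop and transverse Mercator scale
# distortion (spherical approximation; constants and names are illustrative).
EARTH_RADIUS_M = 6_371_000.0  # mean earth radius

def curvature_drop(distance_m: float, radius_m: float = EARTH_RADIUS_M) -> float:
    """Approximate apparent drop of the ground surface at a horizontal distance."""
    return distance_m ** 2 / (2.0 * radius_m)

def tm_scale_factor(offset_from_central_meridian_m: float,
                    k0: float = 0.9996,
                    radius_m: float = EARTH_RADIUS_M) -> float:
    """Approximate point scale factor of a UTM-style transverse Mercator projection."""
    y = offset_from_central_meridian_m
    return k0 * (1.0 + y ** 2 / (2.0 * radius_m ** 2))

if __name__ == "__main__":
    # Over a 1 km footprint the curvature drop is already ~8 cm, the same order
    # as the 0.075-0.080 m RMSE reported in the abstract.
    print(f"curvature drop over 1 km: {curvature_drop(1_000):.3f} m")
    print(f"scale factor 200 km from the central meridian: "
          f"{tm_scale_factor(200_000):.6f}")
```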

7.
Pursuing optimal solutions for large-scale transmission network planning problems is a formidable task because of their combinatorial nature and the nonconvexities involved. Successful approaches using hierarchical Benders decomposition incur a high computational cost, mainly because a large integer program (the investment sub-problem) must be solved at every Benders iteration. In this work the authors propose to use heuristics within the decomposition framework, thereby avoiding solving each integer sub-problem to optimality. The global computational effort is substantially reduced, which allows coping with large problems that would be intractable using classical combinatorial techniques. Case studies with the 6-bus Garver test system and a reduced Southeastern Brazilian power network are presented and discussed.

8.
To address the problems of low mapping accuracy, map ghosting and drift that arise when building maps of large outdoor scenes, a real-time localization and mapping system that fuses filtering with graph-optimization theory is proposed. The system consists of three parts: point cloud preprocessing, a filtering-based tightly coupled inertial odometry, and back-end pose-graph optimization. First, the preprocessing stage segments the ground with a random sample consensus (RANSAC) algorithm and extracts the ground-model parameters to build a ground-constraint factor for the back-end optimization. Next, the tightly coupled front-end odometry uses an iterated error-state Kalman filter, taking the LiDAR odometry as the observation and the IMU preintegration result as the prediction; by constructing a joint cost function, the filter fusion yields a comparatively accurate LiDAR-inertial odometry. Finally, the back end applies graph-optimization theory, introducing loop-closure factors, ground-constraint factors, and scan-to-map matching odometry factors as constraints to build a factor graph and optimize the map poses. The loop-closure factor uses an improved scan-context loop detection algorithm for place recognition, which reduces the rate of environment misrecognition. The proposed algorithm was used to build maps of several scenes, including outdoor factory buildings, a parking lot, and an indoor workshop; the accumulated deviations in the distance, horizontal, and elevation directions are all kept to about 10 cm, so the method effectively eliminates map ghosting and drift with high robustness and high accuracy.
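As a simplified illustration of the preprocessing step described above, here is a minimal RANSAC ground-plane segmentation sketch in Python; the function and parameter names are illustrative, not the paper's.

```python
# Minimal RANSAC ground-plane fit: a*x + b*y + c*z + d = 0 (illustrative only).
import numpy as np

def ransac_ground_plane(points, n_iters=200, dist_thresh=0.15, rng=None):
    """Fit a plane to a point cloud with RANSAC; returns (plane, inlier_mask)."""
    rng = np.random.default_rng(rng)
    best_plane, best_inliers = None, np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)   # point-to-plane distances
        inliers = dist < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_plane, best_inliers = np.append(normal, d), inliers
    return best_plane, best_inliers

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ground = np.column_stack([rng.uniform(-20, 20, 2000),
                              rng.uniform(-20, 20, 2000),
                              rng.normal(0.0, 0.03, 2000)])     # z ~ 0 plane
    clutter = rng.uniform([-20, -20, 0.5], [20, 20, 5.0], (500, 3))
    cloud = np.vstack([ground, clutter])
    plane, mask = ransac_ground_plane(cloud)
    print("plane:", np.round(plane, 3), "inliers:", int(mask.sum()))
```

The fitted plane parameters are the kind of quantity that could then be fed into the back end as a ground-constraint factor.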

9.
Hopfield has shown that combinatorial optimization problems can be solved on an artificial neural network by minimizing a quadratic energy function. One of the difficulties in applying the network to actual problems is that it converges to local-minimum solutions very slowly, because a sigmoid function is used as the input-output function of the neurons. To overcome this difficulty, this paper proposes an accelerated Hopfield neural network that can control the speed of convergence near the local minima through an acceleration parameter. Computational results for combinatorial problems with two and with twenty-five variables show that: (1) the proposed model converges to the local minima more quickly than the conventional model; (2) accelerating the convergence changes the attraction region of each local minimum and worsens the accuracy of the solution; and (3) if the initial point is selected around the center of the unit hypercube, the proposed network converges to a local minimum very quickly with high accuracy, and these good properties are unaffected by the acceleration parameter.
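For readers unfamiliar with the model class, the sketch below simulates a generic continuous Hopfield network descending a two-variable quadratic energy, with a step-size parameter standing in for an "acceleration" knob. This is only a stand-in for the idea; it is not the authors' specific accelerated formulation.

```python
# Generic continuous-Hopfield energy descent for E(v) = -0.5*v^T W v - b^T v
# (illustrative; `accel` here simply scales the integration step).
import numpy as np

def hopfield_descend(W, b, v0, gain=5.0, accel=1.0, dt=0.01, steps=2000):
    """Discrete-time simulation of du/dt = -u + W v + b, with v = sigmoid(gain*u)."""
    v = np.asarray(v0, dtype=float)
    u = np.arctanh(np.clip(2 * v - 1, -0.999, 0.999)) / gain   # inverse sigmoid
    for _ in range(steps):
        du = -u + W @ v + b
        u = u + accel * dt * du          # accel scales the effective step size
        v = 1.0 / (1.0 + np.exp(-gain * u))
    return v

W = np.array([[0.0, -2.0], [-2.0, 0.0]])   # penalise v0 and v1 both being 1
b = np.array([1.0, 1.0])
for accel in (1.0, 5.0):
    v = hopfield_descend(W, b, v0=[0.45, 0.55], accel=accel, steps=500)
    print(f"accel={accel}: v={np.round(v, 3)}")
```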

10.
Tabu search algorithm for network synthesis
Large-scale combinatorial problems such as the network expansion problem present a remarkably high number of alternative configurations with practically the same investment but with substantially different structures (configurations obtained with different sets of circuit/transformer additions). The proposed parallel tabu search algorithm has proved effective in exploring this type of optimization landscape. The algorithm is a third-generation tabu search procedure with several advanced features; the authors describe it as the most comprehensive combinatorial optimization technique available for treating difficult problems such as transmission expansion planning. The method includes features of a variety of other approaches, such as heuristic search, simulated annealing, and genetic algorithms. In all test cases studied there are new generation and load sites that can be connected to an existing main network; such connections may require more than one line or transformer addition, which makes the problem harder in the sense that more combinations have to be considered.
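To make the general mechanism concrete, here is a bare-bones tabu-search skeleton in Python applied to a toy expansion problem (pick circuit additions so total capacity meets demand at minimum cost). It is illustrative only and bears no relation to the parallel, third-generation algorithm of the paper; all data and names are invented for the example.

```python
# Generic tabu search: flip-one-bit neighbourhood, short-term tabu list,
# and an aspiration criterion (illustrative toy problem, not the paper's).
import random

COSTS      = [10, 14, 7, 22, 9, 12, 18, 5]     # investment cost per candidate
CAPACITIES = [30, 45, 20, 80, 25, 40, 60, 15]   # MW added per candidate
DEMAND     = 150                                 # MW that must be covered
PENALTY    = 100                                 # cost per MW of unmet demand

def objective(x):
    unmet = max(0, DEMAND - sum(c for c, xi in zip(CAPACITIES, x) if xi))
    return sum(c for c, xi in zip(COSTS, x) if xi) + PENALTY * unmet

def tabu_search(n_iter=300, tenure=5, seed=0):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in COSTS]
    best, best_val = x[:], objective(x)
    tabu = {}                                    # position -> iteration it is tabu until
    for it in range(n_iter):
        candidates = []
        for j in range(len(x)):
            y = x[:]; y[j] ^= 1
            val = objective(y)
            is_tabu = tabu.get(j, -1) >= it
            if not is_tabu or val < best_val:    # aspiration criterion
                candidates.append((val, j, y))
        val, j, x = min(candidates)              # best admissible neighbour
        tabu[j] = it + tenure                    # forbid reversing this move for a while
        if val < best_val:
            best, best_val = x[:], val
    return best, best_val

print(tabu_search())
```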

11.
This paper presents the application of the particle swarm optimization (PSO) technique and its variants to the least-cost generation expansion planning (GEP) problem. The GEP problem is a highly constrained combinatorial optimization problem that, in principle, can be solved by complete enumeration. PSO is one of the swarm intelligence (SI) techniques, which use group intelligence behavior along with individual intelligence to solve combinatorial optimization problems. A novel 'virtual mapping procedure' (VMP) is introduced to enhance the effectiveness of the PSO approaches. A penalty function approach (PFA) is used to reduce the number of infeasible solutions in subsequent iterations. In addition to simple PSO, many variants, such as the constriction factor approach (CFA), the Lbest model, hybrid PSO (HPSO), stretched PSO (SPSO), and composite PSO (C-PSO), are also applied to the test systems. The differential evolution (DE) technique is used for parameter setting of C-PSO. PSO and its variants are applied to a synthetic test system of five types of candidate units with 6- and 14-year planning horizons. The results obtained are compared with dynamic programming (DP) in terms of speed and efficiency.
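The sketch below shows a bare-bones PSO with a penalty-function constraint handler on a toy constrained problem. It illustrates only the generic PSO-plus-PFA idea mentioned in the abstract; it does not implement the paper's virtual mapping procedure or any of the listed variants, and the test function is invented.

```python
# Bare-bones PSO with a quadratic penalty: minimise sum(x_i^2) s.t. sum(x_i) >= 5.
import numpy as np

def fitness(x, penalty=1e3):
    violation = max(0.0, 5.0 - x.sum())            # constraint: sum(x) >= 5
    return np.sum(x ** 2) + penalty * violation ** 2

def pso(dim=3, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-10, 10, (n_particles, dim))    # positions
    v = np.zeros_like(x)                            # velocities
    pbest = x.copy()
    pbest_val = np.array([fitness(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()            # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([fitness(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, fitness(g)

best_x, best_f = pso()
print(np.round(best_x, 3), round(best_f, 3))        # expect x_i ~ 5/3, f ~ 25/3
```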

12.
The authors explore the possibility of applying the Hopfield neural network to combinatorial optimization problems in power systems, in particular to unit commitment. The large number of inequality constraints involved in unit commitment can be handled by dedicated neural networks. As an exact mapping of the problem onto the neural network is impossible with the current state of the art, a two-step solution method was developed: first, the generators to be started up in each period are determined by the network, and then their outputs are adjusted by a conventional algorithm. The proposed neural network could solve a large-scale unit commitment problem with 30 generators over 24 periods, and the results obtained were very encouraging.

13.
In this paper, a method for teaching advanced features of evolutionary algorithms (EAs) using the well-known logic games called Japanese puzzles is presented. The authors show that Japanese puzzles are constrained combinatorial optimization problems that can be solved using EAs with different encodings, and that they are challenging problems for EAs. Other features, such as special operators, local search heuristics, and their hybridization with genetic algorithms, can also be taught using these puzzles. The authors report an experience using this method in a course taught at the Universidad de Alcalá, Madrid, Spain.

14.
The study of normalization is a fundamental topic covered in most introductory database courses taught by departments of Computer/Electrical Engineering and Computer Science. The typical pedagogical approach to normalization presents several classical algorithms, based upon the application of axioms and lemmas for manipulating functional dependencies, that can be used in the process of relational decomposition and synthesis. In this paper, an augmentation to the traditional pedagogical strategy is presented for introducing students to normalization and relational synthesis concepts. This augmentation transforms semantic concepts into Boolean form that can be easily manipulated with the Karnaugh map. The Karnaugh map provides an especially useful method for illustrating the process of determining the candidate keys of a relation, as well as simplifying the mechanics of manipulating functional dependencies that are required for database decomposition and synthesis. Moreover, students find the Karnaugh-map-based techniques faster for most calculations, as well as easier to apply than conventional algorithms, since most engineering students are more familiar with combinatorial Boolean algebra than with the algebra of functional relations.
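For comparison, the sketch below shows the conventional closure-based way of finding candidate keys from functional dependencies, i.e., the baseline the Karnaugh-map technique is meant to simplify. The code and example relation are illustrative, not taken from the paper.

```python
# Conventional candidate-key search via attribute closure (illustrative).
from itertools import combinations

def closure(attrs, fds):
    """Attribute closure of `attrs` under functional dependencies `fds`."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if set(lhs) <= result and not set(rhs) <= result:
                result |= set(rhs)
                changed = True
    return result

def candidate_keys(attributes, fds):
    """All minimal attribute sets whose closure is the full relation schema."""
    keys = []
    for size in range(1, len(attributes) + 1):
        for combo in combinations(sorted(attributes), size):
            if closure(combo, fds) == set(attributes):
                if not any(set(k) <= set(combo) for k in keys):  # keep only minimal sets
                    keys.append(combo)
    return keys

# Example relation R(A, B, C, D) with F = {A -> B, B -> C, CD -> A}
fds = [("A", "B"), ("B", "C"), ("CD", "A")]
print(candidate_keys("ABCD", fds))     # expect [('A', 'D'), ('B', 'D'), ('C', 'D')]
```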

15.
In this paper, a new codification is proposed for various meta-heuristic techniques to solve the reconfiguration problem of distribution networks. The full potential of meta-heuristic algorithms can be exploited through an efficient codification built on engineering knowledge. Distribution system reconfiguration problems are non-differentiable, mixed-integer, and highly complex combinatorial problems, and the radiality constraint typically increases the intricacy of meta-heuristic evolutionary algorithms. The proposed codification is based upon the fundamentals of graph theory, which not only restricts the search space but also avoids tedious mesh checks. The codification is computationally efficient and guarantees that only feasible radial topologies are generated at all times. The proposed method has been tested on three different test distribution systems and the results are promising.
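One simple graph-theoretic way to guarantee radiality is to build candidate configurations as random spanning trees with a union-find structure, as in the Python sketch below. This only illustrates the general idea of restricting the search space to feasible radial topologies; it is not the paper's specific codification, and the 6-bus example data are invented.

```python
# Randomised Kruskal-style construction of radial (spanning-tree) configurations.
import random

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]       # path compression
        x = parent[x]
    return x

def random_radial_config(n_buses, branches, rng):
    """Return a subset of branches forming a random spanning tree over the buses."""
    parent = list(range(n_buses))
    order = branches[:]
    rng.shuffle(order)
    closed = []
    for u, v in order:
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:                        # closing this switch keeps the network radial
            parent[ru] = rv
            closed.append((u, v))
    return closed

# 6-bus example: every listed branch carries a sectionalising/tie switch.
branches = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (1, 4), (2, 5)]
rng = random.Random(42)
for _ in range(3):
    cfg = random_radial_config(6, branches, rng)
    print(len(cfg), "closed branches:", cfg)   # always n_buses - 1 = 5, no mesh check needed
```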

16.
Through a study of a series of OFDM physical-layer communication techniques, including error-correction coding, time-frequency interleaving, diversity, channel equalization, and channel mapping, and through careful selection and repeatedly optimized combination of these techniques, the problems of time variation, noise, attenuation, and distortion are overcome, yielding a robust system capable of stable and efficient communication over the harsh power-line channel and thereby solving some of the problems in the local communication channel of the electricity consumption information acquisition system. In addition, a set of methods for evaluating carrier communication performance is designed, and several common carrier schemes are comparatively tested to verify the reasonableness and correctness of the evaluation method.

17.
Several issues to note when printing drawings from AutoCAD
巴彤 《电力学报》 (Journal of Electric Power), 2005, 20(1): 33-34
The paper introduces the correct operating procedure and the issues to note when printing drawings with the AutoCAD drafting software, covering the choice of display card, drawing layout, scale selection, setting of the print area and plot density, and setting of line weights.

18.
Localization is a critical problem in intelligent vehicle research. Although it can be achieved with a real-time kinematic global positioning system (RTK-GPS), possibly fused with other methods such as dead reckoning, equipping every vehicle with such an expensive sensor may be unfeasible. This paper proposes a ground-texture-based map-matching approach to address the localization problem. To reduce the effect of complicated illumination in outdoor environments, a camera is fixed facing downward at the bottom of the vehicle, and controllable lights are mounted around the camera to provide consistent illumination. The proposed approach includes two steps: 1) mapping and 2) localization. RTK-GPS is used only in the mapping step, where sensor data from the camera and odometry are captured with time stamps to create a global ground-texture map; a multiple-view registration-based optimization algorithm is applied to improve map accuracy. In the localization step, the vehicle pose is estimated by matching the current camera frame with the best submap frame and applying a fusion strategy. Results from both synthetic and real experiments prove the feasibility and effectiveness of the proposed approach.
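As a toy illustration of the matching step, the sketch below locates a downward-camera frame inside a stored ground-texture submap by brute-force normalised cross-correlation. It is greatly simplified relative to the paper's registration and fusion pipeline, and the synthetic data are invented.

```python
# Brute-force normalised cross-correlation matching of a frame against a submap.
import numpy as np

def ncc(a, b):
    a = a - a.mean(); b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_frame(submap, frame):
    """Return (row, col, score) of the best placement of `frame` in `submap`."""
    H, W = frame.shape
    best = (-1, -1, -1.0)
    for r in range(submap.shape[0] - H + 1):
        for c in range(submap.shape[1] - W + 1):
            score = ncc(submap[r:r + H, c:c + W], frame)
            if score > best[2]:
                best = (r, c, score)
    return best

rng = np.random.default_rng(1)
submap = rng.random((80, 80))                                  # synthetic texture map
frame = submap[30:50, 45:65] + rng.normal(0, 0.05, (20, 20))   # noisy observation
print(match_frame(submap, frame))                              # expect ~(30, 45, score near 1)
```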

19.
This study presents a new approach using Hopfield neural networks for solving the economic dispatch (ED) problem with transmission capacity constraints. The proposed method is based on an improved Hopfield neural network presented by Gee et al. (1994), which introduced a new mapping technique for quadratic 0-1 programming problems with linear equality and inequality constraints; this methodology improved the performance of Hopfield neural networks for solving combinatorial optimization problems. The authors have now modified Gee and Prager's (GP) method in order to solve ED with transmission capacity constraints. Constraints are handled using a combination of the GP model and the model of Abe et al. (1992). The proposed method (PHN) has achieved efficient and accurate solutions for two-area power systems with 3, 4, 40, and 120 units. The PHN results are very close to those obtained using the quadratic programming method.
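For readers who want a feel for the underlying problem class, the sketch below solves a tiny unconstrained economic dispatch (quadratic costs, no transmission limits) with the textbook lambda-iteration method. This is only the classical baseline for ED; it is not the Hopfield mapping proposed in the paper, and the three-unit data are illustrative.

```python
# Classical lambda-iteration (equal incremental cost) for a tiny ED problem.
def economic_dispatch(units, demand, tol=1e-6):
    """units: list of (a, b, pmin, pmax) with cost C(P) = a*P^2 + b*P."""
    lo, hi = 0.0, 1000.0                      # bracket for the system lambda
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        # each unit runs where marginal cost 2aP + b equals lambda, clipped to its limits
        total = sum(min(max((lam - b) / (2 * a), pmin), pmax)
                    for a, b, pmin, pmax in units)
        if total < demand:
            lo = lam
        else:
            hi = lam
    lam = 0.5 * (lo + hi)
    return [min(max((lam - b) / (2 * a), pmin), pmax) for a, b, pmin, pmax in units]

units = [(0.004, 5.3, 100, 450),   # (a, b, Pmin, Pmax) for three thermal units
         (0.006, 5.5, 50, 350),
         (0.009, 5.8, 50, 225)]
print([round(p, 1) for p in economic_dispatch(units, demand=800.0)])
```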

20.
Staying in Tune     
When performing daily life activities, appropriate sensory-motor transformations are required to successfully map the changing relationships among one's self, the environment, and objects moving in the environment. Our daily actions involve varying combinations of head-eye (gaze), arm-reaching, and whole-body (stepping and walking) movements. These movements depend on the interaction and transformation of both egocentric (self to object) and allocentric (object to object) representations of the environment [1], [2]. To successfully map these representations, appropriate sensory-motor transformations are required [1], [3], [4]. For visually guided movements, the primary motor cortex and its interactions with the visual cortex, mainly through the dorsal stream [5], [6], are largely responsible for mapping the sensory-motor actions [7], [8]. Many uncontrollable factors can contribute to the degradation of our balance system; hence, it is important to maintain or retrain our sensory-motor system.
