Similar Literature
20 similar articles retrieved (search time: 15 ms)
1.
This paper is a review of CAMAC hardware for large high energy physics spectrometers and control systems, as well as the development of CAMAC modules at the High Energy Laboratory (HEL), JINR (Dubna). The principles of organization of the developed CAMAC systems are described.

2.
The field of high energy physics aims to discover the underlying structure of matter by searching for and studying exotic particles, such as the top quark and Higgs boson, produced in collisions at modern accelerators. Since such accelerators are extraordinarily expensive, extracting maximal information from the resulting data is essential. However, most accelerator events do not produce particles of interest, so making effective measurements requires event selection, in which events producing particles of interest (signal) are separated from events producing other particles (background). This article studies the use of machine learning to aid event selection. First, we apply supervised learning methods, which have succeeded previously in similar tasks. However, they are suboptimal in this case because they assume that the selector with the highest classification accuracy will yield the best final analysis; this is not true in practice, as such analyses are more sensitive to some backgrounds than others. Second, we present a new approach that uses stochastic optimization techniques to directly search for selectors that maximize either the precision of top quark mass measurements or the sensitivity to the presence of the Higgs boson. Empirical results confirm that stochastically optimized selectors result in substantially better analyses. We also describe a case study in which the best selector is applied to real data from the Fermilab Tevatron accelerator, resulting in the most precise top quark mass measurement of this type to date. Hence, this new approach to event selection has already contributed to our knowledge of the top quark's mass and our understanding of the larger questions upon which it sheds light.
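The stochastic search over selectors that the abstract describes can be illustrated with a toy sketch. Everything below is hypothetical: a one-dimensional cut on a discriminating variable, and s/√(s+b) standing in for the analysis-level figure of merit the paper actually optimizes (top-mass precision or Higgs sensitivity).

```python
import random

def significance(cut, signal, background):
    """Toy figure of merit s/sqrt(s+b) for a one-sided cut x > cut."""
    s = sum(1 for x in signal if x > cut)
    b = sum(1 for x in background if x > cut)
    return s / (s + b) ** 0.5 if s + b > 0 else 0.0

def stochastic_search(signal, background, steps=200, seed=0):
    """Random-walk search for the cut that maximizes the analysis-level
    figure of merit rather than raw classification accuracy."""
    rng = random.Random(seed)
    best_cut = 0.5
    best_val = significance(best_cut, signal, background)
    for _ in range(steps):
        cand = best_cut + rng.gauss(0, 0.1)   # perturb the current best
        val = significance(cand, signal, background)
        if val > best_val:                    # keep only improvements
            best_cut, best_val = cand, val
    return best_cut, best_val
```

The point the abstract makes is visible even in this sketch: the cut maximizing s/√(s+b) is generally not the cut maximizing classification accuracy, because the objective weighs backgrounds by their effect on the measurement.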

3.
A review is given of the applications of the present means of analytical computation by computer in high energy physics. The use of the computer and, in particular, of the programming systems SCHOONSCHIP, ASHMEDAI and REDUCE-2 at each step of quantum-field-theory calculations according to the Feynman diagram technique is discussed. The contribution of the computer in calculating the anomalous magnetic moment of the electron is considered in more detail.

4.
Research on Key Technologies of Data Management for the High Energy Physics Grid   (total citations: 1; self-citations: 0; citations by others: 1)
This paper first outlines the requirements and development of the high energy physics grid, and then analyzes its key data-management technologies in depth, including name services, data replication management, data transfer, mass storage systems, and user access interfaces. Finally, a prototype design of a file system for high energy physics grid data management is presented.

5.
Resource allocation for multi-tier web applications in virtualization environments is one of the most important problems in autonomous computing. On one hand, the more resources that are provisioned to a multi-tier web application, the easier it is to meet service level objectives (SLO). On the other hand, the virtual machine which hosts the multi-tier web application needs to be consolidated as much as possible in order to maintain high resource utilization. This paper presents an adaptive resource controller which consists of a feedback utilization controller and an autoregressive moving-average (ARMA) model-based estimator. It can meet application-level quality of service (QoS) goals while achieving high resource utilization. To evaluate the proposed controllers, simulations are performed on a testbed simulating a virtual data center using Xen virtual machines. Experimental results indicate that the controllers can improve CPU utilization and make the best tradeoff between resource utilization and performance for multi-tier web applications.
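As a rough illustration of the feedback side of such a controller, the sketch below tracks a CPU-utilization set-point with a purely proportional update. The class name, gain, and update rule are invented for illustration and are far simpler than the paper's ARMA-based adaptive controller.

```python
class UtilizationController:
    """Toy proportional feedback controller: scales a VM's CPU cap so that
    measured utilization tracks a target set-point, trading provisioned
    capacity against consolidation."""

    def __init__(self, target=0.8, gain=0.5):
        self.target = target    # desired utilization set-point
        self.gain = gain        # proportional gain

    def next_cap(self, cap, demand):
        util = min(demand / cap, 1.0)   # utilization under the current cap
        error = util - self.target      # > 0 means under-provisioned
        return max(cap * (1.0 + self.gain * error), 0.05)
```

With constant demand the multiplicative update is a linear contraction, so the cap converges geometrically to demand/target, i.e. exactly the capacity needed to hold utilization at the set-point.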

6.
Grid computing has become an effective computing technique in recent years. This paper develops a virtual workflow system for constructing distributed collaborative applications for Grid users. The virtual workflow system consists of three levels: an abstract workflow system, a translator, and a concrete workflow system. A highlight of the implementation is that the workflow system is built on CORBA and the Unicore Grid middleware. Furthermore, the implementation can support legacy applications developed with Parco and C++ code. The virtual workflow system provides an efficient GUI for users to organize distributed scientific collaborative applications and execute them on Grid resources. We present the design, implementation, and evaluation of this virtual workflow system in the paper.

7.
The high availability and commodity prices of Intel-based PCs have made them serious contenders to workstations. We address the question of whether PCs are appropriate for running real physics codes. We evaluate the performance of Intel-based systems with two popular 32-bit operating systems, Linux and Windows NT. We report on a suite of benchmark tests, both generic and HEP-specific, in which we compare the performance of these two systems to each other and to other popular workstation-class machines.

8.
9.
This paper proposes a virtual private network for exchanging bulk high energy physics data, based on a software-defined networking (SDN) architecture. An SDN-based network is built across the geographically distributed institutes collaborating on high energy physics experiments, an intelligent path-selection algorithm for data transfer is designed, and the abundant IPv6 link resources are exploited to serve high energy physics data transfers. This relieves the pressure that bulk data-transfer demands place on the limited bandwidth of the existing network and guarantees fast, stable, and secure transfer of high energy physics data between the collaborating institutes.
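One plausible reading of "intelligent path selection" for bulk transfers is a widest-path computation, which picks the route whose bottleneck bandwidth is maximal. The sketch below is an assumption of mine, not the paper's algorithm; the graph encoding is illustrative.

```python
import heapq

def widest_path(graph, src, dst):
    """Return (bottleneck_bandwidth, path) maximizing the minimum link
    bandwidth along the route. graph: {node: {neighbor: bandwidth}}."""
    best = {src: float("inf")}            # best bottleneck found per node
    heap = [(-best[src], src, [src])]     # max-heap via negated bandwidth
    while heap:
        neg_bw, node, path = heapq.heappop(heap)
        if node == dst:
            return -neg_bw, path
        for nxt, bw in graph.get(node, {}).items():
            cand = min(-neg_bw, bw)       # bottleneck through this link
            if cand > best.get(nxt, 0):
                best[nxt] = cand
                heapq.heappush(heap, (-cand, nxt, path + [nxt]))
    return 0, []
```

An SDN controller with a global view of link capacities could run such a computation per transfer request and install the resulting route as flow rules.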

10.
11.
Cloud computing provides scalable computing and storage resources over the Internet. These scalable resources can be dynamically organized as many virtual machines (VMs) to run user applications on a pay-per-use basis. The required resources of a VM are sliced from a physical machine (PM) in the cloud computing system. A PM may hold one or more VMs. When a cloud provider would like to create a number of VMs, the main issue is the VM placement problem: how to place these VMs at appropriate PMs to provision their required resources. However, if two or more VMs are placed at the same PM, there is a certain degree of interference between these VMs due to the sharing of non-sliceable resources, e.g. I/O resources. This phenomenon is called VM interference. VM interference affects the performance of applications running in VMs, especially delay-sensitive applications, which have quality of service (QoS) requirements on their data-access delays. This paper investigates how to integrate QoS awareness with virtualization in cloud computing systems through the QoS-aware VM placement (QAVMP) problem. In addition to fully exploiting the resources of PMs, the QAVMP problem considers the QoS requirements of user applications and the reduction of VM interference. Therefore, the QAVMP problem involves the following three factors: resource utilization, application QoS, and VM interference. We first formulate the QAVMP problem as an Integer Linear Programming (ILP) model by integrating the three factors into the profit of the cloud provider. Due to the computational complexity of the ILP model, we propose a polynomial-time heuristic algorithm to solve the QAVMP problem efficiently. In the heuristic algorithm, a bipartite graph is modeled to represent all possible placement relationships between VMs and PMs. Then, the VMs are gradually placed at their preferable PMs to maximize the profit of the cloud provider as much as possible. Finally, simulation experiments demonstrate the effectiveness of the proposed heuristic algorithm in comparison with other VM placement algorithms.
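The "gradually place VMs at their preferable PMs" step can be sketched as a greedy sweep over the bipartite VM-to-PM graph. This is a minimal stand-in, not the paper's heuristic: `profit` is a caller-supplied score assumed to fold together utilization, QoS, and interference, and only a single CPU dimension is modeled.

```python
def greedy_placement(vms, pms, profit):
    """Repeatedly commit the feasible (vm, pm) pair with the highest
    profit score. vms: {vm: cpu_demand}; pms: {pm: cpu_capacity};
    profit: callable (vm, pm) -> float."""
    placement = {}
    free = dict(pms)                       # remaining capacity per PM
    while len(placement) < len(vms):
        pairs = [(profit(v, p), v, p)
                 for v in vms if v not in placement
                 for p in free if vms[v] <= free[p]]
        if not pairs:
            break                          # remaining VMs cannot be placed
        _, v, p = max(pairs)               # best-scoring feasible edge
        placement[v] = p
        free[p] -= vms[v]
    return placement
```

Each iteration scans the surviving bipartite edges, so the sweep runs in polynomial time, mirroring why the paper prefers a heuristic over solving the ILP exactly.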

12.
The continuous progress of high energy physics experiments produces petabytes and even exabytes of data, whose acquisition, storage, transfer and sharing, analysis, and management pose enormous challenges. To address them, an event-oriented data management system was designed and implemented, effectively resolving the low efficiency of event data processing and the low resource utilization at member sites. An event index system based on a NoSQL database was designed: features are extracted from the event data, the attributes of greatest interest to physicists are selected as indexes and stored in the database, and inverted-index techniques are applied to improve the efficiency of event retrieval. Caching of event data is optimized to reduce data-conversion and storage overhead. A cross-domain data transfer scheme is proposed that makes full use of network bandwidth and reduces the latency of data processing at member sites. The system was validated experimentally: event-level indexing significantly improves the efficiency of event retrieval, and the transfer system can utilize more than 90% of the network bandwidth.
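The inverted-index idea in the abstract can be sketched in a few lines: for each selected (attribute, value) pair, record the IDs of matching events, so a query intersects short posting lists instead of scanning every event. The attribute names below are illustrative, not the experiment's actual schema.

```python
from collections import defaultdict

class EventIndex:
    """Toy inverted index over selected event attributes."""

    def __init__(self):
        self.postings = defaultdict(set)   # (attribute, value) -> event IDs

    def add(self, event_id, attrs):
        for key, value in attrs.items():
            self.postings[(key, value)].add(event_id)

    def query(self, **conditions):
        """Return IDs of events matching every (attribute, value) condition."""
        sets = [self.postings[item] for item in conditions.items()]
        return set.intersection(*sets) if sets else set()
```

Indexing only the attributes physicists query most keeps the index small relative to the raw event store, which is the trade-off the paper's feature-extraction step makes.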

13.
Theoretical predictions in high energy physics are routinely provided in the form of Monte Carlo generators. Comparisons of predictions from different programs and/or different initialization set-ups are often necessary. MC-TESTER can be used for such tests of decays of intermediate states (particles or resonances) in a semi-automated way. Our test consists of two steps. Different Monte Carlo programs are run; events with decays of a chosen particle are searched for, decay trees are analyzed, and the appropriate information is stored. Then, at the analysis step, a list of all found decay modes is defined and branching ratios are calculated for both runs. Histograms of all scalar Lorentz-invariant masses constructed from the decay products are plotted and compared for each decay mode found in both runs. For each plot a measure of the difference of the distributions is calculated, and its maximal value over all histograms for each decay channel is printed in a summary table. As an example of MC-TESTER application, we include a test with the τ lepton decay Monte Carlo generators, TAUOLA and PYTHIA. The HEPEVT (or LUJETS) common block is used as the exclusive source of information on the generated events.

Program summary

Title of the program: MC-TESTER, version 1.1
Catalogue identifier: ADSM
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSM
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computer: PC, two Intel Xeon 2.0 GHz processors, 512 MB RAM
Operating system: Linux Red Hat 6.1, 7.2, and also 8.0
Programming language used: C++, FORTRAN77; gcc 2.96 or 2.95.2 (also 3.2) compiler suite with g++ and g77
Size of the package: 7.3 MB directory including example programs (2 MB compressed distribution archive), without ROOT libraries (additional 43 MB)
No. of bytes in distributed program, including test data, etc.: 2 024 425
Distribution format: tar gzip file
Additional disk space required: depends on the analyzed particle; 40 MB in the case of τ lepton decays (30 decay channels, 594 histograms, 82-page booklet)
Keywords: particle physics, decay simulation, Monte Carlo methods, invariant mass distributions, program comparison
Nature of the physical problem: The decays of individual particles are well-defined modules of a typical Monte Carlo program chain in high energy physics. A fast, semi-automatic way of comparing results from different programs is often desirable for the development of new programs, to check the correctness of installations, or for the discussion of uncertainties.
Method of solution: A typical HEP Monte Carlo program stores the generated events in event records such as HEPEVT or PYJETS. MC-TESTER scans, event by event, the contents of the record and searches for decays of the particle under study. The list of found decay modes is successively incremented, and histograms of all invariant masses which can be calculated from the momenta of the particle's decay products are defined and filled. The outputs from two runs of distinct programs can later be compared. A booklet of comparisons is created: for every decay channel, all histograms present in the two outputs are plotted, and a parameter quantifying the shape difference is calculated. Its maximum over every decay channel is printed in the summary table.
Restrictions on the complexity of the problem: For a list of limitations see Section 6.
Typical running time: Varies substantially with the analyzed decay particle. On a PC/Linux with 2.0 GHz processors, MC-TESTER increases the run time of the τ-lepton Monte Carlo program TAUOLA by 4.0 seconds for every 100 000 analyzed events (generation itself takes 26 seconds). The analysis step takes 13 seconds; processing takes an additional 10 seconds. Generation-step runs may be executed simultaneously on multi-processor machines.
Accessibility: web page: http://cern.ch/Piotr.Golonka/MC/MC-TESTER; e-mails: Piotr.Golonka@CERN.CH, T.Pierzchala@friend.phys.us.edu.pl, Zbigniew.Was@CERN.CH
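The core quantity MC-TESTER histograms, the scalar Lorentz-invariant mass of a subset of decay products, is straightforward to compute from the four-momenta stored in the event record. The function below is an illustrative sketch (the real tool reads HEPEVT/PYJETS and fills ROOT histograms); units and the metric convention are the usual HEP ones.

```python
def invariant_mass(momenta):
    """Scalar Lorentz-invariant mass of a set of decay products.
    Each four-momentum is (E, px, py, pz) in GeV, metric (+, -, -, -)."""
    E  = sum(p[0] for p in momenta)
    px = sum(p[1] for p in momenta)
    py = sum(p[2] for p in momenta)
    pz = sum(p[3] for p in momenta)
    m2 = E * E - px * px - py * py - pz * pz
    return max(m2, 0.0) ** 0.5   # clamp tiny negative values from round-off
```

Filling one histogram per subset of decay products, for every decay channel found in both runs, yields exactly the per-channel comparison plots the summary table is built from.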

14.
Grids facilitate the creation of wide-area collaborative environments for sharing computing or storage resources and various applications. Inter-connecting distributed Grid sites through a peer-to-peer routing and information dissemination structure (also known as Peer-to-Peer Grids) is essential to avoid the scheduling-efficiency bottleneck and single point of failure of centralized or hierarchical scheduling approaches. On the other hand, uncertainty and unreliability are facts in distributed infrastructures such as Peer-to-Peer Grids, triggered by multiple factors including scale, dynamism, failures, and incomplete global knowledge. In this paper, a reputation-based Grid workflow scheduling technique is proposed to counter the effect of the inherent unreliability and temporal characteristics of computing resources in large-scale, decentralized Peer-to-Peer Grid environments. The proposed approach builds upon structured peer-to-peer indexing and networking techniques to create a scalable wide-area overlay of Grid sites for supporting dependable scheduling of applications. The scheduling algorithm treats the reliability of a Grid resource as a statistical property, computed globally in the decentralized Grid overlay from dynamic feedback, or reputation scores, assigned by individual service consumers and mediated via Grid resource brokers. The proposed algorithm dynamically adapts to changing resource conditions and offers significant performance gains over traditional approaches in the event of unsuccessful job execution or resource failure. The results, evaluated through an extensive trace-driven simulation, show that our scheduling technique can reduce the makespan by up to 50% and successfully isolate failure-prone resources from the system.
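The abstract treats reliability as a statistical property aggregated from consumer feedback. A minimal sketch, assuming an exponentially weighted update (the weighting scheme and function names are my assumptions, not the paper's):

```python
def update_reputation(current, feedback, alpha=0.2):
    """Fold one consumer feedback in [0, 1] into a resource's reputation.
    alpha weights recent observations over history."""
    return (1 - alpha) * current + alpha * feedback

def pick_resource(reputations):
    """Dependable-scheduling step: prefer the highest-reputation resource."""
    return max(reputations, key=reputations.get)
```

An exponential weighting captures the "temporal characteristics" the paper mentions: a resource that starts failing sees its score decay quickly, so the scheduler isolates it without needing complete global knowledge.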

15.
This work is devoted to a feasibility analysis for the development of novel fiber optic humidity sensors to be applied in high-energy physics (HEP) applications, and in particular in experiments currently running at the European Organization for Nuclear Research (CERN). In this context, and owing to the wide investigations carried out in recent years to assess the radiation hardness of fiber optic technology in high energy physics environments, our multidisciplinary research group has recently been engaged in the development of near-field fiber optic sensors based on particle layers of tin dioxide to monitor low values of relative humidity (RH), even at low temperatures. While this sensor type has been successfully employed for ppm and sub-ppm chemical detection in air and water environments, this is its first reported use for relative humidity measurements. The RH sensing performance of the fabricated probes was analyzed in an extensive experimental campaign carried out in the laboratories of CERN, in Genève. Very good agreement was observed between the humidity measurements provided by the optical fiber sensors and commercial polymer-based hygrometers at 20 °C and 0 °C, with limits of detection for low RH regimes below 0.1%.

16.
This work focuses on the use of computational Grids for processing the large set of jobs arising in parameter sweep applications. In particular, we tackle the mapping of molecular potential energy hypersurfaces. For computationally intensive parameter sweep problems, performance models are developed to compare the parallel computation in a multiprocessor system with the computation on an Internet-based Grid of computers. We find that the relative performance of the Grid approach increases with the number of processors, being independent of the number of jobs. The experimental data, obtained using electronic structure calculations, fit the proposed performance expressions accurately. To automate the mapping of potential energy hypersurfaces, an application based on GRID superscalar is developed. It is tested on the prototypical case of the internal dynamics of acetone. Copyright © 2006 John Wiley & Sons, Ltd.

17.
Objective: To improve hardware utilization at a sanatorium under a limited hardware budget, integrate its information systems, and ensure business continuity. Methods: The existing servers were consolidated using VMware virtualization technology, and storage was configured for data. Results: The system's services were integrated, hardware utilization was improved, costs were reduced, and high availability from server to storage kept services running without interruption. Conclusion: Virtualization technology can effectively integrate a sanatorium's independent systems, reduce hardware investment, and improve hardware utilization.

18.
19.
Implementing a Multi-Node Single System Image Based on Hardware Virtualization   (total citations: 1; self-citations: 0; citations by others: 1)
Implementing a single system image (SSI) across multiple nodes is an important research direction in parallel computer architecture. At present, much of the SSI research at home and abroad is carried out at the middleware level, which suffers from poor transparency and limited performance. This paper proposes a new method for implementing a multi-node SSI: using hardware virtualization technology, a distributed virtual machine monitor (DVMM) is built beneath the operating system (OS). The DVMM is composed of the VMMs (Virtual Machine Monitors) on each node, all of which are fully symmetric. Through cooperation among the VMMs of the nodes, the resources of the multi-node system are discovered, integrated, virtualized, and presented, so that the multiple nodes appear to the OS as a single system image; through cooperation between the DVMM and the OS, parallel applications run transparently on the multi-node system. Compared with existing methods, the proposed approach offers better transparency, higher performance, broader applicability, and moderate implementation difficulty.

20.
In recent years, while upgrading their software and hardware facilities, many libraries have deployed a growing number of systems and intelligent devices, greatly improving their work efficiency and service quality, but also placing enormous pressure on back-end servers and storage devices. To address these problems, adopting the VMware vSphere 5.5 virtualization system to virtualize the servers in the central machine room has become the choice of many libraries. Taking the server virtualization deployment at the Shandong Normal University Library as an example, this paper analyzes and summarizes practical experience with the virtualization platform and offers recommendations on snapshot management, security protection, and power-failure handling.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)