Similar Literature
1.
The first version of MDSplus was released in 1991 for VAX/VMS. Since that time the underlying file formats have remained constant. The software, however, has evolved: it was ported to Unix, Linux, Windows, and Macintosh. In 1997 a TCP-based protocol, mdsip, was added to provide network access to MDSplus data. In 2011 a mechanism was added to allow protocol plugins to permit the use of other transport mechanisms, such as ssh, to access data. This paper describes a similar design which permits the insertion of plugins to handle the reading and writing of MDSplus data at the data storage level. Tree paths become URIs which specify the protocol, host, and protocol-specific information. The protocol is provided by a dynamically activated shared library that can provide any consistent subset of the data store access API, treeshr. The existing low-level network protocol, mdsip, is activated by defining tree paths like “host::/directory”. Using the new plugin mechanism this is re-implemented as an instance of the general plugin that replaces the low-level treeshr input/output routines. It is specified by using a path like “mdsip://host/directory”. This architecture will make it possible to adapt the MDSplus data organization and analysis tools to other underlying data storage. The first new application of this, after the existing network protocol is implemented, will be a plugin based on a key-value store. Key-value stores can provide inexpensive, scalable, redundant data storage. An example of this might be an Amazon S3 plugin which would let you specify a tree path such as “S3://container” to access MDSplus data stored in the cloud.
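A minimal sketch of how such URI-style tree paths might look from the MDSplus Python API (the tree name, shot number, host, and node here are hypothetical; MDSplus resolves the location of a tree named mytree through the mytree_path environment variable):

```python
# Sketch: selecting the data-store plugin through the tree-path URI.
# Assumptions: a tree "mytree", shot 12345, host "dataserver", and a
# node SIGNAL all exist; only the path syntax is the point here.
import os
import MDSplus

# Classic mdsip syntax: "host::/directory"
os.environ['mytree_path'] = 'dataserver::/trees/mytree'

# Equivalent URI form handled by the mdsip plugin described above
# (overrides the line above; both forms are shown for comparison)
os.environ['mytree_path'] = 'mdsip://dataserver/trees/mytree'

tree = MDSplus.Tree('mytree', 12345)    # plugin chosen from the URI scheme
signal = tree.getNode('SIGNAL').data()  # reads go through the plugin's treeshr I/O
```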

2.
3.
The National Spherical Torus Experiment (NSTX) is undergoing a wealth of upgrades (NSTX-U). These upgrades, especially the elongated pulse length, require broad changes to the control system that has served NSTX well. A new fiber serial Front Panel Data Port input and output (I/O) stream will supersede the aging copper parallel version. Driver support for the new I/O and cyber security concerns require updating the operating system from Red Hat Enterprise Linux (RHEL) v4 to RedHawk (based on RHEL) v6. While the basic control system continues to use the General Atomics Plasma Control System (GA PCS), the effort to forward-port the entire software package from 32-bit to 64-bit Linux included PCS modifications subsequently shared with GA and other PCS users. Software updates focused on three key areas: (1) code modernization through coding standards (C99/C11), (2) code portability and maintainability through use of the GA PCS code generator, and (3) support of 64-bit platforms. Central to the control system upgrade is the use of a complete real-time (RT) Linux platform provided by Concurrent Computer Corporation, consisting of a computer (iHawk), an operating system and drivers (RedHawk), and RT tools (NightStar). Strong vendor support coupled with an extensive RT toolset influenced this decision. The new real-time Linux platform, I/O, and software engineering will foster enhanced capability and performance for NSTX-U plasma control.

4.
The first version of MDSplus was released in 1991 for VAX/VMS. Since then MDSplus has been progressively adopted in an increasing number of fusion experiments, and its original implementation has been extended over the years to cover new requirements and to provide a multi-platform implementation. Currently MDSplus is in use at more than 30 laboratories and is being used both for pulsed applications and for continuous data streaming in long-lasting experiments. Thanks to its large user base, it has been possible to collect requirements driving the evolution of the system toward improved usability and better performance. An important recent feature of MDSplus is its ability to handle a continuous stream of data, which is readily available as soon as it has been stored in the pulse files. Current development is oriented toward improved modularity of MDSplus and the integration of new functionality. Improved modularity is achieved by moving away from a monolithic implementation toward a plug-in approach. This has already been achieved recently for the management of remote data access, where the original TCP/IP implementation can now be integrated with new user-provided network protocols. Following a similar approach, work is in progress to let new back-ends be integrated in the MDSplus data access layer. By decoupling MDSplus data management from the on-disk file format, it is possible to integrate new solutions such as cloud data stores without affecting the user Application Programming Interface.

5.
A long pulse discharge requires high-throughput data acquisition. As more physics diagnostics with high sampling rates are applied and the pulse length becomes longer, the original EAST (Experimental Advanced Superconducting Tokamak) data system no longer satisfies the requirements of real-time data storage and quick data access. A new system was established to integrate various data acquisition hardware and software for easy expansion and management. The slice storage mechanism in MDSplus is now being used for continuous and quasi-real-time data storage. For every data acquisition thread and process, sufficient network bandwidth is ensured. Moreover, freshly digitized data is cached in computer memory in doubly linked circular lists to avoid possible data loss caused by occasional storage or transfer jams. These data are in turn archived in MDSplus format using the slice storage mechanism called “segments”. To give users quick access to the archived data, multiple data servers are used. These data servers are linked using LVS (Linux Virtual Server) load-balancing technology to provide a safe, highly scalable, and available data service.
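A minimal sketch of the “segments” mechanism from the MDSplus Python API, assuming a hypothetical tree east_demo with a node RAW and a 10 kHz digitizer; each buffer drained from the in-memory ring list would be appended as one segment:

```python
# Sketch: appending one digitizer buffer as an MDSplus segment.
# Tree name, shot, node, and sampling rate are assumptions for illustration.
import numpy as np
import MDSplus

tree = MDSplus.Tree('east_demo', 12345)
node = tree.getNode('RAW')

fs = 10000.0                              # assumed sampling rate [Hz]
t0 = 0.0                                  # start time of this buffer [s]
buf = np.zeros(10000, dtype=np.float32)   # one buffer drained from the ring list

t1 = t0 + (len(buf) - 1) / fs
dim = MDSplus.Range(t0, t1, 1.0 / fs)     # timebase of this segment
node.makeSegment(t0, t1, dim, MDSplus.Float32Array(buf))
# Readers can fetch this slice of the signal as soon as the call returns.
```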

6.
The Java tool jScope has been widely used for years to display acquired waveforms in MDSplus. The choice of the Java programming language for its implementation has been successful for several reasons, among them the fact that Java supports a multi-platform environment and is well suited for graphics and the management of network communication. jScope can be used both as a local and a remote application. In the latter case, data are acquired via TCP/IP communication using the mdsip protocol. Exporting data in this way, however, introduces several security problems due to the necessity of opening firewall holes for the user ports. For this reason, and also because JavaScript is becoming a widely used language for web applications, a new tool written in JavaScript, called WebScope, has been developed for the visualization of MDSplus data in web browsers. Data communication is now achieved via the HTTP protocol using Asynchronous JavaScript and XML (AJAX) technology. At the server side, data access is carried out by a Python module that interacts with the web server via the Web Server Gateway Interface (WSGI). When a data item, described by an MDSplus expression, is requested by the web browser for visualization, it is returned as a binary message and then handled by callback JavaScript functions activated by the web browser. Scalable Vector Graphics (SVG) technology is used to handle graphics within the web browser and to carry out the same interactive data visualization provided by jScope. In addition to mouse events, touch events are supported to provide interactivity on touch screens as well. In this way, waveforms can be displayed and manipulated on tablet and smartphone devices.
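A minimal sketch of such a server-side WSGI module (not the actual WebScope code; the query-parameter name and the raw octet-stream framing are assumptions): it evaluates an MDSplus expression and returns the result as a binary message for the browser-side callbacks:

```python
# Sketch: WSGI endpoint that evaluates an MDSplus expression and
# returns the raw binary data. Demo only: a real server must
# validate/authorize "expr" before evaluating it.
from urllib.parse import parse_qs
import MDSplus

def application(environ, start_response):
    qs = parse_qs(environ.get('QUERY_STRING', ''))
    expr = qs.get('expr', ['0'])[0]            # e.g. expr=\TOP:SIGNAL
    data = MDSplus.Data.execute(expr).data()   # evaluate on the server
    payload = data.tobytes()                   # numpy array -> raw bytes
    start_response('200 OK', [
        ('Content-Type', 'application/octet-stream'),
        ('Content-Length', str(len(payload))),
    ])
    return [payload]
```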

7.
The experimental data of the J-TEXT tokamak are stored in the MDSplus database. The old J-TEXT data access system was based on the tools provided by MDSplus. Since the number of signals is huge, data retrieval for an experiment was difficult. To solve this problem, the J-TEXT experimental data access and management system (DAMS) based on MDSplus has been developed. The DAMS leaves the old MDSplus system unchanged while providing new tools that help users manage all signals and retrieve the signals they need according to their requirements. The DAMS also offers users a way to create their own jScope configuration files, which can be downloaded to the local computer. In addition, the DAMS provides a JWeb-Scope tool to visualize signals in a browser. JWeb-Scope adopts a segment strategy to read massive data efficiently. Users can plot one or more signals of their own choice and zoom in and out smoothly. The whole system is based on the browser/server (B/S) model, so users need only a browser to access the DAMS. The DAMS has been tested and provides a better user experience. It will be integrated into the J-TEXT remote participation system later.
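A minimal sketch of a segment-reading strategy of this kind (tree, shot, node, and window are hypothetical): instead of loading a whole long-pulse signal, only the segments overlapping the requested zoom window are fetched:

```python
# Sketch: fetch only the MDSplus segments that overlap a zoom window.
# Tree name, shot number, node, and window are assumptions for illustration.
import MDSplus

tree = MDSplus.Tree('jtext_demo', 12345)
node = tree.getNode('SIGNAL')

t_min, t_max = 0.10, 0.20          # requested zoom window [s]
chunks = []
for i in range(int(node.getNumSegments())):
    s0 = node.getSegmentStart(i).data()
    s1 = node.getSegmentEnd(i).data()
    if s1 >= t_min and s0 <= t_max:               # segment overlaps the window
        chunks.append(node.getSegment(i).data())
# "chunks" now holds just the data needed to draw the zoomed view.
```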

8.
Programming a remote data acquisition system based on the ROOT package
Using the ROOT data package on the Linux operating system, a stable and reliable remote data acquisition system can be developed. Large volumes of front-end experimental data from a CAMAC crate controller can be acquired remotely over intranet network protocols, and a graphical user interface for offline data analysis can be developed with the ROOT package. The system uses the GTBC, a CAMAC crate controller with three flexible data interfaces (Ethernet, PCI, and VSB), whose embedded system-on-chip is programmed as a network server. Experimental data can thus be stored remotely and analyzed as spectra in a stable and reliable way.

9.
A data service system plays an indispensable role in the HT-7 tokamak experiment. Since the former system did not provide timely data processing and analysis, and all client software was based on Windows, it could not support a virtual fusion laboratory for remote researchers. Therefore, a new system has been developed, consisting of three kinds of data servers and one data analysis and visualization software tool. The data servers include a data acquisition server based on a file system, an MDSplus server used as the central repository for analysis data, and a web server. Users who prefer the convenience of an application that runs in a web browser can easily access the experiment data without knowing X-Windows. In order to adjust instruments and control the experiment, the operators need to plot data promptly as soon as they are gathered. To satisfy this requirement, an upgraded data analysis and visualization tool, GT-7, was developed. It not only makes 2D data visualization more efficient, but is also capable of processing, analyzing, and displaying interactive 2D and 3D graphs of raw and analyzed data in ASCII, LZO, and MDSplus formats.

10.
A new data access system, H1DS, has been developed and deployed for the H-1 Heliac at the Australian Plasma Fusion Research Facility. The data system provides access to fusion data via a RESTful web service. With the URL acting as the API to the data system, H1DS provides a scalable and extensible framework which is intuitive to new users and allows access from any internet-connected device. The H1DS framework, originally designed to work with MDSplus, has a modular design which can be extended to provide access to alternative data storage systems.
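A minimal sketch of URL-as-API access in Python (the host name, URL layout, and JSON response shape here are assumptions for illustration, not the actual H1DS schema):

```python
# Sketch: fetching a signal from a RESTful fusion-data service, where the
# URL itself identifies facility, shot, and signal. All names hypothetical.
import requests

url = 'http://h1ds.example.org/data/h1data/58123/mirnov/signal'
resp = requests.get(url, params={'format': 'json'})
resp.raise_for_status()
payload = resp.json()      # e.g. {'data': [...], 'dim': [...]} (assumed shape)
```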

11.
Development of graphical interface software for nuclear safeguards non-destructive analysis equipment
General techniques for graphical user interfaces are first introduced, and design methods are proposed for the basic human–machine interface modules of a practical application system, such as text construction, multi-level pull-down menus, and multi-window pop-up interactive graphical information display. An application example in nuclear safeguards non-destructive analysis equipment is then given, showing that the software system outperforms current general-purpose international software, such as the MCA products of EG&G ORTEC (USA), in interface friendliness, dynamic spectrum display, and printed spectrum plotting. The programming methods of this paper are applicable to nuclear spectrum acquisition and processing systems, and the related algorithms are also applicable to the design of general microcomputer graphical interfaces.

12.
Implementation of an embedded Linux system on PowerPC
An application example of building an embedded Linux operating system on PowerPC is described. The main purpose of the system is the testing of VME bus devices, but it can also be used in small data acquisition and control systems. The build process takes full account of the characteristics of software development and of the PowerPC, providing developers with two alternative ways to compile and test programs. Several typical embedded operating systems are introduced, and commercial embedded systems are compared with mainstream embedded Linux systems in terms of their similarities, differences, advantages, and disadvantages. How to build a tailorable embedded Linux operating system for a specific target board is described in detail, and a successfully running test example is given.

13.
The Nuclear Decommissioning Authority (NDA) is developing a family of Standard Waste Transport Containers (SWTCs) for the transport of unshielded intermediate-level radioactive waste packages. The SWTCs are shielded transport containers designed to carry different types of waste packages. The combination of the SWTC and the waste package is required to meet the regulatory requirements for Type B packages. One such requirement relates to the containment of the radioactive contents, with the IAEA Transport Regulations specifying release limits for normal and accident conditions of transport. In the impact tests representing accident conditions of transport, the waste package will experience significant damage and radioactive material will be released into the SWTC cavity. It is therefore necessary to determine how much of this material will be released from the cavity to the external environment past the SWTC seals. Typical assessments assume that the material is evenly distributed within the cavity volume and then determine the rate at which gas is released from the cavity, with the volume of radioactive material released with the gas based on the concentration of the material within the cavity gas. This is a pessimistic approach, as various deposition processes would reduce the concentration of gas-borne particulate material and hence reduce its release rate from the SWTC. This paper assesses the physical processes that control the release rate and develops a conservative methodology for calculating the particulate releases from the SWTC lid and valve seals under normal and accident conditions of transport, in particular:

a) the flows within the SWTC cavity, especially those near the cavity walls;

b) the aerodynamic forces necessary to detach small particles from the cavity surface and suspend them into the cavity volume;

c) the adhesive forces holding contaminant particles on the surface of a waste package;

d) the breakup of waste material upon impact, which determines the volume fraction and size distribution of fine particulate released into the cavity.

Three mechanisms are specifically modelled, namely Brownian agglomeration, Brownian diffusion, and gravitational settling, since they are the dominant processes leading to deposition within the cavity and the easiest to calculate, with much less uncertainty than the other deposition processes. Calculations of releases under normal conditions of transport concentrate on estimating the detachment of any waste package surface contamination by inertial and aerodynamic forces, and show that very little of any contamination removed from the waste package surface would be released from the SWTC. Under accident conditions of transport, results are presented for the fraction released from the SWTC to the environment as a function of the volume fraction of the waste package contents released as fine particulate matter into the SWTC cavity. These show that for typical release fractions of 10⁻⁶ to 10⁻⁸ for the release of radioactive material from waste packages into the SWTC cavity, the release fraction of the waste package inventory from the SWTC is typically 10⁻⁹ to 10⁻¹⁰. Hence, the effective decontamination factor provided by the SWTC is 10² to 10³. Whilst this analysis has been carried out specifically for the SWTC carrying waste packages, it is applicable to other arrangements, and its use would reduce the high degree of pessimism in typical containment assessments whilst still giving conservative results.
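A short worked example of how the quoted figures combine (the numbers are taken from the ranges above; this is arithmetic illustration, not an additional result):

```latex
% Effective decontamination factor (DF) of the SWTC: the ratio of the
% fraction released into the cavity to the fraction released to the
% environment, using the bounding values quoted above.
\[
  \mathrm{DF}
    = \frac{f_{\text{package}\rightarrow\text{cavity}}}
           {f_{\text{SWTC}\rightarrow\text{environment}}}
    \approx \frac{10^{-6}}{10^{-9}} = 10^{3}
    \quad\text{or}\quad
    \frac{10^{-8}}{10^{-10}} = 10^{2}.
\]
```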

14.
In large fusion experiments, such as tokamak devices, there is a common trend for slow control systems. Because of the complexity of the plants, the so-called 'Standard Model' (SM) of slow control has been adopted on several tokamak machines. This model is based on a three-level hierarchical control: (1) High-Level Control (HLC) with a supervisory function; (2) Medium-Level Control (MLC) to interface and concentrate I/O field equipment; (3) Low-Level Control (LLC) with hard real-time I/O functions, often managed by PLCs. The FTU control system, designed with SM concepts, has undergone several stages of development during its fifteen years of operation. The latest evolution was inevitable, due to the obsolescence of the MLC CPUs, based on VME Motorola 68030 boards with the OS9 operating system. A large amount of C code was developed for that platform to route the data flow from the LLC, which consists of 24 Westinghouse Numalogic PC-700 PLCs with about 8000 field points, to the HLC, based on a commercial object-oriented real-time database on an Alpha/Compaq Tru64 platform. Therefore, we had to look for cost-effective solutions, and finally a CompactPCI Intel x86 platform with the Linux operating system was chosen. A software port has been done, taking into account the differences between the OS9 and Linux operating systems in terms of inter-process and network communications and the multi-port serial I/O driver. This paper describes the hardware/software architecture of the new MLC system, emphasizing the reliability and low cost of the open-source solutions. Moreover, the huge number of software packages available in the open-source environment will assure less painful maintenance and will open the way to further improvements of the system itself.

15.
16.
A revised engineered barrier system model has been developed by the Electric Power Research Institute to predict the time dependence of the failure of the drip shields and waste packages in the proposed Yucca Mountain repository. The revised model is based on new information on various corrosion processes developed by the US Department of Energy and others, and on a 20-mm-thick waste package design with a double closure-lid system. As with earlier versions of the corrosion model, the new EBSCOM code produces a best estimate of the failure times of the various barriers. The model predicts that only 15% of waste packages will fail within a period of 1 million years. The times for the first corrosion failures are 40,000 years, 336,000 years, and 375,000 years for the drip shield, the waste package, and the combination of drip shield and associated waste package, respectively.

17.
18.
With the continuous renewal and increasing number of diagnostics, the EAST tokamak routinely generates ∼3 GB of raw data per pulse of the experiment, which is transferred to a centralized data management system. In order to strengthen international cooperation, all the acquired data has been converted and stored in MDSplus servers. During operation of the data system, problems arise when many client machines connect to a single MDSplus data server. Because each server process keeps its connection open until the client closes it, many server processes occupy many network ports and consume a large amount of memory, so data access becomes very slow while the CPU is not fully utilized. To improve data management system performance, many MDSplus servers will be installed on blade servers and formed into a cluster to realize load balancing and high availability using LVS and Heartbeat technology. This paper describes the details of the design and the test results.

19.
20.
The Controls and Information Systems (CIS) organization for the National Ignition Facility (NIF) has developed controls, configuration, and analysis software applications that together comprise several million lines of code. The team delivers updates throughout the year, from major releases containing hundreds of changes to patch releases containing a small number of focused updates. To ensure the quality of each delivery, manual and automated tests are performed using the NIF TestController test infrastructure. The TestController system provides test inventory management, test planning, automated and manual test execution, release testing summaries, and results search, all through a web browser interface. As part of the three-stage software testing strategy, the NIF TestController system helps plan, evaluate, and track the readiness of each release for the NIF production environment. After several years of use in testing NIF software applications, the TestController's manual testing features have been leveraged to verify the installation and operation of NIF Target Diagnostic hardware. The TestController recorded its first test results in 2004. Today, the system has recorded the execution of more than 160,000 tests and continues to play a central role in ensuring that NIF hardware and software meet the requirements of a high-reliability facility. This paper describes the TestController system and discusses its use in assuring the quality of software delivered to the NIF.
