Similar Documents
1.
Support of the MDSplus data handling system has been enhanced by the addition of an automated build system which performs nightly builds of MDSplus for many computer platforms, producing software packages which can now be downloaded using a web browser or via package repositories suitable for automatic updating. The build system was implemented using an extensible continuous integration server product called Hudson, which schedules software builds on a collection of VMware based virtual machines. New releases are created based on updates to the MDSplus CVS code repository, and versioning is managed using CVS tags and branches. Currently stable, beta and alpha releases of MDSplus are maintained for eleven different platforms including Windows, MacOSX, RedHat Enterprise Linux, Fedora, Ubuntu and Solaris. For some of these platforms, MDSplus packaging has been broken into functional modules so users can pick and choose which MDSplus features they want to install. An added feature on the latest Linux based platforms is the use of package dependencies: when installing MDSplus from the package repositories, any additional packages required by MDSplus are installed automatically, greatly simplifying the installation. This paper describes the MDSplus automated package build and distribution system.

2.
The first version of MDSplus was released in 1991 for VAX/VMS. Since then MDSplus has been progressively adopted in an increasing number of fusion experiments, and its original implementation has been extended over the years to cover new requirements and to move toward a multi-platform implementation. Currently MDSplus is in use at more than 30 laboratories and is being used both for pulsed applications and for continuous data streaming in long lasting experiments. Thanks to its large user base, it has been possible to collect requirements driving the evolution of the system toward improved usability and better performance. An important recent feature of MDSplus is its ability to handle a continuous stream of data, which is readily available as soon as it has been stored in the pulse files. Current development is oriented toward improved modularity of MDSplus and the integration of new functionality.

Improved modularity is achieved by moving away from a monolithic implementation toward a plug-in approach. This has already been achieved for the management of remote data access, where the original TCP/IP implementation can now be integrated with new user-provided network protocols. Following a similar approach, work is in progress to let new back-ends be integrated in the MDSplus data access layer. By decoupling the MDSplus data management from the on-disk file format it is possible to integrate new solutions such as cloud data storage without affecting the user Application Programming Interface.
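To give a concrete feel for the remote data access layer mentioned above, the following is a minimal sketch using the thin-client Connection class of the standard MDSplus Python bindings. The host name, tree name, shot number and node path are placeholders, not values from the paper.

```python
# Minimal sketch of remote data access through the MDSplus thin-client layer.
# Host, tree name, shot number and node path below are hypothetical.
from MDSplus import Connection

conn = Connection('mdsplus-server.example.org')   # mdsip connection to a (hypothetical) data server
conn.openTree('experiment', 12345)                 # hypothetical tree and shot number

# Evaluate an expression on the server and pull the result and its time base back.
data = conn.get('\\TOP.DIAGNOSTICS:DENSITY').data()
time = conn.get('dim_of(\\TOP.DIAGNOSTICS:DENSITY)').data()

conn.closeTree('experiment', 12345)
print(len(data), len(time))
```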

3.
The Java tool jScope has been widely used for years to display acquired waveforms in MDSplus. The choice of the Java programming language for its implementation has been successful for several reasons, among them the fact that Java supports a multiplatform environment and is well suited for graphics and the management of network communication. jScope can be used both as a local and as a remote application. In the latter case, data are acquired via TCP/IP communication using the mdsip protocol. Exporting data in this way, however, introduces several security problems due to the necessity of opening firewall holes for the user ports. For this reason, and also because JavaScript is becoming a widely used language for web applications, a new tool written in JavaScript, called WebScope, has been developed for the visualization of MDSplus data in web browsers.

Data communication is now achieved via the HTTP protocol using Asynchronous JavaScript and XML (AJAX) technology. On the server side, data access is carried out by a Python module that interacts with the web server via the Web Server Gateway Interface (WSGI). When a data item, described by an MDSplus expression, is requested by the web browser for visualization, it is returned as a binary message and then handled by callback JavaScript functions activated by the web browser.

Scalable Vector Graphics (SVG) technology is used to handle graphics within the web browser and to carry out the same interactive data visualization provided by jScope. In addition to mouse events, touch events are supported to provide interactivity also on touch screens. In this way, waveforms can be displayed and manipulated on tablet and smartphone devices.
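As an illustration of the server-side role described above, the sketch below shows what a WSGI entry point that evaluates a requested MDSplus expression and returns it as a binary message might look like. It is not the actual WebScope module; the query parameters, tree name and packing format are assumptions.

```python
# Illustrative WSGI handler (not the actual WebScope code): evaluate an MDSplus
# expression requested by the browser and return the result as packed binary data.
from urllib.parse import parse_qs
import numpy as np
from MDSplus import Connection

def application(environ, start_response):
    # Expression and shot arrive as query-string parameters, e.g. ?expr=...&shot=...
    params = parse_qs(environ.get('QUERY_STRING', ''))
    expr = params.get('expr', ['0'])[0]
    shot = int(params.get('shot', ['-1'])[0])

    conn = Connection('localhost')          # hypothetical local mdsip server
    conn.openTree('experiment', shot)       # hypothetical tree name
    y = np.asarray(conn.get(expr).data(), dtype=np.float32)
    x = np.asarray(conn.get('dim_of(' + expr + ')').data(), dtype=np.float32)

    # Pack the sample count, time base and values as raw little-endian floats;
    # the JavaScript callback on the browser side would decode this message.
    body = np.concatenate((np.float32([len(y)]), x, y)).tobytes()
    start_response('200 OK', [('Content-Type', 'application/octet-stream'),
                              ('Content-Length', str(len(body)))])
    return [body]
```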

4.
Each plasma physics laboratory has its own proprietary control and data acquisition system, usually different from one laboratory to another, which means that each laboratory has its own way of controlling the experiment and retrieving data from the database. Fusion research relies to a great extent on international collaboration, and such site-specific systems make it difficult to follow the work remotely. The TCABR data analysis and acquisition system has been upgraded to support a joint research programme using remote participation technologies. The choice of MDSplus (Model Driven System plus) is justified by the fact that it is widely used, so scientists from different institutions may use the same system in different experiments on different tokamaks without needing to know how each machine handles its acquisition and data analysis. Another important point is that MDSplus has a library system that allows communication between different programming languages (Java, Fortran, C, C++, Python) and programs such as MATLAB, IDL and OCTAVE. In the case of the TCABR tokamak, interfaces (the subject of this paper) were developed between the system already in use and MDSplus, instead of using MDSplus at all stages from control and data acquisition to data analysis. This was done to preserve a complex system already in operation, which would otherwise take a long time to migrate. This implementation also allows new components to be added that use MDSplus fully at all stages.
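A minimal sketch of the kind of interface layer described above: data already acquired by the legacy system is written into an MDSplus pulse tree through the standard Python binding. The tree name, shot number, node path and waveform here are placeholders, not details of the actual TCABR implementation.

```python
# Hedged sketch: store a waveform handed over by a legacy acquisition system
# into an MDSplus node. Tree name, shot number and node path are hypothetical.
import numpy as np
from MDSplus import Tree, Signal, Float32Array, Float64Array

shot = 30000                                     # hypothetical shot number
tree = Tree('tcabr', shot)                       # hypothetical tree name

# Waveform produced by the legacy acquisition code (stand-in data).
t = np.linspace(0.0, 0.1, 10000)                 # time base [s]
v = np.sin(2 * np.pi * 50 * t)                   # example signal

node = tree.getNode('\\TOP.DIAGNOSTICS:MIRNOV')  # hypothetical node path
node.putData(Signal(Float32Array(v), None, Float64Array(t)))
```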

5.
A data service system plays an indispensable role in the HT-7 Tokamak experiment. Since the former system did not provide timely data processing and analysis, and all client software was based on Windows, it could not support a virtual fusion laboratory for remote researchers. Therefore, a new system has been developed, consisting of three kinds of data servers and one data analysis and visualization software tool. The data servers include a data acquisition server based on a file system, an MDSplus server used as the central repository for analysis data, and a web server. Users who prefer the convenience of an application that runs in a web browser can easily access the experiment data without knowing X-Windows. In order to adjust instruments and control the experiment, the operators need to plot data as soon as they are gathered. To satisfy this requirement, an upgraded data analysis and visualization software tool, GT-7, has been developed. It not only makes 2D data visualization more efficient, but is also capable of processing, analyzing and displaying interactive 2D and 3D graphs of raw and analyzed data in ASCII, LZO and MDSplus formats.

6.
7.
The experimental data of the J-TEXT tokamak are stored in an MDSplus database. The old J-TEXT data access system was based on the tools provided by MDSplus, and since the number of signals is huge, data retrieval for an experiment was difficult. To solve this problem, the J-TEXT experimental data access and management system (DAMS) based on MDSplus has been developed. The DAMS leaves the old MDSplus system unchanged while providing new tools which help users manage all signals and retrieve the signals they need according to their requirements. The DAMS also offers users a way to create their own jScope configuration files, which can be downloaded to the local computer. In addition, the DAMS provides a JWeb-Scope tool to visualize signals in a browser. JWeb-Scope adopts a segment strategy to read massive data efficiently. Users can plot one or more signals of their choice and zoom in and out smoothly. The whole system is based on the browser/server (B/S) model, so users only need a browser to access the DAMS. The DAMS has been tested and provides a better user experience. It will be integrated into the J-TEXT remote participation system later.
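To illustrate the idea behind a segment-based read strategy, the sketch below reads a long record segment by segment and decimates each chunk before plotting, instead of pulling the whole record at once. It uses the segment API of the standard MDSplus Python bindings; the tree, shot, node name and decimation step are placeholders, not the JWeb-Scope implementation.

```python
# Sketch of segment-wise reading with decimation for display purposes.
# Tree, shot and node names are hypothetical.
import numpy as np
from MDSplus import Tree

tree = Tree('jtext', 1050000)                    # hypothetical tree and shot
node = tree.getNode('\\TOP.DIAGNOSTICS:IP')      # hypothetical segmented node

step = 100                                       # keep 1 sample out of 100 for display
xs, ys = [], []
for i in range(int(node.getNumSegments())):
    seg = node.getSegment(i)                     # segment i returned as a Signal
    ys.append(np.asarray(seg.data())[::step])
    xs.append(np.asarray(seg.dim_of().data())[::step])

x_plot = np.concatenate(xs)
y_plot = np.concatenate(ys)
print(len(y_plot), 'points to plot')
```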

8.
The MAST (Mega-Amp Spherical Tokamak) data acquisition system is being radically upgraded. New hardware with a completely different control interface and logic has been installed at all system levels, from front-end devices to plant control. MAST plant control has been moved from VMS to a Windows-based OPC system. Old CAMAC and VME units are being replaced by cPCI and PXI units. A number of CAMAC crates have been upgraded with new Ethernet controllers supporting useful front-end devices.

The upgrade is being performed without disturbing operations; the data acquisition units are being replaced gradually. Such an upgrade is possible due to the structure of the MAST data acquisition system, which is built as a set of autonomous units, each one controlled by a computer. Modern computers are capable of controlling several units, and this has been the major opportunity and challenge because it radically changes the unit control logic. As a result practically all system components had to be redesigned.

The new unit software is a step in the system's evolution towards greater flexibility and universality. Each unit can now manage multiple data files, possibly with different formats, and many units can be hosted on the same computer. This feature is provided by a message proxy server. Each unit is controlled independently and transparently, exactly like a stand-alone unit. The message interface has been modified for consistent handling of the new functions. The unit software supports event-triggered and real-time data acquisition at the system level. New software has been developed for a number of new hardware devices, and the device modules for all usable old devices have been rewritten to operate with the new control interface.

The new software allows units to be upgraded even during operations. The system structure and logic provide easy extension. The system as a whole, or individual design elements, could also be used at other fusion facilities.

9.
A long pulse discharge requires high throughput data acquisition. As more physics diagnostics with high sampling rates are applied and the pulse length becomes longer, the original EAST (Experimental Advanced Superconducting Tokamak) data system no longer satisfies the requirements of real-time data storage and quick data access. A new system was established to integrate the various data acquisition hardware and software and to allow easy expansion and management. The slice storage mechanism in MDSplus is now used for continuous, quasi real-time data storage. For every data acquisition thread and process, sufficient network bandwidth is ensured. Moreover, newly digitized data are temporarily cached in computer memory in doubly linked circular lists to avoid possible data loss caused by occasional storage or transfer jams. These data are in turn archived in MDSplus format using the slice storage mechanism called “segments”. To give users quick access to the archived data, multiple data servers are used. These data servers are linked using LVS (Linux Virtual Server) load balancing technology to provide a safe, highly scalable and available data service.
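The following is a hedged sketch of the “segments” (slice storage) mechanism: each block of data taken from the acquisition buffer is appended to a segmented MDSplus node as soon as it is available, so readers can access it while the pulse is still running. It uses the segment API of the standard Python bindings; the tree, shot, node path, sampling rate and block size are assumed values, not the EAST configuration.

```python
# Hedged sketch of continuous archiving with MDSplus segments.
# Tree, shot, node path, sampling rate and block size are placeholders.
import numpy as np
from MDSplus import Tree, Float32Array, Float64Array

tree = Tree('east', 100000)                       # hypothetical tree and shot
node = tree.getNode('\\TOP.DIAGNOSTICS:DENSITY')  # hypothetical segmented node

fs = 10000.0                                      # samples per second (assumed)
block = 10000                                     # samples per segment (assumed)

for k in range(5):                                # stand-in for the acquisition loop
    data = np.random.rand(block).astype(np.float32)     # stand-in for digitizer output
    t0 = k * block / fs
    t1 = (k + 1) * block / fs - 1.0 / fs
    dim = Float64Array(np.linspace(t0, t1, block))
    node.makeSegment(t0, t1, dim, Float32Array(data))   # append one slice of the record
```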

10.
The first version of MDSplus was released in 1991 for VAX/VMS. Since that time the underlying file formats have remained constant. The software, however, has evolved; it was ported to Unix, Linux, Windows, and Macintosh. In 1997 a TCP based protocol, mdsip, was added to provide network access to MDSplus data. In 2011 a mechanism was added to allow protocol plugins, permitting the use of other transport mechanisms, such as ssh, to access data. This paper describes a similar design which permits the insertion of plugins to handle the reading and writing of MDSplus data at the data storage level. Tree paths become URIs which specify the protocol, host, and protocol-specific information. The protocol is provided by a dynamically activated shared library that can provide any consistent subset of the data store access API, treeshr. The existing low-level network protocol, mdsip, is activated by defining tree paths like “host::/directory”. Using the new plugin mechanism this is re-implemented as an instance of the general plugin that replaces the low-level treeshr input/output routines, and it is specified by using a path like “mdsip://host/directory”.

This architecture will make it possible to adapt the MDSplus data organization and analysis tools to other underlying data storage. The first new application, after the existing network protocol has been re-implemented, will be a plugin based on a key-value store. Key-value stores can provide inexpensive, scalable, redundant data storage. An example might be an Amazon G3 plugin which would let you specify a tree path such as “AG3://container” to access MDSplus data stored in the cloud.
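As a small sketch of how the tree path selects the underlying data store, the example below sets the per-tree path variable used by MDSplus before opening a tree. The “host::/directory” form is the traditional mdsip syntax; the commented “mdsip://host/directory” form is the URI style described above. The host, directory, tree name and node are placeholders.

```python
# Sketch: the <treename>_path setting decides how the pulse files are reached
# when the tree is opened. Host, directory and tree name are hypothetical.
import os
from MDSplus import Tree

# Traditional remote path: thick-client access through the mdsip protocol.
os.environ['mytree_path'] = 'data-server.example.org::/trees/mytree'

# URI-style path handled by the plugin layer described in the paper:
# os.environ['mytree_path'] = 'mdsip://data-server.example.org/trees/mytree'

tree = Tree('mytree', 123)                    # opened through whatever store the path names
value = tree.getNode('\\TOP:SIGNAL').data()   # hypothetical node, read as usual
```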

11.
With the development of the J-TEXT device, the original data acquisition system could no longer meet the operational needs of the device in terms of stability, modularity and sampling rate, so a new data acquisition system had to be built to satisfy the experimental requirements. This paper presents the design and implementation of a distributed, high-speed, synchronous data acquisition system for the tokamak based on PXI Express. The acquisition unit consists of an NI PXIe-1062Q chassis, an NI PXIe-8133 controller and NI PXIe-6368 high-speed synchronous data acquisition cards; it is compatible with the latest ITER CODAC standard and offers good mechanical packaging, a high degree of modularity and a high sampling rate. The system acquires experimental data using synchronous differential acquisition and stores the data in the MDSplus database widely used in the fusion community. Test and operational results show that the system works continuously and stably at a 2 MSps sampling rate and satisfies the operational needs of the device.

12.
The magnet system of the Steady-State Superconducting Tokamak-1 (SST-1) at the Institute for Plasma Research, Gandhinagar, India, consists of sixteen toroidal field and nine poloidal field superconducting coils, together with a pair of resistive PF coils, an air core ohmic transformer and a pair of vertical field coils. These magnets are instrumented with various cryogenic compatible sensors and voltage taps for their monitoring, operation, protection and control during different machine operational scenarios such as cryogenic cool down, current charging cycles including ramp up, flat top, plasma breakdown, dumping/ramp down, and warm up. The data acquisition system for this magnet instrumentation has stringent requirements regarding operational flexibility, reliability for continuous long-term operation, and data visualization during operations. A VME hardware based data acquisition system with an Ethernet based remote system architecture is implemented for data acquisition and control of the complete magnet operation. The software application is developed in three parts, namely an embedded VME target application, a network server, and a remote client application. The target board application, implemented on a real-time operating system, takes care of hardware configuration and raw data transmission to the server application. A Java server application manages several activities, mainly multiple-client communication over Ethernet, database interfacing, and data storage. A Java based, platform independent desktop client application has been developed for online and offline data visualization, remote hardware configuration and many other user interface tasks. The application has two modes of operation to cater to the different needs of cool-down and charging operations. This paper describes the application architecture, installation and commissioning, and operational experience from the recent SST-1 campaigns.

13.
The Tandem Van de Graaff at BNL has been using a computerized data acquisition system for more than eight years. A brief history of its philosophy and performance is discussed. Also presented is the new data collection system, which is designed around a high-speed bus with multiple processors. This new system is capable of handling event rates of over 200 kHz for add-one-to-memory (PHA) type spectra totalling 512k channels. List mode, special sorting involving gating, and arithmetic operations on the data are supported in addition to one- or two-parameter PHA types. The system allows eight configurations of eight devices to be defined by the user. The devices are typically analog-to-digital converters (ADCs), multi-channel scalers, or routing devices; however, any device providing digital information could be interfaced.

14.
A new data access system, H1DS, has been developed and deployed for the H-1 Heliac at the Australian Plasma Fusion Research Facility. The data system provides access to fusion data via a RESTful web service. With the URL acting as the API to the data system, H1DS provides a scalable and extensible framework which is intuitive to new users and allows access from any internet-connected device. The H1DS framework, originally designed to work with MDSplus, has a modular design which can be extended to provide access to alternative data storage systems.
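To show what "the URL as the API" means in practice, here is a hedged sketch of a RESTful request for a data node. The URL layout, query parameters and response fields are hypothetical illustrations of the style, not the actual H1DS interface.

```python
# Illustrative RESTful data request; endpoint layout and fields are assumptions.
import requests

url = 'https://h1ds.example.org/data/h1/73000/mirnov/ch01/'   # hypothetical: device/shot/diagnostic/channel
resp = requests.get(url, params={'format': 'json'})
resp.raise_for_status()

payload = resp.json()
print(sorted(payload.keys()))   # e.g. signal values and their time base (assumed field names)
```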

15.
This paper describes a data acquisition system for radiation monitoring which significantly improves performance over conventional systems by providing higher throughput, elimination of data skew, easier and less expensive isolation, improved system accuracy, and a compact implementation. The novel systolic data acquisition system, including a systolic converter, processor and networking, was developed to alleviate the drawbacks of various conventional data acquisition systems used in radiation monitoring. The system is based on a systolic conversion, processing and networking method amenable to a highly integrated vector architecture. The method employs systolic rules which can be developed for a selected problem; the rules for the radiation monitoring problem have been developed so as to apply not only locally but also globally to the systolic network. One form of the network has been implemented and is operational at a nuclear reactor site. Other forms are being implemented and tested for other data-skew-sensitive problems.

16.
Real-time signal processing in plasma fusion experiments is required for control and for data reduction as plasma pulse times grow longer. The development time and cost for these high-rate, multichannel signal processing systems can be significant. This paper proposes a new digital signal processing (DSP) platform for the data acquisition system that will allow users to easily customize real-time signal processing systems to meet their individual requirements.

The D-TACQ reconfigurable user in-line DSP (DRUID) system carries out the signal processing tasks in hardware co-processors (CPs) implemented in an FPGA, with an embedded microprocessor (μP) for control. In the fully developed platform, users will be able to choose co-processors from a library and configure programmable parameters through the μP to meet their requirements.

The DRUID system is implemented on a Spartan 6 FPGA, on the new rear transition module (RTM-T), a field upgrade to existing D-TACQ digitizers.

As proof of concept, a multiply-accumulate (MAC) co-processor has been developed, which can be configured as a digital chopper-integrator for long-pulse magnetic fusion devices. The DRUID platform allows users to set options for the integrator, such as the number of masking samples. Results from the digital integrator are presented for a data acquisition system with 96 channels simultaneously acquiring data at 500 kSamples/s per channel.
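To make the chopper-integrator idea concrete, the sketch below implements the same concept in software (it is not the DRUID FPGA firmware): a running multiply-accumulate sum integrates the samples, while a programmable number of masking samples after each chopper transition is excluded so switching transients do not corrupt the integral. The sampling rate, chopper period, mask length and synthetic input are assumed values.

```python
# Conceptual software model of a digital chopper-integrator with masking samples.
import numpy as np

fs = 500_000                       # samples per second (matching the 500 kSamples/s figure)
dt = 1.0 / fs
mask_len = 8                       # masking samples after each chopper transition (assumed)

def chopper_integrate(samples, transitions, mask_len, dt):
    """Running-sum integration of 'samples', skipping mask_len samples after each transition."""
    keep = np.ones(len(samples), dtype=bool)
    for idx in transitions:
        keep[idx:idx + mask_len] = False
    contrib = np.where(keep, samples, 0.0)       # masked samples contribute nothing
    return np.cumsum(contrib) * dt               # multiply-accumulate as a running sum

# Synthetic input: a constant 1 V signal with large spikes at the chopper transitions.
n = 5000
signal = np.ones(n)
transitions = np.arange(0, n, 500)               # one transition per millisecond (assumed)
signal[transitions] += 50.0                      # switching transients
integral = chopper_integrate(signal, transitions, mask_len, dt)
print(integral[-1])                              # close to n * dt = 10 ms of a 1 V signal
```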

17.
With the continuous renewal and increasing number of diagnostics, the EAST tokamak routinely generates ∼3 GB of raw data per experimental pulse, which is transferred to a centralized data management system. In order to strengthen international cooperation, all the acquired data are converted and stored on MDSplus servers. During operation of the data system, problems arise when many client machines connect to a single MDSplus data server: because each server process keeps its connection open until the client closes it, many server processes occupy many network ports and consume a large amount of memory, so data access becomes very slow even though the CPU is not fully utilized. To improve data management system performance, multiple MDSplus servers will be installed on blade servers to form a server cluster that realizes load balancing and high availability using LVS and Heartbeat technology. This paper describes the details of the design and the test results.

18.
The real-time control system of RFX-mod, in operation since 2005, has been successful and has allowed several important achievements in the RFX physics research program. As a consequence, new control algorithms are under investigation which demand both more computing power and lower system latency, currently around 1.5 ms. For this reason, a major upgrade of the system is being considered, and a new architecture has been proposed, taking advantage of the rapid evolution of computer technology in recent years. The central component of the new architecture is a Linux-based multicore server, where individual cores replace the VME computers. The server is connected to the I/O via PCI-e based bus extenders, and every PCI-e connection is managed by a separate core. The system is supervised by MARTe, a software framework for real-time applications written in C++, developed at JET and currently used for the JET vertical stabilization and in other fusion devices.

19.
To acquire multi-channel signals at a 10 kHz sampling rate from various front-end sensors, a Data Acquisition Management System (DAMS) based on MDSplus was designed for the International Thermonuclear Experimental Reactor (ITER) Direct Current (DC) testing platform. Because of the large amount of experimental data generated by long-pulse operation, it is very important to view and analyze experimental data online during operation. To meet the requirement of online data processing, slice storage and thumbnail techniques were applied in the DAMS, and the long-pulse data are written incrementally into the MDSplus database. The DAMS has been verified on the ITER DC power supply testing platform.

20.
A new radiation imaging image acquisition, processing and analysis system based on MS-Windows NT is presented. Its distinctive features include real-time hardware drivers for MS-Windows NT, LAN-based database management, and an excellent human-computer interface. The system also lays a good foundation for the future development of a management information system (MIS) and a picture archiving and communication system (PACS).
