Similar Articles
1.
The Java tool jScope has been widely used for years to display acquired waveforms in MDSplus. The choice of the Java programming language for its implementation has been successful for several reasons, among them the fact that Java supports a multiplatform environment and is well suited for graphics and the management of network communication. jScope can be used both as a local and as a remote application. In the latter case, data are acquired via TCP/IP communication using the mdsip protocol. Exporting data in this way, however, introduces several security problems due to the necessity of opening firewall holes for the user ports. For this reason, and also because JavaScript is becoming a widely used language for web applications, a new tool written in JavaScript, called WebScope, has been developed for the visualization of MDSplus data in web browsers. Data communication is now achieved via the HTTP protocol using Asynchronous JavaScript and XML (AJAX) technology. On the server side, data access is carried out by a Python module that interacts with the web server via the Web Server Gateway Interface (WSGI). When a data item, described by an MDSplus expression, is requested by the web browser for visualization, it is returned as a binary message and then handled by callback JavaScript functions activated by the browser. Scalable Vector Graphics (SVG) technology is used to handle graphics within the web browser and to provide the same interactive data visualization offered by jScope. In addition to mouse events, touch events are supported, providing interactivity on touch screens as well. In this way, waveforms can be displayed and manipulated on tablet and smartphone devices.
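A minimal sketch of what such a server-side WSGI module could look like, assuming the MDSplus Python package is installed; the query parameter name `expr` and the raw-bytes response format are assumptions for illustration, not the actual WebScope wire format:

```python
# Hypothetical WSGI entry point: evaluate an MDSplus expression passed as a
# query parameter and return the result as a binary message.
from urllib.parse import parse_qs
import MDSplus

def application(environ, start_response):
    qs = parse_qs(environ.get("QUERY_STRING", ""))
    expr = qs.get("expr", ["0"])[0]            # e.g. expr=\TOP.DIAG:SIGNAL
    data = MDSplus.Data.execute(expr).data()   # evaluate the TDI expression
    payload = data.tobytes()                   # raw binary, decoded by the
                                               # JavaScript callback client-side
    start_response("200 OK", [
        ("Content-Type", "application/octet-stream"),
        ("Content-Length", str(len(payload))),
    ])
    return [payload]
```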

2.
A long pulse discharge requires high-throughput data acquisition. As more physics diagnostics with high sampling rates are applied and the pulse length grows, the original EAST (Experimental Advanced Superconducting Tokamak) data system no longer satisfies the requirements of real-time data storage and quick data access. A new system was established to integrate various data acquisition hardware and software for easy expansion and management. The slice storage mechanism in MDSplus is now used for continuous, quasi-real-time data storage. Sufficient network bandwidth is ensured for every data acquisition thread and process. Moreover, digitized data is temporarily cached in computer memory in doubly linked circular lists to avoid possible data loss during occasional storage or transfer jams. These data are in turn archived in MDSplus format using the slice storage mechanism called "segments". To give users quick access to the archived data, multiple data servers are used. These data servers are linked using LVS (Linux Virtual Server) load-balancing technology to provide a safe, highly scalable and available data service.
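A minimal sketch of the segment ("slice") storage step using the MDSplus Python API, whose documented `makeSegment` call appends one block at a time; tree, node and timing values below are placeholders:

```python
# Append each cached memory block as a new MDSplus segment, so the signal is
# readable while the pulse is still running.
import numpy as np
import MDSplus

def store_block(node, t0, dt, block):
    """Append one buffered block (numpy array) as a segment of `node`."""
    n = len(block)
    t_end = t0 + (n - 1) * dt
    dim = MDSplus.Range(t0, t_end, dt)   # time base of this slice
    node.makeSegment(t0, t_end, dim, MDSplus.Float32Array(block))

tree = MDSplus.Tree("east", 12345)       # tree name and shot are placeholders
node = tree.getNode("\\TOP.DIAG:SIGNAL")
store_block(node, 0.0, 1e-4, np.zeros(10000, dtype=np.float32))
```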

3.
The first version of MDSplus was released in 1991 for VAX/VMS. Since then MDSplus has been progressively adopted in an increasing number of fusion experiments, and its original implementation has been extended over the years to cover new requirements and to reach a multi-platform implementation. Currently MDSplus is in use at more than 30 laboratories and is used both for pulsed applications and for continuous data streaming in long-lasting experiments. Thanks to its large user base, it has been possible to collect requirements driving the evolution of the system toward improved usability and better performance. An important recent feature of MDSplus is its ability to handle a continuous stream of data, which is readily available as soon as it has been stored in the pulse files. Current development is oriented toward improved modularity of MDSplus and the integration of new functionality. Improved modularity is achieved by moving away from a monolithic implementation toward a plug-in approach. This has already been achieved for the management of remote data access, where the original TCP/IP implementation can now be complemented by new user-provided network protocols. Following a similar approach, work is in progress to let new back-ends be integrated into the MDSplus data access layer. By decoupling MDSplus data management from the on-disk data file format, it becomes possible to integrate new solutions such as cloud storage without affecting the user Application Programming Interface.
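Purely as an illustration of the plug-in idea (not the actual MDSplus plug-in API), a registry of user-provided network transports decoupled from the core could look like this:

```python
# Hypothetical transport registry: the core calls only the abstract interface,
# so a new protocol is added by registering a class, not by patching the core.
from abc import ABC, abstractmethod

class Transport(ABC):
    @abstractmethod
    def connect(self, host: str) -> "Transport": ...
    @abstractmethod
    def send(self, message: bytes) -> None: ...
    @abstractmethod
    def recv(self) -> bytes: ...

_REGISTRY: dict[str, type[Transport]] = {}

def register_transport(scheme: str, cls: type[Transport]) -> None:
    _REGISTRY[scheme] = cls          # e.g. "tcp", "ssh", "https"

def open_connection(url: str) -> Transport:
    scheme, _, host = url.partition("://")
    return _REGISTRY[scheme]().connect(host)
```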

4.
A new data access system, H1DS, has been developed and deployed for the H-1 Heliac at the Australian Plasma Fusion Research Facility. The data system provides access to fusion data via a RESTful web service. With the URL acting as the API to the data system, H1DS provides a scalable and extensible framework which is intuitive to new users and allows access from any internet-connected device. The H1DS framework, originally designed to work with MDSplus, has a modular design which can be extended to provide access to alternative data storage systems.
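A hedged sketch of URL-as-API access in the H1DS style; the exact URL layout, host name and JSON schema below are assumptions, not the published H1DS interface:

```python
# Fetch a signal by addressing it directly in the URL path.
import requests

shot, tree, node = 58123, "h1data", "mirnov.input_1"   # placeholders
url = f"https://h1svr.example.org/data/{tree}/{shot}/{node}"
resp = requests.get(url, params={"format": "json"})
resp.raise_for_status()
signal = resp.json()   # e.g. {"data": [...], "dim": [...]} (assumed schema)
```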

5.
Support for the MDSplus data handling system has been enhanced by the addition of an automated build system which performs nightly builds of MDSplus for many computer platforms, producing software packages which can now be downloaded using a web browser or via package repositories suitable for automatic updating. The build system was implemented using an extensible continuous integration server product called Hudson, which schedules software builds on a collection of VMware-based virtual machines. New releases are created based on updates to the MDSplus cvs code repository, and versioning is managed using cvs tags and branches. Currently stable, beta and alpha releases of MDSplus are maintained for eleven different platforms including Windows, MacOSX, RedHat Enterprise Linux, Fedora, Ubuntu and Solaris. For some of these platforms, MDSplus packaging has been broken into functional modules so users can pick and choose which MDSplus features they want to install. An added feature on the latest Linux-based platforms is the use of package dependencies: when installing MDSplus from the package repositories, any additional packages required by MDSplus are installed automatically, greatly simplifying installation. This paper describes the MDSplus automated package build and distribution system.

6.
The experimental data of the J-TEXT tokamak are stored in an MDSplus database. The old J-TEXT data access system was based on the tools provided by MDSplus. Since the number of signals is huge, retrieving the data of an experiment was difficult. To solve this problem, the J-TEXT experimental data access and management system (DAMS), based on MDSplus, has been developed. DAMS leaves the old MDSplus system unchanged while providing new tools which help users handle all signals as well as retrieve the signals they need according to their requirements. DAMS also offers users a way to create their own jScope configuration files, which can be downloaded to the local computer. In addition, DAMS provides a JWeb-Scope tool to visualize signals in a browser. JWeb-Scope adopts a segment strategy to read massive data efficiently. Users can plot one or more signals of their choice and zoom in and out smoothly. The whole system is based on the B/S (browser/server) model, so users need only a browser to access DAMS. DAMS has been tested and provides a better user experience. It will later be integrated into the J-TEXT remote participation system.
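A minimal sketch of such a segment-based read strategy with the MDSplus Python API, fetching only the segments that overlap a requested time window (tree and node names are placeholders):

```python
# Read a time window from a segmented MDSplus node without loading the
# whole signal into memory.
import MDSplus

def read_window(node, t_start, t_end):
    chunks = []
    for i in range(node.getNumSegments()):
        lims = node.getSegmentLimits(i)
        seg_start, seg_end = lims[0].data(), lims[1].data()
        if seg_end < t_start or seg_start > t_end:
            continue                           # segment outside the window
        chunks.append(node.getSegment(i).data())
    return chunks

tree = MDSplus.Tree("jtext", 1012345)          # placeholders
window = read_window(tree.getNode("\\TOP.DIAG:SIG1"), 0.1, 0.2)
```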

7.
The LHD data archiving system has newly selected the GlusterFS distributed filesystem as the replacement for the present cloud storage software, named "IznaStor/dSS". Even though the prior software provided many favorable functionalities, such as hot plug-and-play node insertion, internal auto-replication of data files, and symmetric load balancing between all member nodes, it proved poor at recovering from an accidental malfunction of a storage node. Once a failure happened, the recovery process usually took at least several days, sometimes more than a week, with a heavy CPU load. In some cases the nodes fell into the so-called "split-brain" or "amnesia" condition, from which they could not recover. Since the recovery time depends strongly on the capacity of the faulty node, individual HDD management is more desirable than large HDD arrays. In addition, dynamic mutual awareness of data location information can be removed if some other static data distribution method is applied. In this study, the candidate middleware packages "OpenStack/Swift" and "GlusterFS" were tested using the real mass of LHD data for more than half a year, and GlusterFS was finally selected to replace the present IznaStor. It implements only a very limited set of cloud storage functionalities but offers a simplified RAID10-like structure, which may consequently provide lighter-weight read/write capability. Since the LABCOM data system is implemented to be independent of the storage structure, it is easy to unplug IznaStor and plug in the new GlusterFS. The effective I/O speed was also confirmed to be on the same level as that estimated from the raw performance of the disk hardware. This experience may be informative for implementing ITER CODAC and the remote archiving system.

8.
The MDSplus data system has been in operation on several fusion machines since 1991 and is currently in use at over 30 sites spread over 5 continents. A consequence is the extensive feedback provided by the MDSplus user community for bug fixes and improvements; the evolution of MDSplus therefore keeps pace with the evolution of data acquisition and management techniques. In particular, the recent evolution of MDSplus has been driven by the change in the paradigm for data acquisition in long-lasting plasma discharges, where a sustained data stream is transferred from the acquisition devices into the database. Several new features are currently available or being implemented. The features already implemented include a comprehensive object-oriented interface to the system, Python support for data acquisition devices and full integration in EPICS. Work is in progress on the integration of multiple protocols and security systems in remote data access, a new high-level data view layer and a new version of the jScope tool for online visualization and the optimized visualization of very large signals.
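For illustration, the object-oriented Python interface lets a model tree be built and populated in a few lines (tree and node names below are placeholders):

```python
# Create a model tree, add a signal node, and write the structure to disk
# using the MDSplus object-oriented Python API.
import MDSplus

tree = MDSplus.Tree("demo", -1, "NEW")            # -1 = model shot
node = tree.addNode(".DIAGNOSTICS:SIG1", "SIGNAL")
tree.write()
```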

9.
With the continuous renewal and increasing number of diagnostics, the EAST tokamak routinely generates ∼3 GB of raw data per pulse, which is transferred to a centralized data management system. In order to strengthen international cooperation, all the acquired data has been converted and stored on MDSplus servers. During operation of the data system, problems arise when many client machines connect to a single MDSplus data server. Because each server process keeps its connection open until the client closes it, the many server processes occupy many network ports and consume a large amount of memory, so data access becomes very slow even though the CPU is not fully utilized. To improve the performance of the data management system, many MDSplus servers will be installed on blade servers, forming a server cluster that realizes load balancing and high availability using LVS and heartbeat technology. This paper describes the details of the design and the test results.

10.
The "Fusion Virtual Laboratory (FVL)" is the experiment collaboration platform covering multiple fusion projects in Japan. Major Japanese fusion laboratories and universities are mutually connected through a dedicated virtual private network, named SNET, on SINET4. It has three categories: (i) LHD remote participation, (ii) bilateral experiment collaboration, and (iii) remote use of supercomputers. By extending the LABCOM data system developed at LHD, FVL supports (i) and (ii), so that it can deal not only with LHD data but also with the data of two remote experiments: QUEST at Kyushu University and GAMMA10 at the University of Tsukuba. FVL has applied the latest "cloud" technology to both the data acquisition and storage architectures, providing high availability and performance scalability for the whole system. With a well-optimized TCP data transfer method, a unified data access platform for both experimental data and numerical computation results could become realistic on FVL. The FVL project will continue demonstrating ITER-era international collaboration schemes and the necessary technology.

11.
As the discharge time becomes longer, the old EAST data system gradually fails to meet the requirements of the EAST discharge experiments. The problems mainly concern continuous data acquisition and real-time data access. To meet the requirements of EAST 1000 s discharges, the data acquisition mode, data storage structure and data access interface had to be improved or reconstructed. A time slice mechanism is one of the popular solutions for continuous data acquisition and real-time data transfer. Data access to large signal files has been successfully solved via a hierarchical storage management system. Moreover, conventional "remote indexing" has been replaced with "local indexing" in data access, which greatly enhances data access speed. These improvements have been implemented and applied to EAST long-pulse plasma experiments and are described in this paper in detail.

12.
A data service system plays an indispensable role in the HT-7 tokamak experiment. Since the former system provided no timely data processing and analysis, and all client software was Windows-based, it could not serve as a virtual fusion laboratory for remote researchers. Therefore, a new system has been developed, consisting of three kinds of data servers and one data analysis and visualization software tool. The data servers include a data acquisition server based on a file system, an MDSplus server used as the central repository for analysis data, and a web server. Users who prefer the convenience of an application that runs in a web browser can easily access the experiment data without knowing X-Windows. In order to adjust instruments and control the experiment, the operators need to plot data promptly as soon as they are gathered. To satisfy this requirement, an upgraded data analysis and visualization tool, GT-7, was developed. It not only makes 2D data visualization more efficient, but is also capable of processing, analyzing and displaying interactive 2D and 3D graphs of raw and analyzed data in the ASCII, LZO and MDSplus formats.

13.
To acquire multi-channel signals at a 10 kHz sampling rate from various front-end sensors, a Data Acquisition Management System (DAMS) based on MDSplus was designed for the International Thermonuclear Experimental Reactor (ITER) Direct Current (DC) testing platform. Because long-pulse operation generates a large amount of experimental data, it is very important to view and analyze the data online during operation. To meet the requirement of online data processing, slice storage and thumbnail techniques were applied in the DAMS, and the long-pulse data is written incrementally into the MDSplus database. The DAMS has been verified on the ITER DC power supply testing platform.
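A hedged sketch of the thumbnail idea: decimating a long-pulse signal with per-bin min/max so an online overview plot keeps transients visible. The function is illustrative, not the DAMS implementation:

```python
# Reduce a long signal to ~n_bins (min, max) pairs for fast online plotting.
import numpy as np

def thumbnail(y, n_bins=1000):
    y = np.asarray(y)
    bins = np.array_split(y, n_bins)
    # keep both extremes of each bin so spikes survive the decimation
    return np.array([(b.min(), b.max()) for b in bins if b.size])
```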

14.
We present a new TCP/IP-based file transfer protocol which enables high-speed daisy-chain transfer. Using this protocol, we can send a file to a series of destination hosts simultaneously, because intermediate hosts relay received file fragments to the next host. With a prototype, we achieved daisy-chain file transfer from Japan to Europe via the USA at about 800 Mbps; the experimental results are also reported. Daisy chaining reduces the total link length of a data delivery network, enabling cost-effective international data sharing.
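A minimal sketch of the relay step, assuming plain TCP sockets; the framing, chunk size and port layout below are assumptions, not the paper's protocol specification:

```python
# Each intermediate hop writes every received fragment locally and forwards
# it downstream immediately, so all hosts in the chain receive the file
# concurrently.
import socket

CHUNK = 1 << 20  # 1 MiB fragments (assumed)

def relay_file(listen_port, next_host=None, out_path="received.dat"):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("", listen_port))
    srv.listen(1)
    conn, _ = srv.accept()

    fwd = None
    if next_host is not None:                      # last hop has no successor
        fwd = socket.create_connection((next_host, listen_port))

    with open(out_path, "wb") as f:
        while True:
            fragment = conn.recv(CHUNK)
            if not fragment:
                break
            f.write(fragment)                      # store locally
            if fwd:
                fwd.sendall(fragment)              # relay downstream
    if fwd:
        fwd.close()
    conn.close()
    srv.close()
```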

15.
The plan for the ITER Remote Experimentation Centre (REC), based on the Broader Approach (BA) activity of the joint programme of Japan and Europe (EU), is described. The objectives of the REC activity are (1) to identify the functions and solve the technical issues for the construction of the REC for ITER at Rokkasho; (2) to develop the remote experiment system and verify the functions required for remote experiments using the Satellite Tokamak (JT-60SA) facilities, so that future ITER and JT-60SA experiments can be implemented effectively and efficiently; and (3) to test the functions of the REC and demonstrate the total system using JT-60SA and other existing facilities in the EU. The items preliminarily identified for development are (1) functions of the remote experiment system, such as setting of experiment parameters, shot scheduling, real-time data streaming, and video-conference communication between the remote site and the on-site team; (2) an effective data transfer system capable of fast transfer of huge amounts of data between on-site and off-site, together with the network connecting the REC system; (3) a storage system that can store and give access to huge amounts of data, including database management; (4) data analysis software for viewing the diagnostic data on the storage system; and (5) numerical simulation for the preparation and estimation of shot performance and the analysis of plasma shots. Detailed specifications of the above items will be discussed, and the system will be built over these four years in collaboration with the tokamak facilities of JT-60SA and EU tokamaks, experts in informatics, and the plasma simulation and ITER activities. Finally, the functions of the REC will be tested and the total system demonstrated by the middle of 2017.

16.
Each plasma physics laboratory has a proprietary control and data acquisition scheme, usually different from one laboratory to another. This means that each laboratory has its own way of controlling the experiment and retrieving data from the database. Fusion research relies to a great extent on international collaboration, and such private systems make it difficult to follow the work remotely. The TCABR data analysis and acquisition system has been upgraded to support a joint research programme using remote participation technologies. The choice of MDSplus (Model Driven System plus) is justified by the fact that it is widely used: scientists from different institutions may use the same system in different experiments on different tokamaks without needing to know how each machine handles its acquisition and data analysis. Another important point is that MDSplus has a library system that allows communication between different languages (Java, Fortran, C, C++, Python) and programs such as MATLAB, IDL and Octave. In the case of the TCABR tokamak, interfaces (the subject of this paper) were developed between the system already in use and MDSplus, instead of using MDSplus at all stages from control and data acquisition to data analysis. This was done to preserve a complex system already in operation, which would otherwise have taken a long time to migrate. This implementation also allows new components to be added that use MDSplus fully at all stages.
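For example, the cross-language remote access that such interfaces build on looks like this through the MDSplus connection library in Python (host, tree, shot and node names are placeholders):

```python
# Thin-client access to a remote MDSplus server: open a tree and evaluate
# a TDI expression remotely.
import MDSplus

conn = MDSplus.Connection("tcabr-server.example.org:8000")
conn.openTree("tcabr", 31450)
ip = conn.get("\\IP").data()    # plasma current signal (placeholder node)
conn.closeAllTrees()
```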

17.
Reusing code is a well-known software engineering practice that substantially increases the efficiency of code production and reduces errors and debugging time. A variety of "Web Tools" for the analysis and display of raw and analyzed physics data are in use on NSTX [1], and new ones can be produced quickly from existing IDL [2] code. A Web Tool with only a few inputs, which calls an IDL routine written in the proper style, can be created in less than an hour; a more typical Web Tool with dozens of inputs, requiring some adaptation of existing IDL code, can be working in a day or so. Efficiency is also increased for users of Web Tools because of the familiar web browser interface, and because neither X-Windows nor accounts and passwords are needed when the tools are used within our firewall. Web Tools were adapted for use by PPPL physicists accessing EAST data stored in MDSplus with only a few man-weeks of effort; adapting to additional sites should now be even easier. An overview of the Web Tools in use on NSTX, and a list of the most useful features, is also presented.

18.
Tape File Cache Management in a Mass Storage System
Targeting the data access characteristics of tape libraries and the practical requirements of the high-energy physics field, a grouped cooperative caching strategy is proposed. Based on a cache model, the strategy combines the advantages of local caching and cooperative caching by dividing the buffers into groups: buffers in different groups are independent of each other, while the buffers within a group consist of disks on different network nodes that cooperate with one another. Directory management, the update algorithm and consistency issues of this caching strategy are also analyzed in detail. Experiments show that the strategy satisfies the data processing and mass storage requirements of the high-energy physics field.
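An illustrative sketch of the grouped placement: a file hashes to a group, groups are mutually independent, and member nodes within a group cooperate to serve it. All names and the placement policy itself are hypothetical:

```python
# Map a cached tape file to a group, then to a cooperating node in that group.
import hashlib

GROUPS = [["nodeA1", "nodeA2"], ["nodeB1", "nodeB2"], ["nodeC1", "nodeC2"]]

def place(filename: str) -> str:
    h = int(hashlib.md5(filename.encode()).hexdigest(), 16)
    group = GROUPS[h % len(GROUPS)]                 # groups are independent
    return group[(h // len(GROUPS)) % len(group)]   # cooperation is in-group
```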

19.
The rapid progress in fast imaging gives new opportunities for fusion research. The data obtained by fast cameras play an important and ever-increasing role in the analysis and understanding of plasma phenomena. Fast cameras produce a huge amount of data, which creates considerable problems for acquisition, analysis, and storage.

We use a number of fast cameras on the Mega-Amp Spherical Tokamak (MAST). They cover several spectral ranges: broadband visible, infra-red, and narrow-band filtered for spectroscopic studies. These cameras are controlled by programs developed in-house, which provide full camera configuration and image acquisition in the MAST shot cycle.

Despite the great variety of image sources, all images should be stored in a single format; this simplifies the development of data handling tools and hence the data analysis. A universal file format has been developed for MAST images which supports storage in both raw and compressed forms, using either lossless or lossy compression. A number of access and conversion routines have been developed for all languages used on MAST. Two movie-style display tools have been developed: a Windows-native one and a Qt-based one for Linux.
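Purely as a hypothetical illustration (the actual MAST format is not described in this abstract), a per-frame record carrying geometry and a compression flag might look like this:

```python
# Write one frame with a small binary header: magic, width, height,
# lossless flag, payload length. zlib stands in for the lossless codec.
import struct
import zlib

MAGIC = b"IMG1"

def write_frame(f, frame_bytes, width, height, lossless=True):
    payload = zlib.compress(frame_bytes) if lossless else frame_bytes
    f.write(MAGIC)
    f.write(struct.pack("<IIBI", width, height, int(lossless), len(payload)))
    f.write(payload)
```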

The camera control programs run as autonomous data acquisition units, with the full camera configuration set and stored locally. This allows easy porting of the code to other data acquisition systems. The software developed for the MAST fast cameras has been adapted for several other tokamaks, where it is in regular use.

