Found 20 similar documents; search took 125 ms
1.
2.
3.
Networked middleware management is described as a data exchange format that allows data to be exchanged between different systems and applications: data traverses a chain of middleware processing nodes, each of which stores or processes the data and passes its result on to the neighboring node.
4.
Based on an analysis of feature technology and on feature-based modeling ideas, a complete product data information model is designed, and a data information model for an integrated part-oriented CAD/CAPP system is proposed; information exchange and integration between CAD and CAPP are realized over a network.
5.
"In industrial design, more than 30 years ago, design work was done mainly on drawing boards and paper, and data was recorded page by page on drawings; more than 20 years ago, design software appeared, and with it some digitized data; today, in industrial design, we must not only produce data but also make data create value." This was the opening statement of Mr. Andreas Geiss, Vice President of the Automation Systems unit in Siemens' Industry Automation Division and head of the Siemens COMOS industrial software unit, in his talk at the Siemens Industry Forum held on July 10. In the era of big data, data is changing how people live and produce. How does Siemens use industrial software to make engineering data create value for its users? Managers from Siemens COMOS industrial software, at global and China level, give their answers.
6.
Using VBA for secondary development of AutoCAD, with Excel as the customization tool, a fully customizable title-block filler is developed; it can also automatically generate a drawing list from existing CAD drawings and export the drawing index in dwg or xls format.
7.
8.
Research and Application of an ebXML-Based E-Business Standard Architecture    Total citations: 1 (self-citations: 0, cited by others: 1)
In recent years, the Extensible Markup Language (XML) has developed rapidly and has become the preferred way to define data exchange formats between new e-business applications on the Internet. ebXML provides an open, XML-based technical framework so that e-business data can be exchanged in a consistent and uniform manner using XML technology across application-to-application, application-to-human, and human-to-application environments.
9.
From an operational point of view, drafting involves two kinds of elements. The first are non-scale elements such as title frames, text, symbols, and dimension annotations; these are not affected by the scale of the drawing itself and follow established standards. The second is the drawing content itself, which is usually drawn on the sheet at some scale. In AutoCAD these two kinds of elements are handled very differently: non-scale elements drawn in AutoCAD model space must be enlarged according to the "Unified Rules for Building CAD Drawings" and the drawing scale, while the lengths of the drawing content can be entered directly in model space.
10.
Engineering Information Computer Management System (MISED): a mixed multi-media information management system. Beijing Film Machinery Research Institute, Zhang Yiming, Dong Jianwei. 1. Introduction. The scientific and technical archives of enterprises and institutions consist of drawings together with the textual material generated around, or related to, those drawings. Drawings are an important kind of technical document that express the design intent of engineers and describe a product's struct...
11.
Ali Arshad, Muhammad Nadeem, Saman Riaz, Syeda Wajiha Zahra, Ashit Kumar Dutta, Zaid Alzaid, Rana Alabdan, Badr Almutairi, Sultan Almotairi. Computers, Materials & Continua, 2023, 75(2): 3065-3089
Many cloud data security techniques and algorithms are available that can detect attacks on cloud data, but they cannot protect the data from an attacker. Cloud cryptography is the best way to transmit data in a secure and reliable form. Researchers have developed various mechanisms for secure data transfer that convert data from readable to unreadable form, but these algorithms are not sufficient for complete data security; each has its own weaknesses. If effective protection techniques are used, an attacker will be unable to decipher the encrypted data, and even an attacker who tampers with the data will gain no access to the original. In this paper, several data security techniques are developed that together protect data from attackers. First, a customized American Standard Code for Information Interchange (ASCII) table is built, with the value of each index defined in the customized table; an attacker trying to decrypt the data typically applies the standard, predefined ASCII table to the ciphertext, which could otherwise help the attacker recover the data, and the customized table defeats this. Next, a radix-64 encryption step is applied, which doubles the amount of cipher data relative to the original; when the attacker tries to decrypt each value, the result bears no relation to the original data. Finally, a Hill matrix algorithm generates a key that is valid only for the exact plaintext for which it was created and cannot be reused for any other plaintext; the boundaries of each Hill key extend only to that text.

The techniques used in this paper are compared with those of various other papers, and it is discussed how far the current algorithm improves on the alternatives. The Kasiski test is then used to verify the proposed algorithm, and it is found that if the proposed algorithm is used for data encryption, an attacker cannot break its security using any existing technique or algorithm.
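The substitute-then-expand idea described above can be sketched in a few lines. This is an illustrative toy, not the paper's actual tables or its Hill matrix step: the byte-shift "customized table", the OFFSET value, and the function names are all invented, and standard base64 stands in for the radix-64 stage.

```python
import base64

# Hypothetical "customized ASCII table": shift every byte by a fixed
# offset, so applying the standard ASCII table to the ciphertext no
# longer recovers the plaintext. The offset value is illustrative.
OFFSET = 37

def substitute(data: bytes) -> bytes:
    return bytes((b + OFFSET) % 256 for b in data)

def unsubstitute(data: bytes) -> bytes:
    return bytes((b - OFFSET) % 256 for b in data)

def encrypt(plaintext: str) -> str:
    # Step 1: customized-table substitution.
    # Step 2: radix-64 encoding, which inflates 3 bytes to 4 characters.
    return base64.b64encode(substitute(plaintext.encode())).decode()

def decrypt(ciphertext: str) -> str:
    return unsubstitute(base64.b64decode(ciphertext)).decode()

message = "attack at dawn"
ciphertext = encrypt(message)
assert decrypt(ciphertext) == message
assert len(ciphertext) > len(message)  # ciphertext is longer than plaintext
```

The length inflation is the point of the radix-64 stage in the abstract: an attacker working symbol-by-symbol on the expanded ciphertext does not recover values related to the original data.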
12.
In the big data era, data unavailability, whether temporary or permanent, has become a normal daily occurrence. Unlike a permanent data failure, which is repaired by a background job, temporarily unavailable data is recovered on the fly to serve the ongoing read request. However, the newly revived data is discarded after serving that request, on the assumption that data experiencing a temporary failure may come back alive later. Discarding this data prevents failure information from being shared among clients and leads to many unnecessary recovery processes (e.g., caused by recurring unavailability of the same data or by multiple data failures in one stripe), thereby straining system performance.
To this end, this paper proposes GFCache, which caches corrupted data for the dual purposes of sharing failure information and eliminating unnecessary data recovery. GFCache employs a greedy, opportunistic caching approach that promotes not only the failed data but also failure-likely sequential data in the same stripe. Additionally, GFCache includes FARC (Failure ARC), a cache replacement algorithm that balances failure recency and frequency to accommodate data corruption with a good hit ratio. Data stored in GFCache supports fast reads on the normal data access path. Furthermore, since GFCache is a generic failure cache, it can be used wherever erasure coding is deployed, with any specific coding scheme and parameters. Evaluations show that GFCache achieves a good hit ratio with its caching algorithm and significantly boosts system performance by reducing unnecessary recoveries of vulnerable data held in the cache.
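The core mechanism, serving repeated reads of a failed block from cache instead of re-running recovery, can be sketched as below. This is a minimal stand-in, not GFCache itself: it uses plain LRU eviction where FARC also weighs failure frequency, and the class and method names are invented.

```python
from collections import OrderedDict

class FailureCache:
    """Minimal failure-cache sketch: recovered blocks are kept so that
    repeated unavailability is served from cache instead of triggering
    another recovery. Plain recency-based (LRU) eviction stands in for
    the paper's FARC policy, which also considers failure frequency."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = OrderedDict()
        self.recoveries = 0  # number of full recovery operations performed

    def _recover(self, block_id):
        # Placeholder for erasure-coded reconstruction of the block.
        self.recoveries += 1
        return f"data-{block_id}"

    def read_failed_block(self, block_id):
        if block_id in self.store:
            self.store.move_to_end(block_id)   # cache hit: refresh recency
            return self.store[block_id]
        data = self._recover(block_id)         # cache miss: recover once
        self.store[block_id] = data
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)     # evict least recently used
        return data

cache = FailureCache(capacity=2)
cache.read_failed_block("b1")
cache.read_failed_block("b1")  # second read is served from the cache
assert cache.recoveries == 1
```

A recurring unavailability of "b1" thus costs one recovery rather than one per read, which is the performance effect the evaluation measures.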
13.
Ismet Celebi Robert A. Dragoset Karen J. Olsen Reinhold Schaefer Gary W. Kramer 《Journal of research of the National Institute of Standards and Technology》2010,115(1):15-22
Maintaining the integrity of analytical data over time is a challenge. Years ago, data were recorded on paper that was pasted directly into a laboratory notebook. The digital age has made maintaining the integrity of data harder. Nowadays, digitized analytical data are often separated from information about how the sample was collected and prepared for analysis and how the data were acquired. The data are stored on digital media, while the related information about the data may be written in a paper notebook or stored separately in other digital files. Sometimes the connection between this "scientific metadata" and the analytical data is lost, rendering the spectrum or chromatogram useless. We have been working with ASTM Subcommittee E13.15 on Analytical Data to create the Analytical Information Markup Language, or AnIML, a new way to interchange and store spectroscopy and chromatography data based on XML (Extensible Markup Language). XML is a language for describing what data are by enclosing them in computer-usable tags. Recording the units associated with the analytical data and metadata is an essential issue for any data representation scheme and must be addressed by all domain-specific markup languages. As scientific markup languages proliferate, it is very desirable to have a single scheme for handling units to facilitate moving information between different data domains. At NIST, we have been developing a general markup language just for units, which we call UnitsML. This presentation describes how UnitsML is used and how it is being incorporated into AnIML.
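The idea of keeping units machine-readable next to the value they qualify can be illustrated with the standard library's XML tools. The element and attribute names below are invented for illustration; they are not the real AnIML or UnitsML schemas.

```python
import xml.etree.ElementTree as ET

# Illustrative only: "Result", "Value", and "Unit" are made-up names
# showing the pattern of pairing a measured value with an explicit
# units element, rather than the actual AnIML/UnitsML vocabularies.
result = ET.Element("Result", name="peak_wavelength")
value = ET.SubElement(result, "Value")
value.text = "254.1"
unit = ET.SubElement(result, "Unit", symbol="nm")
unit.text = "nanometre"

xml_text = ET.tostring(result, encoding="unicode")
assert "254.1" in xml_text and 'symbol="nm"' in xml_text
```

Because the unit travels inside the same tagged structure as the value, a spectrum or chromatogram serialized this way cannot silently lose its units the way a bare number in a separate file can.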
14.
15.
Flavia D. Frumosu, Murat Kulahci. Quality and Reliability Engineering International, 2019, 35(5): 1408-1423
As a direct consequence of the digitalization of production systems, high-frequency and high-dimensional data have become more easily available. In terms of data analysis, latent-structure methods are often employed when analyzing multivariate and complex data. However, these methods are designed for supervised learning problems in which sufficient labeled data are available. Particularly at fast production rates, quality characteristics data tend to be scarcer than the process data generated by multiple sensors and automated data collection schemes. One way to overcome the problem of scarce outputs is to employ semi-supervised learning methods, which use both labeled and unlabeled data. It has been shown that a semi-supervised approach is advantageous when the labeled and unlabeled data come from the same distribution. In real applications, however, the unlabeled data may contain outliers or even a drift in the process, which affects the performance of semi-supervised methods. The research question addressed in this work is how to detect outliers in the unlabeled data set using the scarce labeled data set. An iterative strategy is proposed using combined Hotelling's T2 and Q statistics and applied with a semi-supervised principal component regression (SS-PCR) approach on both simulated and real data sets.
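The screening idea, score each unlabeled point against a model built from the scarce labeled set and flag points beyond a cutoff, can be shown in one dimension, where Hotelling's T2 reduces to a squared z-score. The data values and cutoff are invented for illustration; the paper works with multivariate T2 and Q statistics from a PCA model.

```python
from statistics import mean, stdev

# Scarce labeled data define the reference distribution.
labeled = [9.8, 10.1, 10.0, 9.9, 10.2]
# Unlabeled data may contain outliers or drift; 15.3 is one such point.
unlabeled = [10.0, 9.7, 15.3, 10.1]

m, s = mean(labeled), stdev(labeled)
# Univariate Hotelling-style statistic: squared standardized distance.
t2 = [((x - m) / s) ** 2 for x in unlabeled]

CUTOFF = 9.0  # roughly a 3-sigma limit in one dimension (illustrative)
outliers = [x for x, t in zip(unlabeled, t2) if t > CUTOFF]
assert outliers == [15.3]
```

In the multivariate setting the same comparison is made in the latent space (T2) and in the residual space (Q), so that both in-model and off-model deviations in the unlabeled data are caught.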
16.
17.
Linshan Shen, Ye Tian, Liguo Zhang, Guisheng Yin, Tong Shuai, Shuo Liang, Zhuofei Wu. Computers, Materials & Continua, 2022, 73(1): 465-476
Semi-supervised deep learning, driven by a small amount of labeled data and a large amount of unlabeled data, has achieved excellent performance in image processing. However, existing semi-supervised learning techniques assume that the labeled and unlabeled data follow the same distribution, and their performance depends mainly on this assumption holding; when the unlabeled data contains out-of-class samples, performance suffers. In practical applications it is difficult to ensure that unlabeled data contains no out-of-category samples, especially in the field of Synthetic Aperture Radar (SAR) image recognition. To solve the problem that out-of-class samples in the unlabeled data degrade the model, this paper proposes a semi-supervised learning method based on threshold filtering: during training, the model selects the data twice, filtering out unlabeled samples that fall outside the known categories to optimize performance. Experiments were conducted on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset and compared with several state-of-the-art semi-supervised classification approaches, confirming the superiority of our method, especially when the unlabeled data contains a large amount of out-of-category data.
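The filtering step can be sketched as a confidence threshold on the model's class probabilities. Everything here is illustrative: the threshold value, the toy classifier, and the sample labels are invented, and the paper's method applies its selection twice during real training rather than once on a toy function.

```python
# Illustrative threshold: unlabeled samples whose best in-class
# probability falls below it are treated as out-of-category.
THRESHOLD = 0.8

def filter_unlabeled(samples, predict_proba):
    kept, rejected = [], []
    for x in samples:
        probs = predict_proba(x)   # per-class probabilities for sample x
        if max(probs) >= THRESHOLD:
            kept.append(x)         # confidently in-category: keep for training
        else:
            rejected.append(x)     # low confidence: filter out
    return kept, rejected

# Toy stand-in for a trained 3-class classifier.
def toy_proba(x):
    return [0.90, 0.05, 0.05] if x == "in_class" else [0.40, 0.35, 0.25]

kept, rejected = filter_unlabeled(["in_class", "out_of_class"], toy_proba)
assert kept == ["in_class"] and rejected == ["out_of_class"]
```

The rationale is that an out-of-category SAR chip tends to spread probability mass across classes, so a low maximum probability is a usable rejection signal.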
18.
An important issue for deep learning models is the acquisition of training data. Without abundant data from real production environments for training, deep learning models would not be as widely used as they are today. However, the cost of obtaining abundant real-world data is high, especially for underwater environments, so it is more straightforward to simulate data close to that from the real environment. In this paper, a simple and easy symmetric learning data augmentation model (SLDAM) is proposed for expanding and generating underwater target radiated-noise data. Taking the optimal classifier of an initial dataset as the discriminator, the SLDAM uses the structure of the classifier to construct a symmetric generator based on adversarial generation. It generates data similar to the initial dataset that can be used to supplement the training set. The model takes both feature loss and sample loss into account during training and reduces the dependence of generation and expansion on the feature set. We verified that the SLDAM performs data expansion with low computational complexity. Our results showed that the SLDAM generates new data without compromising recognition accuracy, making it practical for use in a production environment.
19.
Outlier detection is a key research area in data mining, since it can identify data that are inconsistent with the rest of a data set. Outlier detection aims to find a small amount of abnormal data within a large data set and has been applied in many fields, including fraud detection, network intrusion detection, disaster prediction, medical diagnosis, public security, and image processing. Although outlier detection is widely applied in real systems, its effectiveness is challenged by high dimensionality and redundant data attributes, which lead to detection errors and complicated calculations; the prevalence of mixed data is a further issue for current outlier detection algorithms. An outlier detection method for mixed data based on neighborhood combinatorial entropy is studied, which improves detection performance by reducing the data dimension with an attribute reduction algorithm: the significance of each attribute is determined, and the less influential attributes are removed based on neighborhood combinatorial entropy. Outlier detection is then conducted using the local outlier factor algorithm. The proposed method applies effectively to numerical and mixed multidimensional data. In the experimental part of this paper, we compare outlier detection before and after attribute reduction, and the comparative analysis shows enhanced detection accuracy after removing the less influential attributes on numerical and mixed multidimensional data.
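The detection stage can be sketched with a simple density-style score. This is a crude stand-in for the local outlier factor, scoring each point by its mean distance to its K nearest neighbours, and it omits the paper's neighborhood-combinatorial-entropy attribute reduction step; the data points and K are invented for illustration.

```python
from math import dist

# Four clustered points and one isolated point (the outlier).
points = [(1.0, 1.1), (0.9, 1.0), (1.1, 0.9), (1.0, 1.0), (5.0, 5.0)]
K = 2

def knn_score(p, data, k):
    # Mean distance to the k nearest neighbours; large values indicate
    # points lying in sparse regions, the intuition behind LOF.
    distances = sorted(dist(p, q) for q in data if q is not p)
    return sum(distances[:k]) / k

scores = [knn_score(p, points, K) for p in points]
assert scores[4] == max(scores)  # the isolated point scores highest
```

The true local outlier factor goes one step further and normalizes each point's density by that of its neighbours, which makes the score robust when clusters of different densities coexist.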