Similar Documents
20 similar documents found.
1.
Application of OpenDWG technology in engineering drawing management   (cited: 1; self-citations: 0; others: 1)
Based on an analysis of OpenDWG technology, a technical scheme for engineering drawing management is proposed, offering new development approaches for drawing entry, review, browsing, plotting, and exchange. The paper outlines the implementation of drawing data extraction, automatic title-block field assignment, a lightweight drawing viewer, and various drawing-format conversion tools. Modules developed with OpenDWG can be embedded in different drawing management systems as executables or components, and practical use shows satisfactory results.

2.
This paper introduces a packaging machinery drawing and document management system (PMDMS) that integrates graphical and textual data. Using a file-system-assisted database management technique and the concept of an integrated graphics-and-text database, the system manages drawings and technical documents in a unified way. Developed in Delphi, it offers a clean interface, combined graphics and text, and convenient operation.

3.
Network middleware management provides a data-exchange format that allows data to be exchanged between different systems or applications. A middleware processing structure traverses the data: each middleware node stores or processes the data and passes its result on to the adjacent node.

4.
Through analysis of feature technology and based on feature-based modeling, a complete product data information model is designed, and a data information model for an integrated part-level CAD/CAPP system is proposed; information exchange and integration between CAD and CAPP is achieved over a network.

5.
Zhang Rong. 《流程工业》, 2014, (14): 16-17
"In industrial design, thirty-odd years ago design work was done mainly on drawing boards and paper, with data recorded page by page on drawings; twenty-odd years ago design software appeared, and with it some digitized data; today, in industrial design, we must not only produce data but make that data create value." So began the talk given by Mr. Andreas Geiss, Vice President of the Industry Automation Division of Siemens' Industry Sector and head of the Siemens COMOS industrial software unit, at the Siemens Industry Forum on July 10. In the era of big data, data is changing how people live and produce. How does Siemens use industrial software to make engineering data create value for users? Managers from Siemens COMOS industrial software, at both global and China level, give their answers.

6.
Cheng Jing. 《中国科技博览》, 2014, (40): 164-165
Using VBA for secondary development of AutoCAD, with Excel as the customization tool, a fully customizable title-block filler was developed; it can also automatically generate a drawing list from existing CAD drawings and export the drawing index in dwg or xls format.
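The paper's tool is built in VBA inside AutoCAD; as a language-neutral illustration of the drawing-list step, here is a minimal Python sketch that turns extracted title-block records into an index table. The field names (`number`, `title`, `scale`) are assumptions for illustration, not the paper's actual schema.

```python
import csv
import io

def build_drawing_index(title_blocks, out):
    """Write a drawing-list table from extracted title-block records.

    title_blocks: iterable of dicts with hypothetical keys
    'number', 'title', 'scale' (illustrative field names only).
    """
    writer = csv.DictWriter(out, fieldnames=["number", "title", "scale"])
    writer.writeheader()
    for rec in sorted(title_blocks, key=lambda r: r["number"]):
        writer.writerow(rec)

blocks = [
    {"number": "A-02", "title": "Section B-B", "scale": "1:50"},
    {"number": "A-01", "title": "Floor plan", "scale": "1:100"},
]
buf = io.StringIO()
build_drawing_index(blocks, buf)
```

In the actual tool the records would come from reading title-block attributes in each drawing, and the output would go to xls rather than an in-memory CSV.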

7.
Analog/digital scanning technology for technical drawings. The United States recently introduced an analog/digital scanning technology for technical drawings named "SanVantagr" (Scan Advantage). A revolutionary new technology, it combines photographic reproduction with digital scanning. Because it is based on photographic technology, its results are far superior to those of scan-and-edit pro…

8.
Research and application of an ebXML-based e-business standard architecture   (cited: 1; self-citations: 0; others: 1)
In recent years the Extensible Markup Language (XML) has developed rapidly and has become the preferred way to define data-exchange formats between new e-business applications on the Internet. ebXML provides an open, XML-based technical framework through which e-business data can be exchanged in a consistent, uniform way across application-to-application, application-to-person, and person-to-application scenarios.
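The core idea above is that both sides of an exchange agree on an XML message format. A minimal sketch of one side producing and the other side parsing such a message follows; the element names are invented for illustration and are not the actual ebXML message schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical order message; element names are illustrative only.
order = ET.Element("Order", id="PO-1001")
ET.SubElement(order, "Buyer").text = "ACME"
item = ET.SubElement(order, "Item", sku="X-9")
ET.SubElement(item, "Qty").text = "3"

payload = ET.tostring(order, encoding="unicode")

# The receiving application parses the same agreed-upon format.
parsed = ET.fromstring(payload)
qty = int(parsed.find("./Item/Qty").text)
```

Real ebXML adds registries, collaboration profiles, and messaging on top of this basic "shared XML vocabulary" idea.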

9.
At the operational level, drafting involves two kinds of elements. The first is non-scale elements such as borders, text, symbols, and dimension annotations; these are unaffected by the drawing's scale and follow set standards. The second is the drawing content itself, which is normally drawn at some scale. In AutoCAD the two are handled very differently: non-scale elements drawn in model space must be enlarged according to the Unified Rules for Building CAD Drawings (《房屋建筑CAD图统一规则》) and the drawing scale, while the lengths of the drawing content can be entered directly in model space.

10.
The engineering information computer management system (MISED): a mixed multi-media information management system. Beijing Film Machinery Research Institute, Zhang Yiming, Dong Jianwei. 1. Introduction. The technical archives of enterprises and institutions consist of drawings and of the textual material generated around, or related to, those drawings. A drawing is an important technical document that expresses an engineer's design intent and describes a product's struct…

11.
There are many cloud data security techniques and algorithms that can detect attacks on cloud data, but they cannot protect the data from an attacker. Cloud cryptography is the best way to transmit data in a secure and reliable form. Researchers have developed various mechanisms for secure data transfer that convert data from readable to unreadable form, but these algorithms do not provide complete data security; each has its own weaknesses. With effective protection techniques in place, an attacker cannot decipher the encrypted data, and even an attacker who tampers with the data gains no access to the original. This paper develops several data security techniques that together protect data from attackers completely. First, a customized American Standard Code for Information Interchange (ASCII) table is built, with a value defined for each index; an attacker trying to decrypt the data will apply the standard ASCII table to the ciphertext, which would otherwise help the attacker recover it. Next, a radix-64 encryption mechanism doubles the number of cipher values relative to the original data, so an attacker who decrypts each value obtains data with no relation to the original. Finally, a Hill matrix algorithm generates a key that is valid only for the exact plaintext for which it was created and cannot be reused for any other plaintext; the scope of each Hill key is limited to its own text.
The techniques in this paper are compared with those of various prior papers, and it is discussed how far the proposed algorithm improves on them. The Kasiski test is then used to check the proposed algorithm's strength; the results indicate that, when it is used for data encryption, an attacker cannot break its security using any known technique or algorithm.
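To make the Hill-matrix step concrete, here is a minimal 2×2 Hill cipher over the alphabet A–Z (mod 26). This is a textbook sketch of the underlying matrix cipher only; the paper's customized ASCII table, radix-64 step, and per-plaintext key generation are omitted.

```python
# Classic 2x2 Hill cipher sketch (mod 26).
KEY = [[3, 3], [2, 5]]            # det = 9, invertible mod 26
KEY_INV = [[15, 17], [20, 9]]     # KEY_INV @ KEY == I (mod 26)

def hill(text, key):
    """Apply the key matrix to successive letter pairs (A..Z -> 0..25)."""
    nums = [ord(c) - 65 for c in text]
    out = []
    for i in range(0, len(nums), 2):
        x, y = nums[i], nums[i + 1]
        out.append(chr((key[0][0] * x + key[0][1] * y) % 26 + 65))
        out.append(chr((key[1][0] * x + key[1][1] * y) % 26 + 65))
    return "".join(out)

cipher = hill("HELP", KEY)        # encrypt with KEY
plain = hill(cipher, KEY_INV)     # decrypt with the inverse matrix
```

Decryption works because applying the modular inverse matrix undoes the key multiplication pair by pair; a key is usable only if its determinant is coprime with 26.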

12.
In the big data era, data unavailability, whether temporary or permanent, has become a daily occurrence. Unlike permanent data failures, which are repaired by a background job, temporarily unavailable data is recovered on the fly to serve the ongoing read request. However, that newly revived data is discarded after serving the request, on the assumption that data experiencing a temporary failure may come back alive later. Discarding failed data this way prevents failure information from being shared among clients and triggers many unnecessary data recovery processes (caused, for example, by recurring unavailability of the same data or by multiple failures in one stripe), thereby straining system performance.
To this end, this paper proposes GFCache, which caches corrupted data for the dual purposes of sharing failure information and eliminating unnecessary data recovery processes. GFCache employs an opportunistic, greedy caching approach that promotes not only the failed data but also failure-likely sequential data in the same stripe. GFCache also includes FARC (Failure ARC), a cache replacement algorithm that balances failure recency and frequency to accommodate data corruption with a good hit ratio. Data stored in GFCache supports fast reads on the normal data-access path. Furthermore, since GFCache is a generic failure cache, it can be used wherever erasure coding is deployed, with any coding scheme and parameters. Evaluations show that GFCache achieves a good hit ratio with its caching algorithm and significantly boosts system performance by keeping vulnerable data in the cache and reducing unnecessary recoveries.
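The flavor of a recency-and-frequency-balancing failure cache can be sketched with a toy eviction policy: evict the entry with the lowest (frequency, last-access) pair. This is a stand-in illustration only, not the paper's actual FARC algorithm.

```python
import itertools

class FailureCache:
    """Toy failure cache: evicts the entry with the lowest
    (frequency, recency) score. An illustrative stand-in, not FARC."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.tick = itertools.count()        # monotonically increasing clock
        self.entries = {}                    # key -> [freq, last_access, value]

    def put(self, key, value):
        if key in self.entries:
            e = self.entries[key]
            e[0] += 1; e[1] = next(self.tick); e[2] = value
            return
        if len(self.entries) >= self.capacity:
            # Victim: least frequent, then least recently touched.
            victim = min(self.entries,
                         key=lambda k: (self.entries[k][0], self.entries[k][1]))
            del self.entries[victim]
        self.entries[key] = [1, next(self.tick), value]

    def get(self, key):
        e = self.entries.get(key)
        if e is None:
            return None
        e[0] += 1; e[1] = next(self.tick)
        return e[2]

c = FailureCache(2)
c.put("blk1", b"recovered-1")
c.put("blk2", b"recovered-2")
c.get("blk1")                  # blk1 now more frequent than blk2
c.put("blk3", b"recovered-3")  # evicts blk2 (lowest freq, oldest)
```

In GFCache the cached values would be blocks recovered from erasure-coded stripes, so a later read of the same failed block hits the cache instead of triggering another recovery.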

13.
Maintaining the integrity of analytical data over time is a challenge. Years ago, data were recorded on paper that was pasted directly into a laboratory notebook. The digital age has made maintaining the integrity of data harder. Nowadays, digitized analytical data are often separated from information about how the sample was collected and prepared for analysis and how the data were acquired. The data are stored on digital media, while the related information may be written in a paper notebook or stored separately in other digital files. Sometimes the connection between this "scientific metadata" and the analytical data is lost, rendering the spectrum or chromatogram useless. We have been working with ASTM Subcommittee E13.15 on Analytical Data to create the Analytical Information Markup Language, or AnIML: a new way to interchange and store spectroscopy and chromatography data based on XML (Extensible Markup Language). XML is a language for describing what data are by enclosing them in computer-useable tags. Recording the units associated with analytical data and metadata is an essential issue that every data representation scheme and domain-specific markup language must address. As scientific markup languages proliferate, a single scheme for handling units is very desirable to ease moving information between data domains. At NIST, we have been developing a general markup language just for units, which we call UnitsML. This presentation will describe how UnitsML is used and how it is being incorporated into AnIML.
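The core idea of a units markup scheme is that a measured value carries a reference to a machine-readable unit definition instead of a bare string. The following sketch shows that pattern with invented, simplified tag names; it is not the real UnitsML or AnIML schema.

```python
import xml.etree.ElementTree as ET

# Illustrative only: a value that references an explicit unit definition.
doc = ET.Element("Result")
val = ET.SubElement(doc, "Value", unitRef="u-nm")
val.text = "532.0"
unit = ET.SubElement(doc, "Unit", id="u-nm")
ET.SubElement(unit, "Symbol").text = "nm"

# A consumer resolves the value's unit through the reference.
parsed = ET.fromstring(ET.tostring(doc, encoding="unicode"))
ref = parsed.find("Value").get("unitRef")
symbol = parsed.find(f"Unit[@id='{ref}']/Symbol").text
```

Because the unit is a structured definition rather than free text, a spectrum can keep its meaning when moved between data domains.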

14.
This paper introduces a management system for total-station survey data. Built on FoxPro for Windows with a modular structure, the system supports entry and editing of survey data, automatic conversion, result computation, and query and display, integrating data management and processing.

15.
As a direct consequence of production systems' digitalization, high-frequency and high-dimensional data have become more easily available. In data analysis, latent-structure methods are often employed for multivariate and complex data, but these methods are designed for supervised learning problems with sufficient labeled data. Particularly at fast production rates, quality-characteristic data tend to be scarcer than the process data generated by multiple sensors and automated data collection. One way to overcome scarce outputs is semi-supervised learning, which uses both labeled and unlabeled data; this has been shown advantageous when the labeled and unlabeled data come from the same distribution. In real applications, however, the unlabeled data may contain outliers or even a process drift, which degrades semi-supervised methods. The research question addressed here is how to detect outliers in the unlabeled data set using the scarce labeled data set. An iterative strategy combining Hotelling's T² and Q statistics is proposed and applied with a semi-supervised principal component regression (SS-PCR) approach on both simulated and real data sets.
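The two statistics named above can be computed from an ordinary PCA model: T² measures distance inside the retained latent subspace, Q (the squared prediction error) measures distance from it. A generic numpy sketch on simulated data follows; it illustrates the statistics only, not the paper's full iterative SS-PCR procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X[0] += 8.0                       # plant one obvious outlier

Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2                             # retained principal components
P = Vt[:k].T                      # loadings (5 x k)
lam = (S[:k] ** 2) / (len(X) - 1) # variances of retained components

T = Xc @ P                        # scores in the latent subspace
t2 = np.sum(T**2 / lam, axis=1)   # Hotelling's T2 per sample
E = Xc - T @ P.T                  # residuals outside the subspace
q = np.sum(E**2, axis=1)          # Q statistic (SPE) per sample
```

In the paper's strategy these scores would be computed iteratively and compared against control limits to flag and remove suspect unlabeled samples.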

16.
Chen Ziquan, Du Xuanmin. 《声学技术》, 2010, 29(6): 583-586
To meet the needs of integrated sonar systems, a networked element-level data transmission system based on ATM (Asynchronous Transfer Mode) is proposed. The application of ATM to real-time sonar data acquisition and transmission is discussed; the relationship between ATM transmission efficiency and data frame size is given; and the design of the ATM transmission nodes, the ATM switch, and the overall transmission system is described. The system has been successfully applied in a sonar, providing it with a 622 Mbps underwater acoustic data network and an effective way to share element-level data among multiple sonars.
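The efficiency-versus-frame-size relationship mentioned above follows from the fixed ATM cell format: each cell is 53 bytes with a 5-byte header and 48 bytes of payload, and AAL5 adds an 8-byte trailer and pads the frame to a whole number of cells. A small sketch of that calculation:

```python
import math

CELL, PAYLOAD, AAL5_TRAILER = 53, 48, 8   # bytes

def atm_efficiency(frame_bytes):
    """User bytes delivered per wire byte for one AAL5 frame."""
    cells = math.ceil((frame_bytes + AAL5_TRAILER) / PAYLOAD)
    return frame_bytes / (cells * CELL)
```

Small frames waste most of a cell (a 40-byte frame fits one cell: 40/53 ≈ 0.755), while large frames approach the 48/53 ≈ 0.906 ceiling, which is why frame size matters for the sonar data stream.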

17.
Semi-supervised deep learning, driven by a small amount of labeled data and a large amount of unlabeled data, has achieved excellent performance in image processing. However, existing semi-supervised techniques assume that the labeled and unlabeled data come from the same distribution, and their performance depends mainly on that assumption holding. When the unlabeled data contains out-of-class samples, performance suffers. In practical applications it is difficult to guarantee that unlabeled data contains no out-of-class samples, especially in Synthetic Aperture Radar (SAR) image recognition. To address the problem that out-of-class samples in the unlabeled data degrade model performance, this paper proposes a threshold-filtering semi-supervised learning method: during training, the model selects the data twice, filtering out out-of-class unlabeled samples to optimize performance. Experiments on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset, compared with several state-of-the-art semi-supervised classification approaches, confirm the superiority of the method, especially when the unlabeled data contains a large amount of out-of-class data.
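The basic threshold-filtering idea can be sketched generically: keep only unlabeled samples whose top predicted class probability clears a confidence threshold, on the premise that out-of-class samples tend to receive low confidence. This is a sketch of the general idea, not the paper's exact two-stage selection.

```python
import numpy as np

def filter_unlabeled(probs, tau=0.9):
    """Keep indices of unlabeled samples whose top softmax probability
    reaches tau; low-confidence samples (likely out-of-class) are dropped."""
    conf = probs.max(axis=1)
    return np.nonzero(conf >= tau)[0]

# Hypothetical model outputs for three unlabeled samples, three classes.
probs = np.array([
    [0.97, 0.02, 0.01],   # confident: likely in-class
    [0.40, 0.35, 0.25],   # ambiguous: likely out-of-class
    [0.05, 0.93, 0.02],   # confident: likely in-class
])
kept = filter_unlabeled(probs)
```

The kept samples would then be pseudo-labeled and fed back into training, while the filtered ones are excluded from the semi-supervised loss.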

18.
An important issue for deep learning models is the acquisition of training data. Without abundant data from real production environments, deep learning models would not be as widely used as they are today. However, the cost of obtaining abundant real-world data is high, especially for underwater environments; it is more straightforward to simulate data close to that from the real environment. This paper proposes a simple and easy symmetric learning data augmentation model (SLDAM) for expanding and generating underwater target radiated-noise data. Taking the optimal classifier of an initial dataset as the discriminator, SLDAM uses the classifier's structure to construct a symmetric generator based on adversarial generation, producing data similar to the initial dataset that can supplement the training set. The model takes both feature loss and the sample loss function into account during training and reduces the dependence of generation and expansion on the feature set. We verified that SLDAM performs data expansion with low computational complexity. Our results show that it generates new data without compromising recognition accuracy, making it practical for application in production environments.

19.
Outlier detection is a key research area in data mining, since it can identify data inconsistent with the rest of a data set. It aims to find a small number of abnormal records in a large data set and has been applied in many fields, including fraud detection, network intrusion detection, disaster prediction, medical diagnosis, public security, and image processing. While outlier detection is widely applied in real systems, its effectiveness is challenged by high dimensionality and redundant attributes, which lead to detection errors and complicated calculations; the prevalence of mixed data is a further issue for current algorithms. This paper studies an outlier detection method for mixed data based on neighborhood combinatorial entropy, which improves detection performance by reducing data dimensionality with an attribute reduction algorithm: the significance of each attribute is determined, the less influential attributes are removed based on neighborhood combinatorial entropy, and outlier detection is then performed with the local outlier factor algorithm. The proposed method applies effectively to numerical and mixed multidimensional data. The experimental part compares outlier detection before and after attribute reduction and shows that detection accuracy is enhanced by removing the less influential attributes.
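The final detection step uses the local outlier factor; as a lightweight stand-in for LOF's density idea, here is a simplified distance-based score (mean distance to the k nearest neighbors) on the attribute-reduced data. This is an illustration of neighborhood-based scoring, not the paper's full LOF or entropy-based reduction.

```python
import numpy as np

def knn_outlier_scores(X, k=3):
    """Mean distance to the k nearest neighbors; higher = more outlying."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)          # ignore self-distance
    nearest = np.sort(d, axis=1)[:, :k]
    return nearest.mean(axis=1)

# Four clustered points and one isolated point.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [0.1, 0.1], [5.0, 5.0]])
scores = knn_outlier_scores(X)
```

LOF refines this by comparing each point's local density with that of its neighbors, which makes it robust to clusters of differing density; attribute reduction shrinks the distance computation to the influential attributes first.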

20.
Research on visualization of industrial CT slice data   (cited: 3; self-citations: 0; others: 3)
Reverse engineering based on industrial CT slice data has become a new direction in reverse engineering, and visualization of the slice data is an important part of it. This paper studies data acquisition methods, binarization of the acquired data, and boundary extraction; in view of the characteristics of the acquired data, a dynamic three-dimensional matrix is built and a simplified volume-rendering algorithm is proposed.
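The binarization and boundary-extraction steps can be sketched on a single slice with numpy: threshold the gray values, then mark foreground pixels that touch the background through a 4-neighborhood. The threshold value and neighborhood choice are illustrative assumptions, not the paper's specific parameters.

```python
import numpy as np

def binarize(slice_, threshold):
    """Threshold one CT slice into a 0/1 mask."""
    return (slice_ >= threshold).astype(np.uint8)

def boundary(mask):
    """Foreground pixels with at least one 4-neighbor in the background."""
    padded = np.pad(mask, 1)             # zero border so edges count as background
    neigh_min = np.minimum.reduce([
        padded[:-2, 1:-1], padded[2:, 1:-1],   # up, down
        padded[1:-1, :-2], padded[1:-1, 2:],   # left, right
    ])
    return (mask == 1) & (neigh_min == 0)

slice_ = np.array([[0, 0, 0, 0],
                   [0, 9, 9, 0],
                   [0, 9, 9, 0],
                   [0, 0, 0, 0]])
mask = binarize(slice_, 5)
edge = boundary(mask)
```

Stacking the binarized slices yields the three-dimensional matrix that the simplified volume-rendering algorithm traverses.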


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号