Similar Articles
1.
How to Make Good Use of the Anyi (安易) Accounting Network Software on Novell Diskless Workstations    Cited by: 1 (self-citations: 0, external citations: 1)
Diskless workstations combine resource sharing with low-cost hardware, easy administration, and protection against virus intrusion, which makes them a network configuration commonly adopted both for teaching and for practical use of computerized accounting software. Anyi is a computerized accounting package that is easy to learn and use, functionally complete, and comparatively stable and secure. Drawing on years of experience in teaching computerized accounting and maintaining accounting software, the author discusses how to make good use of the Anyi accounting network software. 1 Setting up the network environment 1) When generating the diskless workstation's remote boot image file NET$DOS.SYS, CONFIG.SYS should contain the following statements: DEVICE=HIMEM.SYS, FILES=40, BUFFERS=20, DOS=HIGH,UMB. 2) Properly configure…

2.
A new structure-adaptive radial basis function (RBF) neural network model is proposed. In this network, a self-organizing map (SOM) neural network serves as the clustering network: using an unsupervised learning algorithm, it classifies the input samples in a self-organized way and passes the cluster centers and their associated weight vectors to the RBF neural network, where they become the radial basis function centers and the corresponding weight vectors. The RBF neural network serves as the base network: Gaussian functions realize the nonlinear mapping from the input layer to the hidden layer, and the output layer is trained with a supervised learning algorithm, realizing the nonlinear mapping from the input layer to the output layer. Simulations on a letter data set show that the network performs well.
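
The two-stage scheme this abstract describes — an unsupervised SOM supplying cluster centers that become the RBF centers, followed by supervised training of the output layer — can be sketched as follows. This is a minimal illustration under assumptions, not the paper's code: the toy data, the neighborhood-free winner-take-all SOM update, and the least-squares output training are all simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som_centers(X, n_centers, epochs=50, lr0=0.5):
    """Stage 1 (unsupervised): a toy winner-take-all SOM that clusters the
    input samples; the learned centers are handed to the RBF layer."""
    centers = X[rng.choice(len(X), n_centers, replace=False)].copy()
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)                         # decaying rate
        for x in X[rng.permutation(len(X))]:
            w = np.argmin(np.linalg.norm(centers - x, axis=1))  # winning unit
            centers[w] += lr * (x - centers[w])                 # move winner
    return centers

def rbf_hidden(X, centers, sigma=1.0):
    """Gaussian nonlinear mapping from the input layer to the hidden layer."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

# Stage 2 (supervised): train the output-layer weights, here by least squares.
X = rng.normal(size=(200, 16))                    # stand-in for the letter data
Y = np.eye(26)[rng.integers(0, 26, 200)]          # one-hot class targets
H = rbf_hidden(X, train_som_centers(X, 30))
W_out = np.linalg.lstsq(H, Y, rcond=None)[0]      # hidden -> output weights
pred = (H @ W_out).argmax(axis=1)                 # predicted letter classes
```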

3.
Design of Multilayer Feedforward Neural Networks Based on an Immune Genetic Algorithm    Cited by: 14 (self-citations: 0, external citations: 14)
罗菲, 何明一. 《计算机应用》, 2005, 25(7): 1661-1662
A genetic algorithm with an immune mechanism is used to design multilayer feedforward neural networks, determining the network structure and searching the weight space. Simulation results show that the algorithm offers better global convergence and faster learning of network weights than both the plain genetic algorithm and the momentum BP algorithm.

4.
Among the many new features of MS-DOS 6, the most distinctive is MULTICONFIG: the support for multiple selectable configurations in CONFIG.SYS. This article introduces the use of this feature through examples.
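
For reference, a minimal multi-configuration CONFIG.SYS of the kind the article discusses might look like the sketch below. The block names, menu descriptions, and the network driver path are hypothetical; the [menu]/menuitem/menudefault/[common] directives themselves are standard MS-DOS 6 syntax.

```
[menu]
menuitem=BASIC, Basic configuration (no network)
menuitem=NET, Configuration with network drivers
menudefault=BASIC, 10

[common]
DEVICE=C:\DOS\HIMEM.SYS
DOS=HIGH,UMB
FILES=40
BUFFERS=20

[BASIC]
REM nothing extra beyond the [common] block

[NET]
DEVICE=C:\NET\NETCARD.SYS
```

At boot, MS-DOS 6 displays the menu and stores the chosen block name in the %CONFIG% environment variable, so AUTOEXEC.BAT can branch on the selection (e.g., GOTO %CONFIG%).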

5.
赵建华, 李伟华. 《计算机工程》, 2012, 38(12): 110-111
To improve the classification performance of the self-organizing map (SOM) neural network, a supervised SOM network (SSOM) is proposed. An output layer is added above the input and competitive layers; the weights are adjusted with different update formulas according to the predicted class of each input sample, and the network is trained accordingly. By combining the two sets of weights, regression and statistics over the sample classes are realized. Experimental results on the KDD CUP99 intrusion detection data set show that, compared with other SOM networks, SSOM achieves better classification performance and a higher intrusion detection rate.
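
One plausible reading of the class-dependent update — reward the winning unit when the predicted class is right, punish it otherwise, while the added output layer tallies class votes — is sketched below. The LVQ-style rule and the vote-counting output layer are assumptions, not the paper's exact formulas.

```python
import numpy as np

def train_ssom(X, y, n_units, n_classes, epochs=30, lr=0.2):
    """Sketch of a supervised SOM: a competitive layer plus an added output
    layer. The class-dependent (LVQ-style) update rule is an assumption."""
    rng = np.random.default_rng(1)
    W = rng.normal(size=(n_units, X.shape[1]))   # input -> competitive layer
    V = np.zeros((n_units, n_classes))           # competitive -> output layer
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            w = np.argmin(np.linalg.norm(W - X[i], axis=1))  # winning unit
            if V[w].argmax() == y[i]:
                W[w] += lr * (X[i] - W[w])       # right class: pull closer
            else:
                W[w] -= lr * (X[i] - W[w])       # wrong class: push away
            V[w, y[i]] += 1                      # output layer tallies votes
    return W, V

def ssom_predict(W, V, X):
    winners = np.linalg.norm(X[:, None] - W[None], axis=2).argmin(axis=1)
    return V[winners].argmax(axis=1)
```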

6.
The Network File System (NFS) is a mechanism that allows users on different operating platforms to share a file system over a network. A user's file system is thereby extended from a single local system to a virtual system consisting of the local file system plus one or more remote file systems, and the user notices no difference between remote and local file systems. Because NFS provides the file-server functionality of a network operating system and is convenient to use and maintain, it is widely applied in many systems. 1 Working principle: NFS consists of two parts, NFS servers and NFS clients, connected in a star topology, with the NFS server at the center and the NFS clients at the end points (see Fig. 1; Fig. 2 shows the NFS topology). NF…

7.
This article introduces the implementation principle of the Transaction Tracking System (TTS) in NetWare networks and, taking a FoxPro database as an example, explains how to use NetWare's TTS to ensure data consistency and integrity.

8.
Implementing Hot-Key Input under FoxBASE+ V2.10. 阎亚林 (Taiyuan Heavy Machinery Plant). This article introduces a hot-key input method used while entering and modifying Chinese-character information in a database. The implementation mainly relies on the command ON KEY = <expN> [COMMAND], the function SYS(18), and a hot-key database. The command ON K…

9.
杨洪. 《微机发展》, 1997, 7(1): 8-10
PC-NFS is a TCP/IP software package that runs on PCs and provides a variety of application programming interfaces for building network applications. This article introduces a method of using the PC-NFS toolkit to implement task-level real-time communication between DOS and UNIX.

10.
Several classical techniques have evolved over the years for denoising binary images, but their main disadvantage is that a priori information about the noise characteristics is required during the extraction process. Among the intelligent techniques in vogue, the multilayer self-organizing neural network (MLSONN) architecture is suitable for binary image preprocessing tasks. In this article, we propose a quantum version of the MLSONN architecture. Like the MLSONN architecture, the proposed quantum multilayer self-organizing neural network (QMLSONN) architecture comprises three processing layers, viz. input, hidden and output layers. The different layers contain qubit-based neurons. Single-qubit rotation gates serve as the network layer interconnection weights. A quantum measurement at the output layer destroys the quantum states of the processed information, so linear indices of fuzziness are incorporated as the network system errors used to adjust the network interconnection weights through a quantum backpropagation algorithm. Results of applying the proposed QMLSONN are demonstrated on a synthetic and a real-life binary image with varying degrees of Gaussian and uniform noise. A comparative study with the results obtained with the MLSONN architecture and the supervised Hopfield network reveals that the QMLSONN outperforms both in terms of computation time.
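
The linear index of fuzziness used here as the system error has a standard closed form; a small sketch follows (the normalization and the use of membership values in [0, 1] follow the usual definition, assumed here to match the paper's).

```python
import numpy as np

def linear_index_of_fuzziness(mu):
    """Linear index of fuzziness of membership values mu in [0, 1]: the
    normalized distance of the fuzzy set from its nearest crisp (0/1) set."""
    mu = np.asarray(mu, dtype=float)
    return 2.0 * np.minimum(mu, 1.0 - mu).mean()

# For a processed image scaled to [0, 1]: a value near 0 means the network
# output is almost a crisp binary image; 1.0 means every pixel sits at 0.5.
print(linear_index_of_fuzziness([0.1, 0.9, 0.5]))  # ~0.467
```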

11.
A new multilayer incremental neural network (MINN) architecture and its performance in the classification of biomedical images are discussed. The MINN consists of an input layer, two hidden layers and an output layer. The first stage, between the input and first hidden layer, consists of perceptrons; the number of perceptrons and their weights are determined by defining a fitness function that is maximized by a genetic algorithm (GA). The second stage involves feature vectors, namely the codewords obtained automatically after learning the first stage. The last stage consists of OR gates that combine the nodes of the second hidden layer representing the same class. Comparative performance results for the MINN and the backpropagation (BP) network indicate that the MINN achieves faster learning, a much simpler network, and equal or better classification performance.

12.
陈华伟, 年晓玲, 靳蕃. 《计算机应用》, 2006, 26(5): 1106-1108
A new learning algorithm for feedforward neural networks is proposed. The algorithm can adjust the weights of different layers in both the forward and the backward phase: in the forward phase, the weights connecting the hidden layer to the output layer are determined by the minimum-norm least-squares solution; in the backward phase, the weights connecting the input layer to the hidden layer are adjusted by error gradient descent. The algorithm learns and converges quickly and can, to a certain extent, preserve the generalization ability of the trained network. Experimental results give a preliminary validation of its performance.
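
The alternation the abstract describes — an analytic least-squares solve for the second layer in the forward phase, a gradient step on the first layer in the backward phase — can be sketched as below. The toy regression problem, tanh activation, and learning rate are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                    # toy inputs (an assumption)
T = np.sin(X.sum(axis=1, keepdims=True))         # toy regression targets

n_hidden, lr = 20, 0.01
W1 = rng.normal(scale=0.5, size=(8, n_hidden))   # input -> hidden weights

for epoch in range(100):
    H = np.tanh(X @ W1)                          # hidden activations
    # Forward phase: hidden -> output weights via the minimum-norm
    # least-squares solution (Moore-Penrose pseudo-inverse).
    W2 = np.linalg.pinv(H) @ T
    # Backward phase: adjust input -> hidden weights by gradient descent.
    E = H @ W2 - T                               # output error
    grad_W1 = X.T @ ((E @ W2.T) * (1 - H**2))    # backprop through tanh
    W1 -= lr * grad_W1 / len(X)
```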

13.
Functional Analysis and Simulation Study of Neural Network Transfer Functions    Cited by: 1 (self-citations: 0, external citations: 1)
From the viewpoint of function mapping, and taking a three-layer feedforward network as an example, the mapping realized by a neural network is analyzed. The mapping of a feedforward network can be viewed as a generalized series expansion in which the expansion coefficients are the hidden-to-output connection weights, while the transfer function supplies a "mother basis" that, together with the input-to-hidden connection weights, constructs the different expansion functions. On this basis, the role of the transfer function in the mapping is analyzed, and it is pointed out that flexibly combining several transfer functions lets the network complete the input-to-output mapping with fewer parameters and fewer hidden nodes, thereby improving generalization. Simulation of a two-class classification problem trained with genetic optimization shows that mixed transfer functions can indeed realize the required mapping with fewer hidden nodes, lower structural complexity, and better generalization, further confirming the generalized-series-expansion view of the neural network mapping.
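
The "several transfer functions in one hidden layer" idea can be made concrete with a small sketch; the particular mix (half tanh units, half Gaussian units) and the least-squares fit of the expansion coefficients are assumptions for illustration, not the paper's setup.

```python
import numpy as np

def mixed_hidden(A):
    """Hidden layer drawing on two 'mother bases': the first half of the
    units are tanh units, the second half Gaussian units (an arbitrary
    50/50 split, chosen here purely for illustration)."""
    h = A.shape[1] // 2
    return np.concatenate([np.tanh(A[:, :h]), np.exp(-A[:, h:] ** 2)], axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))                  # toy two-class data
t = np.sign(X[:, 0] * X[:, 1])                 # a nonlinear class label
W1 = rng.normal(size=(4, 10))                  # input -> hidden weights
H = mixed_hidden(X @ W1)                       # the expansion functions
c = np.linalg.lstsq(H, t, rcond=None)[0]       # the expansion coefficients
pred = np.sign(H @ c)                          # two-class decision
```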

14.
Enrique, Ren. 《Neurocomputing》, 2007, 70(16-18): 2735
The selection of the weights of new hidden units for sequential feed-forward neural networks (FNNs) usually involves a non-linear optimization problem that cannot be solved analytically in the general case, so a suboptimal solution is sought heuristically. Most models found in the literature choose the first-layer weights of each hidden unit so that its associated output vector matches the previous residue as well as possible; the second-layer weights may or may not be optimized (in a least-squares sense). Several exceptions to the idea of matching the residue perform an (implicit or explicit) orthogonalization of the output vectors of the hidden units, in which case the second-layer weights are always optimized. An experimental study of these approaches to selecting the weights of sequential FNNs is presented. Our results indicate that orthogonalization of the output vectors of the hidden units outperforms the strategy of matching the residue, for both approximation and generalization purposes.
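
The residue-matching baseline the abstract refers to can be sketched as follows: hidden units are added one at a time, each chosen so that its output vector correlates best with the current residue, with all second-layer weights re-optimized by least squares. The candidate-sampling search and the toy problem are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
t = np.cos(X @ rng.normal(size=5))             # toy target (an assumption)

H = np.ones((len(X), 1))                       # start with a bias column
residue = t - H @ np.linalg.lstsq(H, t, rcond=None)[0]

for _ in range(10):                            # add hidden units sequentially
    # Heuristic search: sample candidate first-layer weights and keep the
    # unit whose output vector best matches the current residue.
    cands = rng.normal(size=(50, 5))
    outs = np.tanh(X @ cands.T)                # candidate output vectors
    score = np.abs(residue @ outs) / np.linalg.norm(outs, axis=0)
    best = outs[:, score.argmax()]
    # (The orthogonalization variants would instead orthogonalize `best`
    # against the existing columns of H before appending it.)
    H = np.column_stack([H, best])
    # Optimize all second-layer weights in a least-squares sense.
    w = np.linalg.lstsq(H, t, rcond=None)[0]
    residue = t - H @ w
```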

15.
A structure-based neural network (NN) with the backpropagation through structure (BPTS) algorithm is applied to image classification for organizing a large image database, a challenging problem under investigation. Many factors can affect the results of image classification; one of the most important is the architecture of the NN, which consists of an input layer, a hidden layer and an output layer. In this study, only the number of nodes in the hidden layer (hidden nodes) is varied, with all other factors kept unchanged. Two groups of experiments, each comprising 2,940 images, are used for the analysis. For the first group, the assessment is carried out with features described by image intensities; the second group uses features described by wavelet coefficients. Experimental results demonstrate that the effect of the number of hidden nodes on the reliability of classification is significant and non-linear: with 17 hidden nodes, the classification rate reaches 95% on the training set and 90% on the test set. The results indicate that 17 is an appropriate number of hidden nodes for image classification when a structure-based NN with the BPTS algorithm is applied.

16.
Analysis of Multivariable Control Systems Based on PID Neural Networks    Cited by: 62 (self-citations: 0, external citations: 62)
舒怀林. 《自动化学报》, 1999, 25(1): 105-111
The PID neural network is a new multilayer feedforward neural network whose hidden-layer units are proportional (P), integral (I) and derivative (D) units. The number of neurons in each layer, the connection pattern and the initial connection weights are determined according to the basic principles of PID control, and the network can be used for decoupling control of multivariable systems. The structure and computation method of the PID neural network are given, the convergence and stability of PID neural network multivariable control systems are proved theoretically, and computer simulations demonstrate that the PID neural network has good self-learning and adaptive decoupling control performance.
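
The distinctive ingredient — hidden units that respectively pass, accumulate, and difference their input — can be sketched as below. The discrete-time forms of the I and D units and the illustrative output weights are assumptions, not the paper's exact formulation.

```python
import numpy as np

class PIDHiddenLayer:
    """Sketch of the P/I/D hidden units of a PID neural network: at each
    time step the P unit passes its input through, the I unit accumulates
    it, and the D unit takes its first difference."""
    def __init__(self):
        self.integral = 0.0
        self.prev = 0.0

    def step(self, u):
        p = u                          # proportional unit
        self.integral += u             # integral unit: running sum
        d = u - self.prev              # derivative unit: first difference
        self.prev = u
        return np.array([p, self.integral, d])

layer = PIDHiddenLayer()
w_out = np.array([0.6, 0.1, 0.3])      # hidden -> output weights (illustrative)
for e in [1.0, 0.8, 0.5, 0.2]:         # an error signal over time
    control = w_out @ layer.step(e)    # network output = control action
```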

17.
The extreme learning machine (ELM), an emergent technique for training feed-forward neural networks, has shown good performance on various learning domains. This paper investigates the impact of the random weights used in training an ELM. It focuses on the randomness of the weights between the input and hidden layers, and on the dimension change from the input layer to the hidden layer. The direct motivation is to verify whether the randomly assigned weights exert some positive effect during ELM training. Experimentally, we show that for many classification and regression problems, the dimension increase caused by the random weights in the ELM performs better than the dimension increase caused by some kernel mappings. We assume that, via the random transformation, the output samples are more concentrated than the input samples, which makes the learning more efficient.
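
The core ELM recipe implied here — random, untrained input-to-hidden weights realizing a dimension increase, with only the output weights solved analytically — fits in a few lines. The toy data and the tanh activation are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                # toy data (an assumption)
T = np.eye(3)[rng.integers(0, 3, 500)]        # one-hot class targets

n_hidden = 200                                 # dimension increase: 10 -> 200
W_in = rng.normal(size=(10, n_hidden))         # random, never trained
b = rng.normal(size=n_hidden)

H = np.tanh(X @ W_in + b)                      # random feature mapping
W_out = np.linalg.pinv(H) @ T                  # only these weights are solved
pred = (H @ W_out).argmax(axis=1)              # predicted classes
```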

18.
We introduce a fuzzy rough granular neural network (FRGNN) model based on the multilayer perceptron, using a backpropagation algorithm for the fuzzy classification of patterns. We provide the development strategy of the network based mainly on the input vector, the initial connection weights determined by fuzzy rough set-theoretic concepts, and the target vector. While the input vector is described in terms of fuzzy granules, the target vector is defined in terms of fuzzy class membership values and zeros. Crude domain knowledge about the initial data is represented in the form of a decision table, which is divided into subtables corresponding to the different classes. The data in each decision table are converted into granular form. The syntax of these decision tables automatically determines the appropriate number of hidden nodes, while the dependency factors from all the decision tables are used as initial weights. The dependency factor of each attribute and the average degree of the dependency factor of all the attributes with respect to the decision classes are taken as the initial connection weights between the nodes of the input layer and the hidden layer, and between the hidden layer and the output layer, respectively. The effectiveness of the proposed FRGNN is demonstrated on several real-life data sets.
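
The rough-set dependency factor that seeds the initial weights has a standard crisp definition: the fraction of objects whose condition-attribute values uniquely determine their decision class. A small sketch of that quantity follows (the toy decision table is an assumption; the paper's fuzzy rough version generalizes this crisp form).

```python
from collections import defaultdict

def dependency_factor(rows, cond_idx, dec_idx):
    """Rough-set dependency factor gamma = |POS_C(D)| / |U|: the fraction of
    objects whose condition attributes determine the decision class."""
    groups = defaultdict(set)
    for row in rows:
        key = tuple(row[i] for i in cond_idx)
        groups[key].add(row[dec_idx])      # decision classes seen per key
    consistent = sum(
        sum(1 for row in rows if tuple(row[i] for i in cond_idx) == key)
        for key, classes in groups.items() if len(classes) == 1)
    return consistent / len(rows)

table = [(1, 0, 'a'), (1, 0, 'a'), (0, 1, 'b'), (0, 1, 'a')]
print(dependency_factor(table, cond_idx=(0, 1), dec_idx=2))  # 0.5
```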

19.
In this paper, a hybrid method is proposed for controlling a nonlinear dynamic system with a feedforward neural network. The learning procedure applies a different learning algorithm to each stage of the network: the weights connecting the input and hidden layers are first adjusted by a self-organized learning procedure, whereas the weights between the hidden and output layers are trained by a supervised learning algorithm, such as a gradient descent method. A comparison with backpropagation (BP) shows that the new algorithm can considerably reduce network training time.
