Similar Documents
20 similar documents retrieved.
1.
As one of the core technologies of future 5G communications, massive multiple-input multiple-output (MIMO) has been studied extensively. However, while the "massive" scale brings significant performance gains, it also poses challenges for receiver design, especially since, under resource and cost constraints, the number of base-station antennas should be kept as small as possible while still meeting performance requirements. This paper first discusses conventional detection algorithms for the MIMO setting, such as maximum likelihood (ML) detection, zero-forcing (ZF) detection, and linear minimum mean square error (LMMSE) detection. Simulation results show that the complexity of the optimal ML algorithm grows exponentially with the number of users, and that when the number of receive antennas is not sufficiently large, the suboptimal ZF and LMMSE algorithms suffer significant performance loss. To address this problem, deep-learning-based solutions are discussed, including the existing learned approximate message passing (LAMP) detector and the DetNet neural network, and a preliminary exploration based on a fully connected network structure is carried out. Simulation comparisons show that deep-neural-network-based MIMO detection can indeed improve on the conventional detection algorithms, but optimizing the network coefficients may incur high training complexity; possible remedies are discussed.
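To make the ZF/LMMSE comparison concrete, here is a minimal NumPy sketch of the two linear detectors for a flat-fading uplink y = Hx + n. The antenna counts, QPSK constellation, and SNR are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

K, M = 8, 16                  # users (transmit streams) and receive antennas: illustrative sizes
snr_db = 10.0
sigma2 = 10 ** (-snr_db / 10)

# QPSK symbols for the K users
bits = rng.integers(0, 2, size=(2, K))
x = ((2 * bits[0] - 1) + 1j * (2 * bits[1] - 1)) / np.sqrt(2)

# Rayleigh-fading channel and additive white Gaussian noise
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
n = np.sqrt(sigma2 / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
y = H @ x + n

# Zero-forcing: channel pseudo-inverse (noise is amplified when H is ill-conditioned)
x_zf = np.linalg.pinv(H) @ y

# LMMSE: the inversion is regularized by the noise variance
x_lmmse = np.linalg.solve(H.conj().T @ H + sigma2 * np.eye(K), H.conj().T @ y)

def symbol_errors(x_hat, x_true):
    """Count QPSK symbol errors by comparing quadrant decisions."""
    wrong = (np.sign(x_hat.real) != np.sign(x_true.real)) | (np.sign(x_hat.imag) != np.sign(x_true.imag))
    return int(np.sum(wrong))

print("ZF symbol errors:   ", symbol_errors(x_zf, x))
print("LMMSE symbol errors:", symbol_errors(x_lmmse, x))
```

Shrinking M toward K in this sketch reproduces the effect noted above: both linear detectors degrade quickly when the number of receive antennas is not much larger than the number of users.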

2.
Locally repairable codes (LRCs), a class of erasure codes, are widely used in distributed storage systems. To address the problems that existing LRCs meeting the optimal minimum-distance bound have low code rates and impose strong restrictions on the locality parameter, an optimal LRC construction based on square networks is proposed. The parity-check matrix of the LRC is built from a square network, and the code is constructed from this parity-check matrix; the resulting code attains the optimal rate bound, but its locality is restricted. The incidence matrices of the square network in the horizontal and vertical directions are then extended, and the extended matrices are used to construct the parity-check matrix, which improves the locality of the resulting code. Compared with existing LRCs, the constructed codes not only meet the optimal minimum-distance bound but also attain the optimal rate bound of LRCs and apply to arbitrary locality, providing a useful reference for the construction of binary optimal locally repairable codes.

3.
In the above paper by Ergezinger and Thomsen (ibid. vol.6 (1991)), a new method for training multilayer perceptrons, called optimization layer by layer (OLL), was introduced. The present paper analyzes the performance of OLL. We show, from theoretical considerations, that the amount of work required with OLL-learning scales as the third power of the network size, compared with the square of the network size for commonly used conjugate gradient (CG) training algorithms. This theoretical estimate is confirmed through a practical example. Thus, although OLL is shown to function very well for small neural networks (less than about 500 weights per layer), it is slower than CG for large neural networks. Next, we show that OLL does not always improve on the accuracy that can be obtained with CG. It seems that the final accuracy that can be obtained depends strongly on the initial network weights.

4.
For photon emission tomography, the maximum likelihood (ML) estimator for image reconstruction is generally the solution to a nonlinear equation involving the vector of measured data. No explicit closed-form solution is known in general for such a nonlinear ML equation, and numerical resolution is usually implemented, with a very popular iterative method formed by the expectation-maximization algorithm. The numerical character of such resolutions usually makes it difficult to obtain a general characterization of the performance of the ML solution. We show that the nonlinear ML equation can be replaced by an equivalent system of two dual linear equations nonlinearly coupled. This formulation allows us to exhibit explicit (to some extent) forms for the solutions to the ML equation, in general conditions corresponding to the various possible configurations of the imaging system, and to characterize their performance with expressions for the mean-squared error, bias and Cramér-Rao bound. The approach especially applies to characterize the ML solutions obtained numerically, and offers a theoretical framework contributing to a better appreciation of the capabilities of ML reconstruction in photon emission tomography.
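As a point of reference for the discussion above, the expectation-maximization iteration commonly used to solve the ML equation numerically (the MLEM algorithm) fits in a few lines. The toy system matrix, image size, and count levels below are illustrative and unrelated to the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy imaging model: y ~ Poisson(A @ lam), with A the system (projection) matrix
n_pix, n_det = 16, 32
A = rng.uniform(0.0, 1.0, size=(n_det, n_pix))
lam_true = rng.uniform(1.0, 10.0, size=n_pix)
y = rng.poisson(A @ lam_true)

# MLEM: multiplicative update that preserves nonnegativity and increases the Poisson likelihood
lam = np.ones(n_pix)
sens = A.sum(axis=0)                       # sensitivity image: sum over detectors of A[j, i]
for _ in range(200):
    ratio = y / np.maximum(A @ lam, 1e-12)
    lam = lam * (A.T @ ratio) / sens

print("relative error:", np.linalg.norm(lam - lam_true) / np.linalg.norm(lam_true))
```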

5.
The performance of all failure detection, isolation, and accommodation (DIA) algorithms is influenced by the presence of model uncertainty. The authors present a unique framework to incorporate knowledge of modeling error in the analysis and design of failure detection systems. A concept called the threshold selector is introduced: a nonlinear inequality whose solution defines the set of detectable sensor failure signals. It identifies the optimal threshold to be used in innovations-based DIA algorithms. The optimal threshold is shown to be a function of the bound on modeling errors, the noise properties, the speed of the DIA filters, and the classes of reference and failure signals. The size of the smallest detectable failure is also determined. The results are applied to a multivariable turbofan jet engine example, which demonstrates improvements compared to previous studies.
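As a toy illustration of innovations-based detection with a threshold raised above the model-uncertainty level, the sketch below thresholds a residual sequence containing noise, a bounded modeling-error term, and a sensor bias fault. The threshold rule and all numbers are illustrative assumptions, not the paper's optimal threshold selector (which also accounts for filter speed and the reference/failure signal classes).

```python
import numpy as np

rng = np.random.default_rng(7)

# Innovations (residuals) of a detection filter: zero-mean noise plus a bounded
# model-error term; a sensor bias fault starts at sample k = 300.
n, sigma, model_err_bound, fault_size = 600, 0.5, 0.3, 2.0
r = sigma * rng.standard_normal(n) + model_err_bound * np.sin(0.05 * np.arange(n))
r[300:] += fault_size

# Threshold set above the noise level plus the modeling-error bound (illustrative rule)
threshold = 3.0 * sigma + model_err_bound
alarms = np.abs(r) > threshold
print("false alarms before fault:", int(alarms[:300].sum()),
      " detections after fault:", int(alarms[300:].sum()))
```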

6.
Neural networks are often employed as tools in classification tasks. The use of large networks increases the likelihood of the task's being learned, although it may also lead to increased complexity. Pruning is an effective way of reducing the complexity of large networks. We present discriminant components pruning (DCP), a method of pruning matrices of summed contributions between layers of a neural network. Pruning the network can also aid attempts to interpret the underlying functions it has learned, while generalization performance should be maintained at its optimal level following pruning. We demonstrate DCP's effectiveness at maintaining generalization performance, its applicability to a wide range of problems, and the usefulness of such pruning for network interpretation. Possible enhancements are discussed for the identification of the optimal reduced rank and the inclusion of nonlinear neural activation functions in the pruning algorithm.
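As a rough illustration of rank-based pruning, the sketch below replaces a weight matrix by its best low-rank approximation via the SVD. This is a generic reduced-rank truncation under assumed sizes, not the exact DCP procedure, which prunes matrices of summed contributions estimated from data.

```python
import numpy as np

def reduced_rank(W, rank):
    """Return the best rank-`rank` approximation of W in the Frobenius norm."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

rng = np.random.default_rng(2)
W = rng.standard_normal((64, 32))          # hypothetical layer-to-layer weight matrix
for r in (32, 16, 8, 4):
    err = np.linalg.norm(W - reduced_rank(W, r)) / np.linalg.norm(W)
    print(f"rank {r:2d}: relative reconstruction error {err:.3f}")
```

In a pruning setting one would pick the smallest rank whose approximation error (or validation loss) stays acceptable, which is the trade-off the abstract refers to when it mentions identifying the optimal reduced rank.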

7.
Ordered binary decision diagrams (OBDDs) are one of the most common dynamic data structures for Boolean functions. Among the many areas of application are hardware verification, model checking, and symbolic graph algorithms. Threshold functions are the basic functions for discrete neural networks and are used as building blocks in the design of some symbolic graph algorithms. In this paper the first exponential lower bound on the size of a more general model than OBDDs and the first nontrivial asymptotically optimal bound on the OBDD size for a threshold function are presented.

8.
The extreme learning machine (ELM), a single-hidden-layer feedforward neural network algorithm, was tested on nine environmental regression problems. The prediction accuracy and computational speed of the ensemble ELM were evaluated against multiple linear regression (MLR) and three nonlinear machine learning (ML) techniques: artificial neural network (ANN), support vector regression and random forest (RF). Simple automated algorithms were used to estimate the parameters (e.g. number of hidden neurons) needed for model training. Scaling the range of the random weights in ELM improved its performance. Excluding large datasets (with a large number of cases and predictors), ELM tended to be the fastest among the nonlinear models. For large datasets, RF tended to be the fastest. ANN and ELM had similar skill, but ELM was much faster than ANN except for large datasets. Generally, the tested ML techniques outperformed MLR, but no single method was best for all nine datasets.
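A bare-bones version of the single-hidden-layer ELM described above: random input weights and biases, a tanh hidden layer, and a least-squares solve for the output weights only. The hidden-layer size, weight-scaling range, and toy data are illustrative assumptions.

```python
import numpy as np

class ELMRegressor:
    """Minimal extreme learning machine for regression."""

    def __init__(self, n_hidden=50, weight_scale=1.0, seed=0):
        self.n_hidden = n_hidden
        self.weight_scale = weight_scale      # range of the random weights, as discussed above
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        d = X.shape[1]
        self.W = self.rng.uniform(-self.weight_scale, self.weight_scale, (d, self.n_hidden))
        self.b = self.rng.uniform(-self.weight_scale, self.weight_scale, self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                       # random, untrained hidden layer
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)      # only output weights are fitted
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Toy usage: fit a noisy 1-D function
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
model = ELMRegressor(n_hidden=50).fit(X, y)
print("train RMSE:", np.sqrt(np.mean((model.predict(X) - y) ** 2)))
```

Because training reduces to one linear least-squares solve, the speed advantage over iteratively trained ANNs reported above follows directly from this structure.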

9.
To design a simple and efficient method for searching for optimal neural network structures, a shallow neural network search method based on an improved whale optimization algorithm (WOA) is proposed. The traditional WOA is first improved by modeling the individual preference behavior of whales during hunting and by a nonlinear weight-update mechanism for the movement of whale positions. The improved WOA is then used as the structure-search strategy for a shallow BP neural network, yielding a weight-and-threshold search optimization method for the optimal structure of shallow BP networks. Numerical experiments show that the improved WOA not only performs well in optimizing complex functions of different dimensions, but also that the optimal shallow BP network structure found by the improved WOA achieves better prediction accuracy and generalization in regression tasks.
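For reference, a plain (unimproved) whale optimization loop is sketched below on a simple sphere function; the encircling, spiral, and random-search phases follow the commonly described WOA scheme, while the paper's individual-preference behavior, nonlinear weight update, and coupling to BP network structure search are not included. Names and parameter values are illustrative.

```python
import numpy as np

def woa(objective, dim, n_whales=20, iters=200, lb=-5.0, ub=5.0, seed=0):
    """Simplified whale optimization loop (encircling / spiral / random-search phases)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_whales, dim))
    best = min(X, key=objective).copy()
    for t in range(iters):
        a = 2.0 * (1.0 - t / iters)                   # coefficient decreasing from 2 to 0
        for i in range(n_whales):
            A = 2.0 * a * rng.random() - a
            C = 2.0 * rng.random()
            if rng.random() < 0.5:
                if abs(A) < 1:                         # exploit: encircle the best solution so far
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                                  # explore: move relative to a random whale
                    rand = X[rng.integers(n_whales)]
                    X[i] = rand - A * np.abs(C * rand - X[i])
            else:                                      # bubble-net spiral around the best solution
                l = rng.uniform(-1.0, 1.0)
                X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2.0 * np.pi * l) + best
            X[i] = np.clip(X[i], lb, ub)
        cand = min(X, key=objective)
        if objective(cand) < objective(best):
            best = cand.copy()
    return best

# Toy usage: minimize the sphere function in 5 dimensions
best = woa(lambda x: float(np.sum(x ** 2)), dim=5)
print("best point:", np.round(best, 3), " value:", float(np.sum(best ** 2)))
```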

10.
To address the difficulty of online measurement of biomass concentration during marine lysozyme (ML) fermentation, and the fact that offline measurements cannot reflect the current state of the fermentation process, a soft-sensor modeling method based on an improved krill herd algorithm and an adaptive neuro-fuzzy inference system (HLKH-ANFIS) is proposed. First, an adaptive Lévy flight strategy is used to improve the traditional krill herd (KH) algorithm, enhancing its global search capability, and a hopping technique (HOT) is used to improve the position-update formula of the KH algorithm, enhancing its local search capability. The improved KH algorithm is then used to optimize the feedback of the adaptive neuro-fuzzy inference system, mitigating its over-correction and heavy computational load. Finally, an HLKH-ANFIS soft-sensor model is built for predicting biomass concentration during marine lysozyme fermentation. Simulation analysis shows that, compared with the KH-ANFIS model, the HLKH-ANFIS model has smaller errors and better predictive capability, and can meet the need for online prediction of key variables in ML fermentation.

11.
Proposes a data classification method based on the tolerant rough set, which extends the existing equivalent rough set. A similarity measure between two data objects is described by a distance function over all constituent attributes, and two objects are defined to be tolerant when their similarity measure exceeds a similarity threshold value. The determination of the optimal similarity threshold value is very important for accurate classification, so we determine it optimally by using a genetic algorithm (GA), where the goal of evolution is to balance two requirements: 1) as many tolerant objects as possible should be included in the same class; and 2) objects in the same class should be tolerant of one another as much as possible. After finding the optimal similarity threshold value, a tolerant set of each object is obtained and the data set is grouped into lower and upper approximation sets depending on the coincidence of their classes. We propose a two-stage classification method in which all data are classified by using the lower approximation at the first stage, and the data left unclassified at the first stage are classified again by using the rough membership functions obtained from the upper approximation set. We apply the proposed classification method to the handwritten numeral character classification problem and compare its classification performance and learning time with those of the feedforward neural network's backpropagation algorithm.
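The core tolerance relation is easy to state in code: two objects are tolerant when their attribute-wise distance (here a normalized Euclidean distance, as an assumption) stays within the threshold. The GA search for the optimal threshold and the two-stage lower/upper-approximation classifier from the paper are not reproduced in this sketch.

```python
import numpy as np

def tolerance_classes(X, threshold):
    """For each object, the indices of objects whose normalized distance is within the threshold."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    d = d / d.max()                                    # scale distances to [0, 1]
    return [np.flatnonzero(row <= threshold) for row in d]

rng = np.random.default_rng(6)
X = rng.random((10, 4))                                # 10 objects, 4 attributes (toy data)
for thr in (0.2, 0.4, 0.6):
    sizes = [len(c) for c in tolerance_classes(X, thr)]
    print(f"threshold {thr}: mean tolerance-class size {np.mean(sizes):.1f}")
```

The print-out makes the trade-off visible: a larger threshold merges more objects into each tolerance class, which is exactly the tension the GA fitness in the paper is designed to balance.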

12.
Intrigued by some recent results on impulse response estimation by kernel and nonparametric techniques, we revisit the old problem of transfer function estimation from input–output measurements. We formulate a classical regularization approach, focused on finite impulse response (FIR) models, and find that regularization is necessary to cope with the high variance problem. This basic, regularized least squares approach is then a focal point for interpreting other techniques, like Bayesian inference and Gaussian process regression. The main issue is how to determine a suitable regularization matrix (Bayesian prior or kernel). Several regularization matrices are provided and numerically evaluated on a data bank of test systems and data sets. Our findings based on the data bank are as follows. The classical regularization approach with carefully chosen regularization matrices shows slightly better accuracy, and clearly better robustness in estimating the impulse response, than the standard approach, the prediction error method/maximum likelihood (PEM/ML) approach. If the goal is to estimate a model of given order as well as possible, a low order model is often better estimated by the PEM/ML approach, and a higher order model is often better estimated by model reduction on a high order regularized FIR model estimated with careful regularization. Moreover, an optimal regularization matrix that minimizes the mean square error matrix is derived and studied. The importance of this result lies in that it gives the theoretical upper bound on the accuracy that can be achieved for this classical regularization approach.
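In this setting the regularized FIR estimate has the closed form θ̂ = (ΦᵀΦ + σ²P⁻¹)⁻¹Φᵀy, where P is the regularization matrix (kernel). The sketch below uses an exponentially decaying, TC-style kernel P[i, j] = c·λ^max(i, j); the "true" system, hyperparameter values, and noise level are illustrative assumptions, not taken from the paper's data bank.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(3)

# Input-output data from an assumed "true" second-order system, plus measurement noise
n, n_fir = 500, 50
u = rng.standard_normal(n)
y = lfilter([0.0, 0.5, 0.25], [1.0, -1.2, 0.57], u) + 0.1 * rng.standard_normal(n)

# Regression matrix of lagged inputs for an order-n_fir FIR model: y[t] ~ sum_k theta[k] u[t-k]
Phi = np.column_stack([np.concatenate([np.zeros(k), u[:n - k]]) for k in range(n_fir)])

# TC-style regularization matrix (kernel): P[i, j] = c * lam ** max(i, j)
c, lam, sigma2 = 1.0, 0.9, 0.01
idx = np.arange(n_fir)
P = c * lam ** np.maximum.outer(idx, idx)

theta_ls = np.linalg.lstsq(Phi, y, rcond=None)[0]          # unregularized FIR: high variance
theta_reg = np.linalg.solve(Phi.T @ Phi + sigma2 * np.linalg.inv(P), Phi.T @ y)

# Compare against the true impulse response of the simulated system
imp = np.zeros(n_fir)
imp[0] = 1.0
g_true = lfilter([0.0, 0.5, 0.25], [1.0, -1.2, 0.57], imp)
for name, th in (("least squares", theta_ls), ("regularized  ", theta_reg)):
    err = np.linalg.norm(th - g_true) / np.linalg.norm(g_true)
    print(f"{name}: relative impulse-response error {err:.3f}")
```

The kernel encodes the prior that the impulse response decays smoothly, which is what tames the variance of the high-order FIR estimate.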

13.
徐亮. 《微型电脑应用》, 2022, (1): 142-144, 149.
The connection thresholds and weights of a neural network directly affect the detection of duplicate records in a database. Existing methods cannot find the optimal connection thresholds and weights of the neural network, which leads to large deviations and low efficiency in duplicate-record detection. To obtain better duplicate-record detection results, a detection method based on a neural network optimized by a quantum-behaved particle swarm optimization algorithm is proposed. The current state of research on database duplicate-record detection is first analyzed, and ...

14.
Criteria for evaluating the classification reliability of a neural classifier and for accordingly making a reject option are proposed. Such an option, implemented by means of two rules that can be applied independently of the topology, size, and training algorithm of the neural classifier, allows one to improve classification reliability. It is assumed that a performance function P is defined which, taking into account the requirements of the particular application, evaluates the quality of the classification in terms of recognition, misclassification, and reject rates. Under this assumption the optimal reject threshold value, determining the best trade-off between reject rate and misclassification rate, is the one for which the function P reaches its absolute maximum. No constraints are imposed on the form of P other than those necessary for P to actually measure the quality of the classification process. The reject threshold is evaluated on the basis of some statistical distributions characterizing the behavior of the classifier when operating without the reject option; these distributions are computed once the training phase of the net has been completed. The method has been tested with a neural classifier devised for handprinted and multifont printed characters, using a database of about 300,000 samples. Experimental results are discussed.
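The idea of picking the reject threshold that maximizes a performance function P can be illustrated with a simple sweep: accept a classification only when the classifier's confidence exceeds the threshold, and score each threshold by a weighted combination of recognition, misclassification, and reject rates. The weights and the synthetic confidence scores below are illustrative assumptions, not the paper's distributions.

```python
import numpy as np

def performance(recognized, misclassified, rejected, c_err=3.0, c_rej=1.0):
    """Illustrative performance function P: reward correct answers, penalize errors and rejects."""
    return recognized - c_err * misclassified - c_rej * rejected

rng = np.random.default_rng(4)
n = 5000
correct = rng.random(n) < 0.9                       # whether the classifier's top choice is right
# Synthetic confidence scores: correct answers tend to score higher than wrong ones
conf = np.where(correct, rng.beta(8, 2, n), rng.beta(3, 4, n))

best_t, best_p = 0.0, -np.inf
for t in np.linspace(0.0, 1.0, 101):
    accepted = conf >= t
    rec = np.mean(accepted & correct)
    mis = np.mean(accepted & ~correct)
    rej = np.mean(~accepted)
    p = performance(rec, mis, rej)
    if p > best_p:
        best_t, best_p = t, p

print(f"optimal reject threshold: {best_t:.2f} (P = {best_p:.3f})")
```

Raising the misclassification cost c_err pushes the optimal threshold up, which mirrors how the application requirements enter through P in the abstract above.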

15.
Schmitt, Michael. Machine Learning, 1999, 37(2): 131-141.
A neural network is said to be nonoverlapping if there is at most one edge outgoing from each node. We investigate the number of examples that a learning algorithm needs when using nonoverlapping neural networks as hypotheses. We derive bounds for this sample complexity in terms of the Vapnik-Chervonenkis dimension. In particular, we consider networks consisting of threshold, sigmoidal and linear gates. We show that the class of nonoverlapping threshold networks and the class of nonoverlapping sigmoidal networks on n inputs both have Vapnik-Chervonenkis dimension Ω(n log n). This bound is asymptotically tight for the class of nonoverlapping threshold networks. We also present an upper bound for this class where the constants involved are considerably smaller than in a previous calculation. Finally, we argue that the Vapnik-Chervonenkis dimension of nonoverlapping threshold or sigmoidal networks cannot become larger by allowing the nodes to compute linear functions. This sheds some light on a recent result that exhibited neural networks with quadratic Vapnik-Chervonenkis dimension.

16.
We address the problem of locating multiple nodes in a wireless sensor network with the use of received signal strength (RSS) measurements. In RSS-based positioning, the transmit power and path-loss factor are two environment-dependent parameters which may be uncertain or unknown. For unknown transmit powers, we devise two-step weighted least squares (WLS) and maximum likelihood (ML) algorithms for node localization. The mean square error of the former is analyzed in the presence of zero-mean white Gaussian disturbances. When both transmit powers and path-loss factors are unavailable, two nonlinear least squares estimators, namely, the direct ML approach and a combination of linear least squares and ML algorithms, are developed. Numerical examples are also included to evaluate the localization accuracy of the proposed estimators by comparing with two existing node positioning methods and the Cramér–Rao lower bound.
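For orientation, here is a basic RSS positioning sketch for the simplest case, in which the transmit power and path-loss factor are known and a single node is located by nonlinear least squares on the log-distance model. The harder cases with unknown parameters, which are the paper's actual contribution, are not covered; the anchor layout and channel parameters are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(5)

# Anchor (receiver) positions and the unknown node position, in meters (illustrative)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, 12.0]])
node = np.array([3.0, 7.0])

P0, alpha, sigma = -40.0, 3.0, 2.0        # dBm at 1 m, path-loss exponent, shadowing std (dB)

# Simulated RSS readings at each anchor under the log-distance path-loss model
d_true = np.linalg.norm(anchors - node, axis=1)
rss = P0 - 10.0 * alpha * np.log10(d_true) + sigma * rng.standard_normal(len(anchors))

def residuals(p):
    """Mismatch (in dB) between measured RSS and the model prediction at position p."""
    d = np.linalg.norm(anchors - p, axis=1)
    return rss - (P0 - 10.0 * alpha * np.log10(d))

est = least_squares(residuals, x0=np.array([5.0, 5.0])).x
print("estimated position:", np.round(est, 2), " error (m):", round(float(np.linalg.norm(est - node)), 2))
```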

17.
Limits of Learning-Based Superresolution Algorithms
Learning-based superresolution (SR) is a popular SR technique that uses application-dependent priors to infer the missing details in low resolution images (LRIs). However, its performance still deteriorates quickly when the magnification factor is only moderately large. This leads us to an important problem: "Do limits of learning-based SR algorithms exist?" This paper is the first attempt to shed some light on this problem when the SR algorithms are designed for general natural images. We first define an expected risk for the SR algorithms that is based on the root mean squared error between the superresolved images and the ground truth images. Then, utilizing the statistics of general natural images, we derive a closed-form estimate of the lower bound of the expected risk. The lower bound only involves the covariance matrix and the mean vector of the high resolution images (HRIs) and hence can be computed by sampling real images. We also investigate the number of samples sufficient to guarantee an accurate estimate of the lower bound. By computing the curve of the lower bound with respect to the magnification factor, we can estimate the limits of learning-based SR algorithms, at which the lower bound of the expected risk exceeds a relatively large threshold. We perform experiments to validate our theory, and based on our observations we conjecture that the limits may be independent of the size of either the LRIs or the HRIs.

18.
Declustering techniques reduce query response time through parallel I/O by distributing data among multiple devices. Except for a few cases, it is not possible to find declustering schemes that are optimal for all spatial range queries. As a result, most of the research on declustering has focused on finding schemes with low worst-case additive error. However, additive-error-based schemes have many limitations, including a lack of progressive guarantees and the existence of small non-optimal queries. In this paper, we take a different approach and propose threshold-based declustering. We investigate the threshold k such that all spatial range queries covering at most k buckets are optimal. An upper bound on the threshold is analyzed using bound diagrams, and a number-theoretic algorithm is proposed to find schemes with high threshold values. Threshold-based schemes have many advantages: they have low worst-case additive error, provide progressive guarantees by dividing larger queries into subqueries with at most k buckets, can be used to compare replicated declustering schemes, and render many large complementary queries optimal.
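To make the threshold notion concrete, the sketch below computes, by brute force, the largest k for which every axis-parallel range query of at most k grid cells is answered optimally (in ceil(cells / disks) parallel accesses) under a simple disk-modulo assignment. The scheme, grid size, and disk count are illustrative; the paper's bound diagrams and number-theoretic search are not reproduced.

```python
import numpy as np
from itertools import product

def threshold(scheme, grid, n_disks):
    """Largest k such that every axis-parallel range query with at most k cells is optimal,
    i.e. needs only ceil(cells / n_disks) parallel accesses under the given scheme."""
    best = grid * grid                                   # if no query fails, all sizes are optimal
    for r0, r1, c0, c1 in product(range(grid), repeat=4):
        if r1 < r0 or c1 < c0:
            continue
        cells = [(r, c) for r in range(r0, r1 + 1) for c in range(c0, c1 + 1)]
        loads = np.bincount([scheme(r, c) % n_disks for r, c in cells], minlength=n_disks)
        optimal = loads.max() == -(-len(cells) // n_disks)   # ceiling division
        if not optimal:
            best = min(best, len(cells) - 1)
    return best

# Disk-modulo declustering of an 8x8 grid over 5 disks (illustrative choice of scheme)
print("threshold:", threshold(lambda r, c: r + c, grid=8, n_disks=5))
```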

19.
Adding a coded signal to the optimal control signal is an effective way to detect replay attacks in cyber-physical systems (CPS), but it degrades the control performance of the system. How to reduce this control performance loss while guaranteeing the replay-attack detection rate is a problem worth studying. To this end, a control-signal coding detection method based on auxiliary-information compensation is proposed, in which an auxiliary signal is added to the measurements to compensate for the effect of the coded control signal on the optimal state estimate. First, the detectability of replay attacks under this scheme is proved, and a quantitative relation between the upper bound of the detection rate and the threshold of the detection function is derived. Second, it is proved that, with the auxiliary signal, the control signal of the system is the same as when no coding is added, so the coded signals of previous time steps do not accumulate; the control performance loss at the current time therefore depends only on the magnitude of the current coded signal. Finally, the relation among the covariance matrix of the coded signal, the detection rate, and the detection threshold is formulated as an optimization problem, and a method for computing the variance of the coded signal is given. Simulation results show that the proposed method can effectively detect replay attacks with only a small loss of control performance.

20.
Aniruddha, Joy. Performance Evaluation, 2005, 59(4): 337-366.
In this paper, we investigate the performance of routing and rate allocation (RRA) algorithms in rate-based multi-class networks. On the arrival of a connection request, an RRA algorithm selects a route for the connection and allocates an appropriate rate on the route; failing this, it blocks the connection request. We measure the performance of an RRA algorithm in terms of its minimum weighted carried traffic. This performance criterion encompasses two widely used performance criteria, namely, weighted carried traffic and minimum carried traffic. We derive an upper bound on the minimum weighted carried traffic of any RRA algorithm. The bound can be computed by solving a linear program. Moreover, we show that the bound is achieved asymptotically, when the offered load and the link capacities are large, by a Partitioning RRA algorithm. Therefore the bound can be used as a performance benchmark for any RRA algorithm. We observe that the proposed Partitioning RRA algorithm, though asymptotically optimal, performs poorly at very low loads. We investigate the cause of this undesirable behaviour and obtain two improved asymptotically optimal RRA algorithms.
