Similar Articles
20 similar articles found (search time: 234 ms)
1.
Loads are classified according to the characteristics of their load curves and then described and forecast with simple models. Forecasting results on users' monthly electricity consumption data show that the new method yields more accurate results and reflects load variation more faithfully.

2.
Daily power load forecasting is a fundamental task in electricity market operation. Most current forecasting methods apply the same model and algorithm to every time period, paying little attention to how load composition and characteristics vary across periods. A new time-segmented, multi-model combination forecasting method is proposed. Based on changes in load composition and characteristics, the 96 daily load points are divided into several time segments, and within each segment a weighted combination of sub-models (multiple linear regression, grey prediction, support vector machine, and neural network) produces the forecast. Forecasting results for the daily 96-point load curve of a municipal power grid in East China show that the method performs well, with a daily root-mean-square forecasting error within 1.78%, adequately meeting the load forecasting requirements of practical power systems.
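The weighted multi-model combination can be sketched as follows; the sub-model outputs and the equal weights are illustrative placeholders, not values from the paper:

```python
import numpy as np

# Hypothetical sub-model forecasts for one time segment (MW).
# In the paper these come from regression, grey prediction, SVM and NN
# models; here they are illustrative constants.
forecasts = {
    "linear_regression": np.array([820.0, 832.0, 845.0]),
    "grey_prediction":   np.array([815.0, 828.0, 850.0]),
    "svm":               np.array([824.0, 830.0, 848.0]),
    "neural_network":    np.array([818.0, 834.0, 846.0]),
}

def combine(forecasts, weights):
    """Weighted combination forecast: sum_i w_i * f_i, with sum_i w_i = 1."""
    w = np.array([weights[k] for k in forecasts])
    f = np.stack(list(forecasts.values()))
    return w @ f

# Equal weights as a placeholder; the paper derives segment-specific weights.
weights = {k: 0.25 for k in forecasts}
combined = combine(forecasts, weights)
```

With equal weights the combination reduces to the mean of the sub-model forecasts; segment-specific weights would emphasize whichever sub-model tracks that segment's load pattern best.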

3.
To address problems in grid energy-storage systems such as the inability to connect adaptively and unbalanced resource allocation, this paper studies an adaptive connection mode for grid energy-storage systems based on a load model. A support vector machine forecasting method based on grey relational analysis predicts the electricity load at the connection point; on this basis, an optimal configuration model for distributed connection points is built and solved with a particle swarm algorithm, and adaptive connection of the grid energy-storage system is realized according to the solution. Experimental results show that, compared with before the connection mode was applied, the load-bearing-capacity curve of the connection point roughly coincides with the discharge curve of the energy-storage system and resources are allocated evenly, demonstrating the effectiveness and feasibility of the connection mode.
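Deng's grey relational degree, the core quantity in the grey relational analysis mentioned above, can be sketched as follows; this is a minimal version, and the paper's coupling of the degree with the SVM forecaster is not reproduced:

```python
import numpy as np

def grey_relational_degree(reference, candidates, rho=0.5):
    """Deng's grey relational degree of each candidate series to the reference.

    reference:  1-D array, the target series (e.g. connection-point load)
    candidates: 2-D array, one influencing-factor series per row
    rho:        distinguishing coefficient, conventionally 0.5
    """
    ref = np.asarray(reference, dtype=float)
    cand = np.asarray(candidates, dtype=float)
    # Normalize each series by its own mean to remove scale differences.
    ref = ref / ref.mean()
    cand = cand / cand.mean(axis=1, keepdims=True)
    delta = np.abs(cand - ref)                        # pointwise deviations
    dmin, dmax = delta.min(), delta.max()
    xi = (dmin + rho * dmax) / (delta + rho * dmax)   # relational coefficients
    return xi.mean(axis=1)                            # degree per candidate
```

A candidate identical to the reference scores 1.0; the less similar the shape, the lower the degree, which is how such an analysis would rank influencing factors before feeding the strongest ones to a forecaster.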

4.
Accurate power load forecasting is of great significance for the stable operation of a power system. When a traditional data-subspace algorithm is used for load forecasting, accuracy is low because the nonlinearity and time-varying nature of the power system are not taken into account. A load forecasting method based on an improved data-subspace algorithm is therefore proposed: a feedback factor is added to the subspace forecasting equation, a forgetting factor is applied to the historical load data, a particle swarm algorithm searches for the optimal values of both factors, and the optimized values are substituted into the improved subspace forecasting model to compute the forecast. Experimental results show that the improved algorithm raises forecasting accuracy, with satisfactory effect.
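The forgetting-factor idea, discounting older load samples so the model tracks a time-varying system, can be illustrated with a toy forecaster. This is only a stand-in for the paper's subspace model; `lam` plays the role of the forgetting factor that the paper tunes with particle swarm optimization:

```python
import numpy as np

def forgetting_weights(n, lam=0.95):
    """Exponential forgetting: the most recent of n samples gets weight 1,
    each older sample is discounted by a further factor lam (0 < lam <= 1)."""
    return lam ** np.arange(n - 1, -1, -1)

def weighted_mean_forecast(history, lam=0.95):
    """Naive one-step forecast as a forgetting-weighted mean of the history.
    A stand-in for the paper's subspace model, which is not reproduced here."""
    w = forgetting_weights(len(history), lam)
    return float(np.dot(w, history) / w.sum())
```

With `lam = 1` all samples count equally; smaller `lam` pulls the forecast toward recent behavior, which is the point of adding a forgetting factor to a time-varying system.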

5.
To confront the increasingly severe social and environmental situation and dwindling resources, and to enable different energy structures to be optimally configured, this paper applies an improved adaptive genetic algorithm to energy forecasting. Drawing on the large volume of power load data provided by the metering automation system, loads are forecast using big-data techniques such as user-group analysis and identification together with the improved adaptive genetic algorithm, and different industries, departments, and energy structures are analysed in depth. Compared with several traditional forecasting algorithms, the improved adaptive genetic algorithm forecasts more accurately. The results show that forecasting energy demand in advance and planning the energy structure comprehensively are necessary: they can lead energy use into an entirely new mode, open a new era of the energy internet, and lay the groundwork for the transformation and upgrading of the energy structure. Comprehensive energy planning can alleviate serious problems such as the present energy crisis and environmental pollution.

6.
Optimizing the short-term load forecasting model of a power-supply grid improves both the accuracy and the robustness of the forecasts. Existing models meet the speed requirement, but the accuracy and stability of their forecasts are not guaranteed. To obtain more accurate and stable forecasts, an extreme learning machine (ELM) forecasting model optimized by the bacterial foraging algorithm is proposed. Training and forecasting sample sets are first formed from the load data; the bacterial foraging optimization algorithm then tunes the uncertain parameters of the ELM model, and the improved model is used for load forecasting. Simulation results show that the forecasting accuracy and stability of the bacterial-foraging-optimized ELM model are both better than those of traditional models, and the algorithm is highly practical.
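A minimal extreme learning machine, the base model that the bacterial foraging algorithm tunes in the abstract, looks roughly like this; the bacterial-foraging optimization itself is not shown, and the sizes are illustrative:

```python
import numpy as np

class ELM:
    """Minimal single-hidden-layer extreme learning machine (regression).
    Hidden weights are random and fixed; only the output weights are solved,
    by least squares via the pseudo-inverse."""

    def __init__(self, n_hidden=20, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # Output weights by Moore-Penrose pseudo-inverse (least squares).
        self.beta = np.linalg.pinv(H) @ y
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta
```

Because only `beta` is solved analytically, training is fast; the "uncertain parameters" an outer optimizer like bacterial foraging would tune are the random hidden weights and biases.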

7.
Load forecasting based on historical data and deep learning is widely used in electricity-centred integrated energy systems to improve forecasting accuracy. However, when a new user appears in a region, its historical load data are often extremely scarce and deep learning is hard to apply. To address this, this paper proposes a forecasting mechanism based on load-feature extraction and transfer learning. First, using the historical load data of source-domain users, a clustering algorithm and a gated recurrent unit (GRU) network are combined to build a feature-extraction and classification model for the source-domain data; then, …

8.
Buildings account for about 30% of China's total energy consumption, of which centralized HVAC systems account for more than half. To improve energy efficiency, this paper proposes a neural-network predictive control strategy for chilled-water plant systems based on load forecasting. A neural network serves as the optimizing feedback controller, with meeting the load demand and the system coefficient-of-performance requirement as the optimization objectives. The calculus of variations is combined with stochastic gradient descent for receding-horizon optimization of the network weights, which both resolves the sensitivity to random disturbances and uncertainty caused by the traditional variational method's open-loop control, and avoids the "curse of dimensionality" of nonlinear optimization based on dynamic programming. Taking the air-conditioning system of a state-owned enterprise's research building in Beijing as the study object, experiments show that compared with a PID control algorithm, the proposed neural-network predictive control strategy saves about 8.57% of total system energy, overcomes various changes and uncertainties during control, exhibits better dynamic and steady-state performance, and, with a moderate memory footprint and low computational cost, is easy to implement in engineering practice.

9.
Research on forecasting users' daily load curves based on ANN
Taking a typical large power consumer, an iron and steel enterprise, as the study object and using load data from a wireless load-monitoring system, two BP neural networks were built based on the BP network theory of artificial neural networks (ANN): one with 24 outputs and one with a single output. Combined with different sliding time windows, the performance of the different forecasting models was compared, and hourly load forecasting for the next day was achieved. Comparison with actual load values shows that forecasting a large consumer's daily load curve with the BP algorithm gives satisfactory results.
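The sliding-time-window sample construction for a 24-point-output network can be sketched as follows; the window length of 48 hours is an assumption for illustration, not a value from the paper:

```python
import numpy as np

def sliding_windows(series, window, horizon=24):
    """Build (input, target) pairs for next-day load forecasting:
    each input is `window` consecutive hourly loads, each target the
    following `horizon` hours (the 24-point output in the abstract)."""
    X, Y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])
        Y.append(series[i + window:i + window + horizon])
    return np.array(X), np.array(Y)
```

A single-output network would instead use `horizon=1` and be applied 24 times, which is exactly the design trade-off the two BP networks in the abstract compare.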

10.
Power system load is affected by many factors, so load curves are complex and short-term forecasting accuracy is hard to guarantee. Based on load characteristics, the daily load is divided into several load patterns, the day is partitioned into time segments according to the trend of the daily load curve, and a neural network forecasts the load in each segment, completing the daily forecast. Results on the Heyuan power grid show that the method substantially improves forecasting accuracy and meets short-term load forecasting requirements.

11.
With the exponential growth of internet services over the past decades, traffic to major websites has risen sharply. Massive numbers of user requests mean that the request rate of a popular site can surge within seconds. Once servers cannot sustain such high concurrency, the resulting network congestion and latency severely degrade the user experience. Load balancing is a key component of highly available network infrastructure: a load balancer introduced at the back end distributes the workload across multiple servers, relieving the enormous pressure that massive concurrent requests place on any single server and improving the performance and reliability of back-end servers and databases. Nginx, a high-performance HTTP and reverse-proxy server, is increasingly used in practice. This paper analyses the load-balancing architecture of the Nginx server, studies its default weighted round-robin algorithm, and proposes an improved dynamic load-balancing algorithm that collects load information in real time and recomputes and reassigns weights. Experimental comparison of the algorithms shows that the improved algorithm effectively raises the performance of the server cluster.
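Nginx's default weighted round-robin is the "smooth" variant; a minimal re-implementation (server names and weights are illustrative) behaves as follows:

```python
# Smooth weighted round-robin, as used by Nginx's default upstream balancer:
# each pick raises every server's accumulator by its configured weight,
# selects the largest accumulator, then subtracts the total weight from it.

class Server:
    def __init__(self, name, weight):
        self.name = name
        self.weight = weight   # configured (static) weight
        self.current = 0       # smoothing accumulator

def pick(servers):
    total = sum(s.weight for s in servers)
    for s in servers:
        s.current += s.weight
    best = max(servers, key=lambda s: s.current)
    best.current -= total
    return best.name

servers = [Server("a", 5), Server("b", 1), Server("c", 1)]
order = [pick(servers) for _ in range(7)]
```

For weights 5:1:1 the first seven picks come out as a, a, b, a, c, a, a: the heavy server's turns are interleaved rather than issued as a burst, which is the "smooth" property. A dynamic variant like the paper's would periodically recompute `weight` from measured server load.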

12.
Network user behavior analysis based on the CURE algorithm
Network user behaviour is analysed from a security perspective. A user-behaviour vector data model based on NetFlow statistics is built, an analysis framework for network user behaviour is proposed, and an analysis workflow is established. For the large database storing network user behaviour, a suitable clustering algorithm, CURE, is selected and improved for the practical application. Experimental results show that the improved CURE algorithm not only clusters well but also distinguishes normal from abnormal behaviour; evaluated against a harmful-behaviour assessment system, the detection rate of harmful behaviour among the clustered anomalies is very high. For incremental data on a live network, an incremental mining algorithm is also given, meeting the needs of real-time network analysis.

13.
Modern mobile devices are increasingly capable of connecting simultaneously to multiple access networks with different characteristics. Restricted coverage combined with user mobility varies the availability of networks to a mobile device. Most proposed solutions for such an environment are reactive in nature, such as performing a vertical handover to the network offering the highest bandwidth; but the cost of the handover may not be justified if that network is only available for a short time. Knowledge of future network availability and capabilities is the basis for proactive schemes that improve network selection and utilization. We have previously proposed a prediction model that can use any available context, such as GSM Location Area, WLAN presence, or even whether the power cable is plugged in, to predict network availability. As it may not be possible to sense all of the context variables that influence future network availability, in this paper we introduce a new, generic model incorporating a hidden variable to account for this. Specifically, we propose a Dynamic Bayesian Network based context prediction model to predict network availability. Predictions of WLAN availability with real user data collected in our experiments show an improvement of 20% or more compared to both of our earlier proposals of order-1 and order-2 semi-Markov models.  相似文献
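A toy first-order Markov predictor of WLAN availability, in the spirit of the order-1 baseline the abstract compares against, can be sketched as follows; the paper's Dynamic Bayesian Network with a hidden variable is not reproduced here, and the trace is invented:

```python
import numpy as np

# States: 0 = WLAN unavailable, 1 = WLAN available.

def fit_transition_matrix(trace):
    """Estimate P[s, t] = P(next = t | current = s) from an observed trace.
    Assumes both states occur at least once as a 'current' state."""
    counts = np.zeros((2, 2))
    for s, t in zip(trace, trace[1:]):
        counts[s, t] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def predict_next(trace, P):
    """Most probable next state given the current one."""
    return int(np.argmax(P[trace[-1]]))

trace = [1, 1, 1, 0, 1, 1, 1, 0, 1, 1]
P = fit_transition_matrix(trace)
```

A DBN-based model generalizes this by conditioning on extra observed context (location area, power state) plus a hidden variable for the context that cannot be sensed.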

14.
Computer Networks, 2007, 51(10): 2645-2676
As ICT services become more ubiquitous and mobile and access technologies grow more heterogeneous and complex, two related needs are becoming increasingly important: (i) users need to be able to configure and personalize their services with minimal effort; (ii) operators want to engineer and manage their networks easily and efficiently, limiting human agency as far as possible. We propose a possible solution to reach these goals. Our vision, developed in the so-called Simplicity project, is based on a personalization device which, together with a brokerage framework, offers transparent service configuration and runtime adaptation according to user preferences and computing/networking context conditions. The capabilities of this framework can be exploited: (i) on the user side, to personalize services, to improve the portability of services over heterogeneous terminals and devices, and to adapt services to available networking and terminal technologies; (ii) on the network side, to give operators more powerful tools to define new solutions for distributed, technology-independent, self-organizing, autonomic networking systems. Such systems could be designed to react autonomously to changing contexts and environments. In this paper, we first describe the main aspects of the Simplicity solution. We then show that our approach is indeed viable. To prove this point, we present an application that exploits the capabilities of the Simplicity system: a mechanism to steer mobile users towards the most appropriate point of access to the network, taking into account both user preferences and network context. We use simulation to evaluate the performance of this procedure in a specific case study whose aim is to balance the load in an 802.11b access-network scenario.
The numerical results show the effectiveness of the proposed procedure compared to a legacy scenario and to another solution from the literature. To give further proof of the feasibility of our solution, we also designed and implemented a real prototype. The prototype enables not only the load to be balanced among different 802.11 access points, but also network and application services to be differentiated as a function of user profiles and network load. The main aspects of this prototype are presented in this paper.

15.
We study the problem of optimal integrated dynamic pricing and radio resource management, in terms of resource allocation and call admission control, in a WCDMA network. In such an interference-limited network, one user's resource usage also degrades the utility of others. A new parameter, the noise rise factor, which indicates the amount of interference generated by a call, is suggested as a basis for setting prices that make users accountable for the congestion externality of their usage. The methods of dynamic programming (DP) are unsuitable for problems with large state spaces due to the associated "curse of dimensionality." To overcome this, we solve the problem using a simulation-based neuro-dynamic programming (NDP) method with an action-dependent approximation architecture. Our results show that the proposed optimal policy provides significant average-reward and congestion improvement over conventional policies that charge users based on their load factor.  相似文献

16.
Cloud computing is a large network infrastructure where owners, customers, and authorized third-party users can store and access their information quickly. Its use has driven a rapid increase of information in every field and the need for a centralized location for efficient processing. Today the cloud is heavily affected by internal user threats; sensitive applications in banking, hospitals, and business are the most exposed to threats from real users. An intruder presents as a user and becomes a member of the network; once an insider, they try to attack or steal sensitive data during information sharing or conversation. Identifying the insider threat in a cloud network is a major issue in today's technological development: when data are lost, it is hard to tell which cloud users were compromised, privacy and security are not ensured, and use of the cloud can no longer be trusted. Several solutions exist for the external security of a cloud network, but insider (internal) threats still need to be addressed. In this research work, we focus on identifying an insider attack using artificial-intelligence techniques. An insider attack is possible through the nodes of weak users' systems: the attacker logs in with a weak user id, connects to the network, and poses as a trusted node; they can then easily attack and steal information as an insider, and identifying them is very difficult. Such attacks need intelligent solutions. Machine learning is widely used for security problems, but existing approaches still fall short of classifying attackers accurately, and this information-hijacking problem motivates young researchers to provide a solution for internal threats. In our proposed work, we track attackers using user-interaction behaviour patterns and a deep learning technique.
The mouse movements, clicks, and keystrokes of the real user are stored in a database. A deep belief neural network is built from restricted Boltzmann machines (RBMs), so that each RBM layer communicates with the previous and subsequent layers. The result is evaluated with a Cooja simulator in a cloud environment; accuracy and F-measure are greatly improved over the existing long short-term memory and support vector machine approaches.  相似文献
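A minimal restricted Boltzmann machine with a single contrastive-divergence (CD-1) update sketches the building block of the deep belief network described above; dimensions, data, and hyperparameters are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Binary RBM trained with one-step contrastive divergence (CD-1)."""

    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0, 0.01, size=(n_visible, n_hidden))
        self.a = np.zeros(n_visible)   # visible bias
        self.b = np.zeros(n_hidden)    # hidden bias
        self.lr = lr

    def cd1(self, v0):
        """One CD-1 update on a batch of binary visible vectors v0;
        returns the mean squared reconstruction error."""
        ph0 = sigmoid(v0 @ self.W + self.b)            # P(h=1 | v0)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        pv1 = sigmoid(h0 @ self.W.T + self.a)          # reconstruction
        ph1 = sigmoid(pv1 @ self.W + self.b)
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ ph0 - pv1.T @ ph1) / n
        self.a += self.lr * (v0 - pv1).mean(axis=0)
        self.b += self.lr * (ph0 - ph1).mean(axis=0)
        return float(np.mean((v0 - pv1) ** 2))
```

In a deep belief network, RBMs like this one would be stacked: each trained layer's hidden activations become the visible input of the next layer, which is the "communicates with the previous and subsequent layers" structure the abstract mentions.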

17.
In a two-tier cellular network of macrocells and femtocells, user locations are highly random in both space and time, posing many challenges for resource allocation and interference management. To adapt resource allocation to this randomness, raise the data rate of cell-edge users in a co-channel downlink heterogeneous cellular network, and achieve better load balancing, a scheme is proposed that uses an improved bat algorithm to set the femtocell cell-range-expansion (CRE) bias dynamically in real time. The scheme relieves the hotspot load on macro base stations and increases network capacity, so that users select appropriate base stations, power resources are used rationally, and the load is balanced. Simulation results show that, compared with existing schemes, the proposed scheme improves cell-edge data rates and energy efficiency while preserving macrocell performance, achieving better load balancing.

18.
Based on the 3G TS 25.213 protocol proposed by 3GPP, this paper provides a simulation method for WCDMA physical-layer modulation and demodulation and, through an example, presents the WCDMA system's processing flow for source messages. It offers a fairly intuitive means of verification for research such as bit-error-rate analysis of WCDMA signal demodulation and interference analysis.

19.
Computer Networks, 2007, 51(12): 3380-3391
The ability to reserve network bandwidth is a critical factor for the success of high-performance grid applications. Reservation of lightpaths in dynamically switched optical networks facilitates guaranteed bandwidth. However, reservation of bandwidth can often lead to bandwidth fragmentation which significantly reduces system utilization and increases the blocking probability of the network. An interesting approach to mitigating this problem is to induce quasi-flexibility in the user requests. A smart scheduling strategy can then exploit this quasi-flexibility and optimize bandwidth utilization. However, there has to be an incentive for flexibility from the user’s perspective as well. In this paper, we explore how the network service provider (NSP) can influence user flexibility by dynamically engineering pricing incentives. Ultimately, user flexibility will lead to efficient network utilization, reduce the price for the users, and increase the revenue for the NSP.  相似文献   

20.
Handwritten Chinese character recognition is the foundation of handwritten Chinese input. Current handwriting input methods on smart devices cannot adjust their recognition model dynamically to a user's writing habits so as to raise recognition accuracy. Building on the latest deep learning algorithms and trained models, a design method is proposed for a personalized handwritten Chinese input system based on real-time collection of the user's handwriting samples. The collected characters serve as incremental samples, and the recognition model trained on the server is retrained with them, so that the model adapts better to that user's writing habits and the recognition rate of the input system improves. Finally, on the basis of this method and a newly designed deep residual network, comparative experiments on handwritten character recognition were carried out. The results show that retraining with samples collected in real time raises the recognition rate of the model considerably, better meeting users' needs for handwritten Chinese input on smart devices.
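The incremental-retraining idea can be illustrated with a toy classifier: train on generic data first, then fine-tune with a few user-collected samples at a lower learning rate. A softmax classifier stands in for the paper's deep residual network; all data and shapes are illustrative:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train(W, X, y, lr, epochs):
    """Gradient-descent training of a linear softmax classifier.
    Calling it a second time with a smaller lr on user samples is the
    fine-tuning (incremental retraining) step."""
    Y = np.eye(W.shape[1])[y]                 # one-hot targets
    for _ in range(epochs):
        P = softmax(X @ W)
        W -= lr * X.T @ (P - Y) / len(X)      # gradient step
    return W

def accuracy(W, X, y):
    return float((np.argmax(X @ W, axis=1) == y).mean())
```

The personalization loop in the abstract maps onto this as: server-side base training, then `train(W, user_X, user_y, lr=small, ...)` whenever enough new user samples arrive, so the model drifts toward that user's writing style without forgetting the base weights entirely.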

Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号