2.
Selection of typical meteorological years and typical representative years and their influence on building energy consumption. Total citations: 6 (self: 0, others: 6)
The selection principles of the typical meteorological year (TMY) and the typical representative year (TRY), together with several common selection methods, are introduced. Different methods apply different weighting factors to the meteorological parameters and treat the continuity of the weather data differently. An application model that splits total solar radiation into direct and diffuse components is described, and, based on Hong Kong weather data, a TMY and a TRY for Hong Kong are computed and selected. To verify how the TMY and TRY obtained by different methods affect the object and system under study, a dynamic energy simulation of an example building was performed. The results show that different TMYs cause only small deviations in the simulation results, whereas the choice of TRY has a larger influence; selecting a TMY computed with an appropriate method is therefore important for ensuring the correctness of simulation-based assessments.
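A common basis for the weighted TMY-selection methods the abstract alludes to is the Finkelstein-Schafer (FS) statistic: each candidate month's empirical CDF is compared against the long-term CDF of that calendar month, and a weighted sum over parameters ranks the candidates. The following sketch is illustrative; the weights and helper names are assumptions, not taken from the paper.

```python
# Sketch of the Finkelstein-Schafer (FS) statistic used by many TMY
# selection procedures. For one weather parameter, FS is the mean
# absolute difference between the candidate month's empirical CDF and
# the long-term CDF; a weighted sum over parameters ranks candidates.
# Weights and data here are illustrative only.

def empirical_cdf(sorted_values, x):
    """Fraction of sorted_values that are <= x."""
    return sum(1 for v in sorted_values if v <= x) / len(sorted_values)

def fs_statistic(candidate, long_term):
    """Mean absolute CDF difference over the candidate's observations."""
    cand_sorted = sorted(candidate)
    lt_sorted = sorted(long_term)
    diffs = [abs(empirical_cdf(cand_sorted, x) - empirical_cdf(lt_sorted, x))
             for x in cand_sorted]
    return sum(diffs) / len(diffs)

def weighted_fs(candidate_by_param, long_term_by_param, weights):
    """Weighted sum of FS statistics over several weather parameters."""
    return sum(w * fs_statistic(candidate_by_param[p], long_term_by_param[p])
               for p, w in weights.items())
```

A candidate month identical to the long-term record scores 0; larger scores mean a less representative month.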
3.
By compiling and analyzing electricity-consumption records of a water-source heat pump system in an office building in Shenyang together with meteorological data, the variation characteristics of the system's electricity consumption and its dependence on weather factors were studied. Correlation analysis was used to quantify the correlation coefficients between the system's electricity consumption and each meteorological factor. The results show that electricity consumption is concentrated in the heating season and is negatively correlated with outdoor dry-bulb temperature, precipitation, relative humidity, and sunshine duration. Outdoor dry-bulb temperature and precipitation have a significant influence on the system's electricity consumption, whereas relative humidity and sunshine duration do not.
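The correlation coefficients the study reports are presumably Pearson coefficients between daily electricity use and each weather factor. A minimal sketch, with invented numbers standing in for the Shenyang data:

```python
# Pearson correlation between daily electricity use and one weather
# factor. The data below are made up for illustration: colder days
# drive more heat-pump electricity use, so the correlation with
# outdoor dry-bulb temperature comes out negative.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

temp_c = [-10.0, -5.0, 0.0, 5.0, 10.0]        # outdoor dry-bulb temperature
kwh = [520.0, 470.0, 400.0, 350.0, 300.0]     # daily electricity use
r = pearson_r(temp_c, kwh)                     # strongly negative
```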
6.
Comparative study of direct-diffuse solar radiation separation models: the Beijing area as a case study. Total citations: 1 (self: 0, others: 1)
In building energy simulation and solar building system design, hourly direct and diffuse solar radiation are among the most important basic parameters. Because radiation observations are scarce in China, hourly direct and diffuse data are hard to obtain; many researchers have studied the problem and proposed dozens of direct-diffuse separation models. Using measured global and diffuse radiation for Beijing over the three years 2009-2011, five representative separation models were verified: the Erbs model, the Orgill and Hollands model, the Tsinghua University stochastic weather model, the Udagawa model, and the Zhang Qingyuan model. Comparing the correlation coefficient R, root-mean-square error RMSE, and relative error RE between measured and computed values shows that the clearness index Kt can serve as the dominant influencing factor; the Erbs model predicts diffuse radiation most accurately, followed by the Zhang Qingyuan model and the Orgill and Hollands model.
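The Erbs model that the comparison ranked best expresses the diffuse fraction of global horizontal radiation as a piecewise polynomial in the hourly clearness index Kt. A sketch of the published Erbs (1982) correlation, with illustrative input values:

```python
# Erbs (1982) direct-diffuse separation: the diffuse fraction kd of
# global horizontal radiation is a piecewise function of the hourly
# clearness index Kt (global / extraterrestrial radiation).

def erbs_diffuse_fraction(kt):
    """Diffuse fraction kd for hourly clearness index Kt."""
    if kt <= 0.22:
        return 1.0 - 0.09 * kt
    if kt <= 0.80:
        return (0.9511 - 0.1604 * kt + 4.388 * kt ** 2
                - 16.638 * kt ** 3 + 12.336 * kt ** 4)
    return 0.165

# Splitting a measured global irradiance (illustrative values):
global_wm2 = 600.0   # hourly global horizontal irradiance, W/m2
kt = 0.55            # clearness index for that hour
diffuse = erbs_diffuse_fraction(kt) * global_wm2
direct = global_wm2 - diffuse
```

Overcast hours (low Kt) are almost entirely diffuse; clear hours (high Kt) settle at a diffuse fraction of 0.165.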
8.
《Planning》2019,(3):401-407
Temperature, humidity, and wind speed data at multiple pressure levels from eight stations around Beijing, together with Beijing PM2.5 pollution data, were analyzed and normalized. A back-propagation (BP) neural network, a convolutional neural network (CNN), and a long short-term memory (LSTM) model were trained on these meteorological and pollution data. The training results show that the BP and CNN models predict the PM2.5 pollution level one hour ahead with low accuracy, whereas the LSTM model achieves high accuracy. The LSTM predictions of PM2.5 one hour ahead are very close to the measured values, indicating that Beijing's PM2.5 pollution is closely related to the meteorological conditions of the surrounding areas. Training and comparing the LSTM on meteorological data at different pressure levels further shows that, when predicting pollution from weather data, using near-surface data alone is more accurate than using data from multiple altitudes.
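The abstract mentions normalizing the meteorological and pollution data before training but does not specify the scheme; min-max scaling to [0, 1] is a common choice for neural-network inputs, sketched below with invented PM2.5 values:

```python
# Min-max scaling to [0, 1], a typical normalization applied to
# neural-network inputs. The exact scheme used in the paper is not
# stated, so this is an illustrative sketch with made-up values.

def min_max_scale(values):
    lo, hi = min(values), max(values)
    if hi == lo:                       # constant series: map to zeros
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

pm25 = [35.0, 80.0, 150.0, 60.0]       # illustrative hourly PM2.5, ug/m3
scaled = min_max_scale(pm25)           # same shape, range [0, 1]
```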
9.
Influence of window-to-wall ratio on office building energy consumption in the hot-summer/cold-winter region. Total citations: 1 (self: 0, others: 1)
Based on hourly typical-year weather parameters for Nanjing and the dynamic response-factor method for wall heat transfer, this paper uses MATLAB programs to analyze the dynamic thermal load of the building envelope. For north- and south-facing facades and different exterior window types, the influence of window-to-wall ratios from 0.2 to 0.7 on annual heating and cooling energy consumption and on total annual electricity use is studied. The results show that increasing the north-facing window-to-wall ratio raises total annual heating and cooling energy consumption, whereas for PVC double-glazed windows the growth in total annual electricity use with an increasing south-facing window-to-wall ratio is small, so such windows are recommended for Nanjing's hot-summer/cold-winter climate.
11.
Mohammad Zakwan 《Water and Environment Journal》2019,33(4):620-632
Variability in the infiltration characteristics of soils creates the need to select an appropriate infiltration model. Recently, a novel infiltration model was proposed and reported to perform excellently in estimating the infiltration rate of soils of Kurukshetra, India; however, this model needs to be tested for its reliability at a global level. To that end, the present study analyses the reliability of the novel model on an infiltration database comprising 16 data sets from different parts of the world, to arrive at some generalized results on its reliability compared with commonly used infiltration models. Comparative analysis reveals that for 9 of the 16 data sets (57%) the Horton model was the best model, while the novel model was the best-fit model in five cases (31%). Based on the present study and earlier investigations, it may be inferred that the novel model could be useful for estimating the infiltration rate in loams.
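The Horton model that won for most sites describes infiltration capacity decaying exponentially from an initial rate f0 to a final steady rate fc. A sketch of the standard equation f(t) = fc + (f0 - fc)·e^(-kt), with assumed parameter values:

```python
# Horton infiltration model: the rate decays exponentially from an
# initial value f0 toward a final (steady) value fc with decay
# constant k. Parameter values below are illustrative, not fitted.
import math

def horton_infiltration(t_hours, f0, fc, k):
    """Infiltration rate (mm/h) at time t_hours after ponding begins."""
    return fc + (f0 - fc) * math.exp(-k * t_hours)

f0, fc, k = 60.0, 10.0, 2.0            # mm/h, mm/h, 1/h (assumed)
rate_start = horton_infiltration(0.0, f0, fc, k)   # equals f0
rate_late = horton_infiltration(5.0, f0, fc, k)    # approaches fc
```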
12.
Pleijel H, Pihl Karlsson G, Binsell Gerdin E 《The Science of the total environment》2004,332(1-3):261-264
Measurements of nitrogen dioxide (NO2) concentrations, performed with passive diffusion samplers in gradients from a highway in south-west Sweden, were used to test the assumption that the NO2 concentration contributed by the highway varies with the logarithm of the distance from the highway. The five data sets used corroborated the hypothesis, and it was shown that all data could be accommodated to a common relationship with high correlation (R2 = 0.95) using the concentration 10 m away from the highway as the reference. The data were also well in accordance with a recently published study from Canada, although the slope of the relationship between the NO2 concentration contributed by a highway and the logarithm of the distance was somewhat stronger for the Swedish data than for the Canadian. The regression slope is likely to be sensitive to wind speed, atmospheric stability, surface roughness and the background ozone concentration of the area.
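With the 10 m concentration as reference, the relationship the paper tests can be written as C(d) = C10 - b·log10(d / 10). A sketch with invented values for the reference concentration and slope:

```python
# Log-distance model for the highway's NO2 contribution, with the
# concentration 10 m from the road as reference:
#     C(d) = C10 - b * log10(d / 10)
# C10 and the slope b below are invented for illustration; the paper
# fits b from the measured gradients.
import math

def highway_no2(distance_m, c10, slope):
    """Modelled NO2 contribution (ug/m3) at distance_m from the road."""
    return c10 - slope * math.log10(distance_m / 10.0)

c10, slope = 30.0, 12.0                # assumed reference value and slope
near = highway_no2(10.0, c10, slope)   # at the 10 m reference point
far = highway_no2(100.0, c10, slope)   # one decade of distance further
```

Each tenfold increase in distance reduces the modelled contribution by one slope unit, which is what makes a semi-log plot of the gradients linear.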
13.
When soils are classified for engineering purposes from piezocone penetration test (CPTU) data, considerable uncertainty arises from the correlations between soil composition and mechanical behavior, and under existing CPTU classification systems this uncertainty often causes different soil types to overlap. Cluster analysis, a statistical method for grouping similar data, was applied to delineating geological strata from CPTU data. Using field CPTU tests and borehole logs from five highway sites, the application of existing cluster-analysis methods to identifying soil-layer interfaces is described. The results show that cluster analysis not only captures the main variations within the strata but also detects thin interbedded layers; a CPTU method based on cluster analysis can therefore reliably and efficiently provide a preliminary delineation of geological strata.
14.
G. Nicolaou, R. Pietra, E. Sabbioni, R.M. Parr 《The Science of the total environment》1989,80(2-3):167-174
Box plots are used in the visual representation of large data sets and in exploratory data analysis. They display batches of data using five values to describe the data set: the median, the upper and lower extremes of the range of values, and the 75th and 25th percentiles. A notch about the median, e.g. at the 95 percent level of significance, can be incorporated in the display, allowing differences between the medians of different sets to be established. The method, although not new, has so far found little application in the analytical field. Hence, in an effort to strengthen its applicability, the features and capabilities of box plots, in terms of data reporting and insight into the data set, are described here through elemental composition studies in relation to environmental and occupational health.
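The five values a box plot displays can be computed directly; a sketch using a simple linear-interpolation percentile (other percentile conventions exist, so exact quartile values can differ slightly between tools):

```python
# Five-number summary underlying a box plot: median, lower and upper
# extremes, and the 25th and 75th percentiles. Percentiles here use
# linear interpolation between order statistics.

def percentile(values, p):
    """p-th percentile (0-100) by linear interpolation between ranks."""
    s = sorted(values)
    rank = (p / 100.0) * (len(s) - 1)
    lo = int(rank)
    frac = rank - lo
    if lo + 1 < len(s):
        return s[lo] + frac * (s[lo + 1] - s[lo])
    return s[lo]

def five_number_summary(values):
    return {
        "min": min(values),
        "q25": percentile(values, 25),
        "median": percentile(values, 50),
        "q75": percentile(values, 75),
        "max": max(values),
    }

summary = five_number_summary([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0])
```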
15.
R.E. Hammah, J.H. Curran 《International Journal of Rock Mechanics and Mining Sciences》1998,35(7):889-905
The task of identifying and isolating the joint sets or subgroups of discontinuities present in data collected from joint surveys is not a trivial issue and is fundamental to rock engineering design. Traditional methods for carrying out the task have mostly been based on the analysis of plots of the discontinuity orientations or their clustering. However, they suffer from an inability to incorporate the extra data columns collected and also lack objectivity. This paper proposes a fuzzy K-means algorithm that can use the extra information on discontinuities, as well as their orientations, in exploratory data analysis. Apart from taking into account the hybrid nature of the information gathered on joints (orientation and non-orientation information), the new algorithm makes no a priori assumptions about the number of joint sets present. It provides validity indices (performance measures) for assessing the optimal delineation of the data set into fracture subgroups. The proposed algorithm was tested on two simulated data sets. In the first example, the data set demanded analysis of discontinuity orientation only, and the algorithm identified both the number of joint sets present and their proper partitioning. In the second example, additional information on joint roughness was necessary to recover the true structure of the data set. The algorithm was able to converge on the correct solution when the extra information was included in the analysis.
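At the core of fuzzy K-means (fuzzy c-means) is the graded membership of each observation in every cluster, computed from its distances to the cluster centres. The full algorithm also iterates centre updates and, in the paper, handles orientation data and validity indices; this sketch shows only the standard membership step, with fuzzifier m = 2 as an assumed setting:

```python
# Fuzzy c-means membership step: given one point's distances to the
# cluster centres, its membership in cluster j is
#     u_j = 1 / sum_k (d_j / d_k)^(2 / (m - 1))
# so memberships over all clusters sum to 1. The fuzzifier m = 2 is
# an assumed (and common) choice, not taken from the paper.

def memberships(dists, m=2.0):
    """Membership of one point in each cluster from its centre distances."""
    # A point coinciding with a centre gets full membership there.
    if any(d == 0.0 for d in dists):
        return [1.0 if d == 0.0 else 0.0 for d in dists]
    expo = 2.0 / (m - 1.0)
    u = []
    for dj in dists:
        denom = sum((dj / dk) ** expo for dk in dists)
        u.append(1.0 / denom)
    return u

u = memberships([1.0, 3.0])   # much closer to the first centre
```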
17.
Arthur Getis 《The Annals of Regional Science》1999,33(2):145-150
The rapid advances in computer technology give regional scientists the opportunity to employ data sets that are much larger
than those currently in use. Increasingly, metadata are being created in order to give researchers a better understanding
of the qualities of data sets. Nonetheless, a number of problems associated with large data sets, outlined in this paper,
stand in the way of proper scientific use of the data. Regional scientists should not only take advantage of the new data
but they should be prepared to handle the problems associated with them.