Similar literature
Found 20 similar documents (search time: 31 ms)
1.
A neural network combined with a neural classifier is used for real-time forecasting of the hourly maximum ozone concentration in the centre of France, in an urban atmosphere. This neural model is based on the MultiLayer Perceptron (MLP) structure. The inputs of the statistical network are model output statistics of the weather predictions from the French National Weather Service; these predicted meteorological parameters are easily available through an air quality network. The forecasting lead time is (t + 24) h. Particular attention is paid to a regularisation method based on a Bayesian Information Criterion-like criterion and to the determination of a confidence interval for the forecasts. We present a statistical comparison between various statistical models and a deterministic chemistry-transport model. In this experiment, the final neural network predicts the ozone peaks fairly well in terms of global fit, with an Agreement Index of 92%, a Mean Absolute Error and Root Mean Square Error of 15 μg m⁻³ and a Mean Bias Error of 5 μg m⁻³, against the European hourly ozone threshold of 180 μg m⁻³. To improve the exceedance forecasting, we replace the previous model with a neural classifier using a sigmoid function in the output layer. The output of the network lies in [0, 1] and can be interpreted as the probability of exceeding the threshold. This model is compared with a classical logistic regression. With the neural classifier, the Success Index of forecasting is 78%, whereas it ranges from 65% to 72% with the classical MLPs. During the validation phase, in the summer of 2003, six ozone peaks above the threshold were detected; seven actually occurred. Finally, the model, called NEUROZONE, is now used in real time. New data will be added to the training set each year, at the end of September; the network will then be re-trained and new regression parameters estimated. This will mitigate one of the main difficulties of the training phase, namely the low frequency of ozone peaks above the threshold in this region.
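The probability-of-exceedance reading of the classifier's sigmoid output can be sketched in a few lines of Python. The weights, bias and inputs below are invented for illustration; they are not the trained NEUROZONE parameters:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def exceedance_probability(features, weights, bias):
    # Weighted sum of predicted meteorological inputs squashed into [0, 1],
    # read as P(hourly ozone peak > 180 ug/m3 at lead time t + 24 h).
    z = bias + sum(w * x for w, x in zip(weights, features))
    return sigmoid(z)

# Hypothetical toy parameters, for illustration only.
p = exceedance_probability([0.8, 0.3], [1.2, -0.5], -0.4)
alarm = p > 0.5  # raise an exceedance alert
```

Fitting the weights of such a classifier is the logistic-regression-style training that the paper compares against.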

2.
Stock index forecasting is a hot issue in the financial arena. Because the movements of stock indices are non-linear and subject to many internal and external factors, they pose a great challenge to researchers who try to predict them. In this paper, we select a radial basis function neural network (RBFNN) to train on the data and forecast the stock indices of the Shanghai Stock Exchange, and we introduce the artificial fish swarm algorithm (AFSA) to optimize the RBFNN. To increase forecasting efficiency, a K-means clustering algorithm is optimized by AFSA in the learning process of the RBFNN. To verify the usefulness of our algorithm, we compare the forecasting results of the RBFNN optimized by AFSA with those obtained using genetic algorithms (GA) and particle swarm optimization (PSO), as well as with the forecasting results of ARIMA, back-propagation (BP) and support vector machine (SVM) models. Our experiment indicates that the RBFNN optimized by AFSA is an easy-to-use algorithm with considerable accuracy. Of all the input combinations tried in this paper, BIAS6 + MA5 + ASY4 was the optimum group, with the smallest errors.
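The K-means step places the RBF hidden-layer centres before the output weights are fitted. A minimal 1-D sketch, with made-up data and without the AFSA/GA/PSO optimisation layer:

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    # Plain Lloyd's algorithm on scalars; in the paper, AFSA searches
    # for better clustering rather than relying on random initialisation.
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: abs(p - centers[j]))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

def rbf_hidden_layer(x, centers, width):
    # Gaussian activations of the RBF hidden layer for input x.
    return [math.exp(-((x - c) ** 2) / (2.0 * width ** 2)) for c in centers]

centers = sorted(kmeans([1.0, 1.1, 9.0, 9.1], k=2))
activations = rbf_hidden_layer(1.0, centers, width=1.0)
```

A linear output layer trained on these activations completes the RBFNN forecaster.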

3.
This paper presents the findings of laboratory model testing of arched bridge constrictions in a rectangular open-channel flume whose bed slope was fixed at zero. Four types of arched bridge models were used in the testing program: single opening semi-circular arch (SOSC), multiple opening semi-circular arch (MOSC), single opening elliptic arch (SOE), and multiple opening elliptic arch (MOE). The normal crossing (ϕ = 0°) and five skew angles (ϕ = 10°, 20°, 30°, 40°, and 50°) were tested for each type of arched bridge model. The main aim of this study is to develop a suitable model for estimating backwater through arched bridge constrictions with normal and skewed crossings. To this end, several artificial neural network approaches, namely the multi-layer perceptron (MLP), radial basis neural network (RBNN) and generalized regression neural network (GRNN), as well as multi-linear (MLR) and multi-nonlinear (MNLR) regression models, were used. The experimental results were compared with those obtained by the MLP, RBNN, GRNN, MLR, and MNLR approaches. The MLP produced more accurate predictions than the others.

4.
Diagnosis of reliability is an important topic for interconnection networks. Under the classical PMC model, Dahbura and Masson [5] proposed a polynomial-time algorithm with time complexity O(N^2.5) to identify all faulty nodes in an N-node network. This paper addresses the fault diagnosis of so-called bijective connection (BC) graphs, which include hypercubes, twisted cubes, locally twisted cubes, crossed cubes, and Möbius cubes. Utilizing a helpful structure proposed by Hsu and Tan [20], called the extending star by Lin et al. [24], and noting the existence of a structured Hamiltonian path within any BC graph, we present a fast diagnostic algorithm that identifies all faulty nodes in O(N) time, where N = 2^n, n ≥ 4, is the total number of nodes in the n-dimensional BC graph. As a result, this algorithm is significantly superior to the Dahbura–Masson algorithm when applied to BC graphs.

5.
The problem of identifying regions of high discrimination between alcoholics and controls in a multichannel electroencephalogram (EEG) signal is modeled as a feature subset selection task aimed at improving the recognition rate between the two groups. Several studies have reported efficient detection of alcoholics by feature extraction and selection in gamma-band visual event-related potentials (ERPs) of a multichannel EEG signal. However, these studies do not consider the correlation between features and their class information during feature selection, which may introduce redundancy into the feature set and result in overfitting. Therefore, in this study, a statistical feature selection technique based on Separability & Correlation analysis (SEPCOR) is proposed to automatically select an optimal feature subset with minimum correlation between selected channels and maximum class separation. The optimal feature selection consists of a ranking method that assigns ranks to channels based on a variability measure (V-measure). From the ranked set of highly discriminative features, different subsets are automatically selected by applying a correlation threshold heuristically, in steps from 0.02 to 0.1. These subsets are applied as input features to multilayer perceptron (MLP) neural network and k-nearest neighbor (k-NN) classifiers to discriminate alcoholic from control visual ERPs. Prior to feature selection, spectral entropy features are computed in the gamma sub-band (30–55 Hz) of a 61-channel multi-trial EEG signal recorded during multiple object recognition tasks. Independent Component Analysis (ICA) is performed on the raw EEG data to remove eye-blink, motion and muscle artifacts. Results indicate that both classifiers achieve an excellent classification accuracy of 99.6% for a feature subset of 22 optimal channels with a correlation threshold of 0.1. In terms of computation time, the k-NN classifier outperforms the multilayer perceptron-back propagation (MLP-BP) network, requiring 7.93 s whereas the MLP network takes 55 s to perform the recognition task with the same accuracy. Compared with the feature selection methods used in previous studies on the same EEG alcoholic database, the proposed SEPCOR method yields a significant improvement in classification accuracy.
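The core of a SEPCOR-style selection, rank channels by a variability measure and then keep only those weakly correlated with channels already kept, can be sketched as follows. Variance stands in for the paper's V-measure, the data and the 0.5 threshold are invented for illustration:

```python
import math

def pearson(a, b):
    # Pearson correlation coefficient between two equal-length sequences.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def sepcor_select(channels, threshold):
    # Rank channels by variance (a stand-in for the V-measure), then
    # greedily keep channels weakly correlated with already-kept ones.
    def var(c):
        m = sum(c) / len(c)
        return sum((x - m) ** 2 for x in c) / len(c)

    ranked = sorted(range(len(channels)),
                    key=lambda i: var(channels[i]), reverse=True)
    kept = []
    for i in ranked:
        if all(abs(pearson(channels[i], channels[j])) < threshold
               for j in kept):
            kept.append(i)
    return kept

# Channel 1 duplicates channel 0 (r = 1); channel 2 is nearly uncorrelated.
channels = [[1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0], [4.0, 1.0, 5.0, 2.0]]
selected = sepcor_select(channels, threshold=0.5)
```

The selected channel indices would then feed the MLP or k-NN classifier.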

6.
Uneven energy consumption is an inherent problem in wireless sensor networks characterized by multi-hop routing and a many-to-one traffic pattern. Such unbalanced energy dissipation can significantly reduce network lifetime. In this paper, we study the problem of prolonging network lifetime in large-scale wireless sensor networks where a mobile sink gathers data periodically along a predefined path and each sensor node uploads its data to the mobile sink over a multi-hop communication path. Using a greedy policy and dynamic programming, we propose a heuristic topology control algorithm with time complexity O(n(m + n log n)), where n and m are the numbers of nodes and edges in the network, respectively, and we further discuss how to refine the algorithm to satisfy practical requirements such as distributed computing and transmission timeliness. Theoretical analysis and experimental results show that our algorithm is superior to several earlier algorithms at extending network lifetime.

7.
In this study, we propose a set of new algorithms to enhance the effectiveness of classifying the 5-year survivability of breast cancer patients from a massive, imbalanced data set. The proposed classifier algorithms combine the synthetic minority oversampling technique (SMOTE) and particle swarm optimization (PSO) while integrating well-known classifiers such as logistic regression, the C5 decision tree (C5) model, and 1-nearest neighbor search. To assess the effectiveness of this new set of classifiers, the g-mean and accuracy indices are used as performance measures, and the proposed classifiers are compared with those reported in the previous literature. Experimental results show that the hybrid SMOTE + PSO + C5 algorithm is the best of all the combinations tried for classifying the 5-year survivability of breast cancer patients. We conclude that implementing SMOTE within appropriate search algorithms such as PSO and classifiers such as C5 can significantly improve the effectiveness of classification for massive imbalanced data sets.
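SMOTE's core idea, synthesizing new minority-class samples by interpolating between a minority sample and one of its nearest minority-class neighbours, can be sketched as below. The toy 2-D points are invented; a real pipeline would feed the rebalanced set to the PSO-tuned C5 or another classifier:

```python
import random

def smote(minority, n_new, k=2, seed=0):
    # For each synthetic sample: pick a minority point, pick one of its
    # k nearest minority neighbours, and interpolate at a random gap.
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a = rng.choice(minority)
        neighbours = sorted(
            (p for p in minority if p is not a),
            key=lambda p: sum((x - y) ** 2 for x, y in zip(a, p)))[:k]
        b = rng.choice(neighbours)
        gap = rng.random()  # position along the segment a -> b
        synthetic.append(tuple(x + gap * (y - x) for x, y in zip(a, b)))
    return synthetic

minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
new_samples = smote(minority, n_new=5)
```

Because each synthetic point lies on a segment between two real minority points, it stays inside the minority class's convex hull.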

8.
In this paper, a hybrid wireless sensor network (WSN) system is designed and implemented for building energy management systems. The characteristics of the radios, based on the 2.4 GHz and 400 MHz bands respectively, are analyzed for building environments. For battery-operated portable sensors, narrow-bandwidth 400 MHz radios are employed in a star topology with their parent nodes. Among the parent nodes, a mesh network based on wide-bandwidth 2.4 GHz radios is constructed for efficient and fast data transmission. The hybrid WSN system is implemented and tested in a building environment and provides a reliable wireless communication link for gathering sensing data.

9.

Recent advancements in artificial neural networks (ANNs) motivated us to design a simple and faster spectrum prediction model termed the functional link artificial neural network (FLANN). The main objective of this paper is to gather realistic data on utilization statistics for the 2.4–2.5 GHz industrial, scientific and medical band. To obtain the occupancy statistics, we conducted indoor measurements at the Swearingen Engineering Center, University of South Carolina. We introduce different threshold-based spectrum prediction schemes to show the impact of the threshold on occupancy, and propose a FLANN-based spectrum prediction algorithm to forecast a future spectrum usage profile from historical occupancy statistics. Spectrum occupancy is estimated and predicted using several ANN models, including the feed-forward multilayer perceptron (MLP), recurrent MLP, Chebyshev FLANN and trigonometric FLANN. We observe that the absence of a hidden layer makes the FLANN more efficient than the MLP model, predicting occupancy faster and with less complexity. A set of illustrative results is presented to validate the performance of the proposed learning scheme.
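The reason a FLANN needs no hidden layer is that a fixed functional expansion of the input supplies the non-linearity, leaving only a linear layer to train. A sketch of the trigonometric expansion (the order and weights below are illustrative):

```python
import math

def trig_expand(x, order=2):
    # Map a scalar input to [x, sin(pi x), cos(pi x), sin(2 pi x), ...],
    # the trigonometric functional expansion used by a trigonometric FLANN.
    feats = [x]
    for n in range(1, order + 1):
        feats.append(math.sin(n * math.pi * x))
        feats.append(math.cos(n * math.pi * x))
    return feats

def flann_predict(x, weights):
    # A single trainable linear layer over the expanded features.
    return sum(w * f for w, f in zip(weights, trig_expand(x)))

prediction = flann_predict(0.0, [1.0, 1.0, 1.0, 1.0, 1.0])
```

Training reduces to fitting the weight vector, e.g. by least-mean-squares, which is what makes the model faster than an MLP with hidden layers.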


10.
MATLAB is a high-level matrix/array language with control-flow statements and functions, and it offers several useful toolboxes for solving complex problems in various fields of science, such as geophysics. In geophysics, the inversion of 2D DC resistivity imaging data is complex owing to its non-linearity, especially in regions of high resistivity contrast. In this paper, we investigate the applicability of MATLAB to design, train and test a newly developed artificial neural network for inverting 2D DC resistivity imaging data, using resilient propagation to train the network. The model used to produce synthetic data is a homogeneous medium of 100 Ω m resistivity with an embedded anomalous body of 1000 Ω m, whose location was moved to different positions within the mesh elements of the homogeneous model. The synthetic data were generated using the finite element forward modeling code RES2DMOD. The network was trained on 21 datasets and tested on another 16 synthetic datasets, as well as on real field data. In the field data acquisition, the cable covered 120 m between the first and the last take-out, with a 3 m x-spacing; three different electrode spacings were measured, giving a dataset of 330 data points. The interpreted result shows that the trained network was able to invert 2D electrical resistivity imaging data obtained with a Wenner–Schlumberger configuration rapidly and accurately.
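Resilient propagation (Rprop) updates each weight from the sign, not the magnitude, of successive gradients: the per-weight step grows while the gradient sign is stable and shrinks when it flips. A minimal sketch of the simple Rprop⁻ variant in Python (the constants are the commonly used defaults, not values from the paper):

```python
def rprop_step(w, grad, prev_grad, step,
               eta_plus=1.2, eta_minus=0.5, step_max=50.0, step_min=1e-6):
    # Adapt the per-weight step from the sign of successive gradients.
    if grad * prev_grad > 0:
        step = min(step * eta_plus, step_max)    # same sign: accelerate
    elif grad * prev_grad < 0:
        step = max(step * eta_minus, step_min)   # sign flip: back off
    # Move against the gradient by the adapted step.
    if grad > 0:
        w -= step
    elif grad < 0:
        w += step
    return w, step

# Demo: minimise f(w) = (w - 3)^2, whose gradient is 2(w - 3).
w, step, prev = 0.0, 0.1, 0.0
for _ in range(100):
    g = 2.0 * (w - 3.0)
    w, step = rprop_step(w, g, prev, step)
    prev = g
```

Ignoring the gradient magnitude makes the update robust to the poorly scaled gradients typical of non-linear inversion problems.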

11.
This study addresses the problem of achieving higher-level multi-fault restoration in wavelength division multiplexing (WDM) networks with no wavelength conversion capability. A heuristic scheme, designated the Directional Cycle Decomposition Algorithm (DCDA), is developed to maximize the number of tolerable faults using only 100% redundancy in WDM networks without wavelength conversion, where redundancy is calculated as the required spare capacity over the given working capacity. The process of identifying the maximum number of tolerable faults is modeled as a constrained ring cover set problem. DCDA decomposes this problem into three steps and has an overall computational complexity of O(∣E∣∣V∣(C + 1) + ∣E∣(C² + 1)), where ∣V∣, ∣E∣ and C represent the number of vertices, the number of edges in the graph and the number of cycles in the cycle cover, respectively. The evaluation results reveal that the average number of tolerable simultaneous faults increases considerably under DCDA, and the maximum number of tolerable simultaneous faults approaches the optimal solution provided by the brute-force method. DCDA delivers improved best-effort multi-fault restorability for a variety of planar and non-planar network topologies. An analytical method is proposed for rapidly estimating the multi-fault restorability of a network using DCDA without experimental evaluations. In addition, an approximation method is developed to estimate the multi-fault restorability directly from DCDA without detailed knowledge of the network topology and restoration routes. The results show that the average errors in the restorability values approximated by this method range from 0.12% (New Jersey) to 1.58% (Cost 239).

12.
Computer Networks, 2007, 51(11): 3172–3196
A search-based heuristic is presented for the optimisation of communication networks in which traffic forecasts are uncertain and the problem is NP-complete. While algorithms such as genetic algorithms (GA) and simulated annealing (SA) are often used for this class of problem, this work applies a combination of newer optimisation techniques, specifically fast local search (FLS) as an improved hill-climbing method, and guided local search (GLS) to allow escape from local minima. The GLS + FLS combination is compared with optimised GA and SA approaches. In terms of implementation, the parameterisation of the GLS + FLS technique is found to be significantly simpler than that of the GA and SA; moreover, the self-regularisation feature of the GLS + FLS approach provides a distinctive advantage over the other techniques, which require manual parameterisation. To compare numerical performance, the three techniques were tested over a number of network sets varying in size, number of switch-circuit demands (network bandwidth demands) and level of uncertainty on the switch-circuit demands. The results show that GLS + FLS outperforms the GA and SA techniques in terms of both solution quality and optimisation speed and, even more importantly, requires significantly less parameterisation time.
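GLS escapes local minima by penalising solution features of high "utility" and letting the local search minimise a penalty-augmented objective instead of the raw cost. A minimal sketch of the two core operations (the costs, penalties and λ below are illustrative, not values from the paper):

```python
def gls_penalize(costs, penalties):
    # Penalise the feature(s) of maximum utility c_i / (1 + p_i);
    # repeated penalisation pushes the search away from this local minimum.
    util = [c / (1.0 + p) for c, p in zip(costs, penalties)]
    top = max(util)
    return [p + 1 if u == top else p for p, u in zip(penalties, util)]

def augmented_cost(cost, active, penalties, lam=0.3):
    # The objective actually minimised by the local search: raw cost plus
    # lambda times the penalties of the features present in the solution.
    return cost + lam * sum(p for p, a in zip(penalties, active) if a)

# At a local minimum, penalise the costliest active features.
penalties = gls_penalize([5.0, 2.0, 5.0], [0, 0, 0])
g = augmented_cost(10.0, [True, False, True], penalties)
```

FLS then resumes on the augmented objective, re-activating only the neighbourhoods touched by the changed penalties.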

13.
In many large, distributed or mobile networks, broadcast algorithms are used to update information stored at the nodes. In this paper, we propose a new model of communication based on rendezvous and analyze a multi-hop distributed algorithm for broadcasting a message in a synchronous setting. In the rendezvous model, two neighbors u and v can communicate if and only if u calls v and v calls u simultaneously; nodes u and v then obtain a rendezvous at a meeting point. If m is the number of meeting points, the network can be modeled by a graph with n vertices and m edges. At each round, every vertex chooses a random neighbor, and there is a rendezvous if an edge has been chosen by both of its endpoints. A rendezvous enables an exchange of information between the two entities. We give sharp lower and upper bounds on the time complexity, in number of rounds, of broadcasting: we show that, for any graph, the expected number of rounds is between ln n and O(n²), and we prove that there exist graphs for which the expected number of rounds is either O(ln n) or Ω(n²). Additional bounds are given for specific topologies.
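One round of the rendezvous model is easy to simulate: every vertex calls a uniformly random neighbour, and an edge whose two endpoints call each other is a rendezvous. A sketch of broadcasting under this model (the graphs are toy examples):

```python
import random

def broadcast_rounds(adj, source, seed=0):
    # adj maps each vertex to its neighbour list.
    rng = random.Random(seed)
    informed = {source}
    rounds = 0
    while len(informed) < len(adj):
        rounds += 1
        calls = {u: rng.choice(adj[u]) for u in adj}
        snapshot = set(informed)  # state at the start of the round
        for u, v in calls.items():
            # Mutual call => rendezvous on edge (u, v); the message
            # spreads if either endpoint already held it this round.
            if calls[v] == u and (snapshot & {u, v}):
                informed.update((u, v))
    return rounds

# Two nodes always call each other, so one round suffices.
r2 = broadcast_rounds({0: [1], 1: [0]}, source=0)
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
rt = broadcast_rounds(triangle, source=0, seed=1)
```

Averaging `broadcast_rounds` over many seeds estimates the expected number of rounds whose bounds the paper establishes.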

14.
This paper presents an electromagnetic energy harvesting scheme based on a composite magnetoelectric (ME) transducer and a power management circuit. In the transducer, the vibrating wave induced in the magnetostrictive Terfenol-D plate by a dynamic magnetic field is converged using an ultrasonic horn, so that more vibration energy can be converted into electricity by the piezoelectric element. A switching capacitor network for storing the electricity is developed: the output of the transducer charges the storage capacitors in parallel until the voltage across them reaches a threshold, at which point the capacitors are automatically switched into series. More capacitors can be employed in the network to further raise the output voltage during discharge. For weak magnetic field environments, an active magnetic generator and an underground magnetic coil antenna are used to produce an AC magnetic field of 0.2–1 Oe at a distance of 25–50 m. In combination with the supply management circuit, the electromagnetic energy harvester, despite a rather weak power output (about 20 μW) under an AC magnetic field of 1 Oe, can supply power to wireless sensor nodes consuming 75 mW for a duration of 620 ms.

15.
Given a K-node cluster (subnetwork) in an N-node network, the (Boolean) fault tree function for the connectedness of at least K − 1 intact nodes out of the above K nodes is determined. This function is primarily formulated as a function of mutually dependent variables, viz. the top-event variables of various point-to-point connectivity (so-called s,t-) problems. Minimizing the number of s,t-problems involved is a major concern. The main contribution of this paper lies in the conceptual clarity of the approach, whereas the computational complexity for real-life problems still needs improvement. (The analysis of the fault trees found, resulting in system unavailability, MTTF, etc., is not pursued here.) A preliminary version of this paper was reviewed for and presented at EWDC-7 (European Workshop on Dependable Computing, 1995; without proceedings).

16.
Pure polyaniline (PAN) films, mixed polyaniline–acetic acid (AA) films, and PAN–polystyrenesulfonic acid (PSSA) composite films with various numbers of layers were prepared by Langmuir–Blodgett (LB) and self-assembly (SA) techniques. These ultra-thin films were characterized by ultraviolet–visible (UV–VIS) spectroscopy and ellipsometry. The thickness of the PAN-based ultra-thin films is found to increase linearly with the number of film layers. The gas sensitivity of these films to NO2 was studied as a function of the number of layers: pure polyaniline films prepared by the LB technique showed good sensitivity to NO2, while the SA films exhibited faster recovery. Both the response time to NO2 and the relative change of resistance of the ultra-thin films increased with the number of film layers. The response time of a three-layer PAN LB film to 20 ppm NO2 was about 10 s, and that of a two-layer SA film about 8 s. The mechanism of the NO2 sensitivity of PAN-based ultra-thin films is also discussed.

17.
Cloud computing infrastructures provide vast processing power and host a diverse set of computing workloads, ranging from service-oriented deployments to high-performance computing (HPC) applications. As HPC applications scale to a large number of VMs, providing near-native network I/O performance to each peer VM is an important challenge. In this paper we present Xen2MX, a paravirtual interconnection framework over generic Ethernet, binary-compatible with Myrinet/MX and wire-compatible with MXoE. Xen2MX combines the zero-copy characteristics of Open-MX with Xen's memory-sharing techniques. Experimental evaluation of our prototype implementation shows that Xen2MX achieves nearly the same raw performance as Open-MX running in a non-virtualized environment. On the latency front, Xen2MX comes within 96% of the case where no virtualization layers are present. Regarding throughput, Xen2MX saturates a 10 Gbps link, achieving 1159 MB/s compared with 1192 MB/s in the non-virtualized case. It also scales efficiently with the number of VMs, saturating the link for even smaller messages when 40 single-core VMs put pressure on the network adapters.

18.
Strong ties play a crucial role in transmitting sensitive information in social networks, especially in the criminal justice domain. However, large social networks containing many entities and relations may also contain a large amount of noisy data, so identifying strong ties accurately and efficiently within such a network poses a major challenge. This paper presents a novel approach to address the noise problem. We transform the original social network graph into a relation context-oriented edge-dual graph by adding new nodes to the original graph, based on abstracting the relation contexts from the original edges (relations). We then compute the local k-connectivity between two given nodes, which yields a measure of the robustness of the relations. To evaluate the correctness and efficiency of this measure, we implemented a system that integrates a total of 450 GB of data from several different sources. The discovered social network contains 4,906,460 nodes (individuals) and 211,403,212 edges. Our experiments are based on 700 co-offenders involved in robbery crimes. The experimental results show that most strong ties are formed with k ≥ 2.

19.
Human activities are inherently translation-invariant and hierarchical. Human activity recognition (HAR), a field that has garnered much attention in recent years owing to its high demand in various application domains, uses time-series sensor data to infer activities. In this paper, a deep convolutional neural network (convnet) is proposed to perform efficient and effective HAR using smartphone sensors, exploiting the inherent characteristics of activities and of 1D time-series signals while providing a way to extract robust features from raw data automatically and data-adaptively. Experiments show that convnets indeed derive relevant and more complex features with every additional layer, although the difference in feature-complexity level decreases with each additional layer. A wider time span of temporal local correlation (1 × 9–1 × 14) can be exploited, and a low pooling size (1 × 2–1 × 3) is shown to be beneficial. Convnets also achieved almost perfect classification of moving activities, including very similar ones previously perceived to be very difficult to classify. Lastly, convnets outperform other state-of-the-art data mining techniques in HAR on the benchmark dataset collected from 30 volunteer subjects, achieving an overall accuracy of 94.79% on the test set with raw sensor data, and 95.75% with additional temporal fast Fourier transform information from the HAR data set.
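The 1-D convolution and pooling that such a convnet stacks over raw sensor signals can be sketched as below (single channel, "valid" mode; in a real model the kernel weights are learned, not fixed):

```python
def conv1d(signal, kernel):
    # Valid-mode 1-D convolution (cross-correlation, as in convnets):
    # slide the kernel over the signal and take dot products.
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def max_pool1d(x, size=2):
    # Non-overlapping 1-D max pooling; this is what gives the model
    # its (small) translation invariance along the time axis.
    return [max(x[i:i + size]) for i in range(0, len(x) - size + 1, size)]

feature_map = conv1d([1.0, 2.0, 3.0, 4.0], [1.0, 0.0])
pooled = max_pool1d([1.0, 3.0, 2.0, 5.0], size=2)
```

Stacking several conv/pool pairs, with kernel widths in the 1 × 9 to 1 × 14 range and pool sizes of 1 × 2 to 1 × 3 as the paper recommends, yields progressively more abstract features.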

20.
Computer Networks, 2008, 52(3): 514–530
The use of deadline-based scheduling to support real-time delivery of application data units (ADUs) in a packet-switched network is investigated. Of interest is priority scheduling in which a packet with a smaller ratio T/H (time until delivery deadline over number of hops remaining) is given higher priority; we refer to this scheduling algorithm as the T/H algorithm. T/H has time complexity O(log N) for a backlog of N packets and has been shown to achieve good performance in terms of the percentage of ADUs delivered on time. We develop a new and efficient algorithm, called T/H−p, that has O(1) time complexity. The performance of T/H, T/H−p and FCFS is evaluated by simulation, and implementations of T/H and T/H−p in high-speed routers are discussed. We show through simulation that T/H−p is superior to FCFS but not as good as T/H. In view of its constant time complexity, T/H−p is a good candidate for high-speed routers when both performance and implementation cost are taken into consideration.
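The T/H priority rule maps directly onto a min-heap: each backlogged packet is keyed by its ratio of remaining time to remaining hops, so the most urgent packet is popped first. A small sketch (the deadlines, hop counts and packet ids are invented):

```python
import heapq

def th_priority(deadline, now, hops_remaining):
    # Smaller T/H (slack time per remaining hop) = more urgent.
    return (deadline - now) / hops_remaining

# Hypothetical backlog: (deadline, hops remaining, packet id).
backlog = [(10.0, 2, "a"), (6.0, 3, "b"), (9.0, 1, "c")]
heap = [(th_priority(d, now=0.0, hops_remaining=h), pid)
        for d, h, pid in backlog]
heapq.heapify(heap)
service_order = [heapq.heappop(heap)[1] for _ in range(len(heap))]
```

The heap gives the O(log N) per-packet behaviour the paper attributes to T/H; how the T/H−p variant reaches O(1) is not reproduced here.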
