We studied the dynamics of large networks of spiking neurons with conductance-based (nonlinear) synapses and compared them to networks with current-based (linear) synapses. For systems with sparse and inhibition-dominated recurrent connectivity, weak external inputs induced asynchronous irregular firing at low rates. Membrane potentials fluctuated a few millivolts below threshold, and membrane conductances were increased by a factor of 2 to 5 with respect to the resting state. This combination of parameters characterizes the ongoing spiking activity typically recorded in the cortex in vivo. Many aspects of the asynchronous irregular state in conductance-based networks were well captured by a simple numerical mean-field approach. In particular, it correctly predicted an intriguing property of conductance-based networks that does not appear to be shared by current-based models: they exhibit states of low-rate asynchronous irregular activity that persist for some time even in the absence of external inputs and without cortical pacemakers. Simulations of larger networks (up to 350,000 neurons) demonstrated that the survival time of self-sustained activity increases exponentially with network size.
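The distinction between conductance-based and current-based synapses can be illustrated with a minimal leaky integrate-and-fire sketch. All parameter values below (time constants, reversal potentials, Poisson input rates, weights) are illustrative assumptions, not the paper's; the only point is the single line where the two synapse models differ.

```python
import random

def simulate_lif(synapse="conductance", T=200.0, dt=0.1, seed=0):
    """Euler integration of a leaky integrate-and-fire neuron driven by
    pooled Poisson input, with either conductance-based (nonlinear) or
    current-based (linear) synapses. All parameters are illustrative."""
    rng = random.Random(seed)
    g_L, C = 0.025, 0.25                 # leak conductance (uS), capacitance (nF)
    E_L, E_e, E_i = -70.0, 0.0, -80.0    # leak / excitatory / inhibitory reversal (mV)
    V_th, V_reset = -50.0, -60.0         # spike threshold and reset (mV)
    tau_s = 5.0                          # synaptic decay time constant (ms)
    rate_e, rate_i = 2.0, 1.0            # pooled input rates (spikes/ms)
    w_e, w_i = 0.002, 0.008              # inhibition-dominated weights (uS)
    g_e = g_i = 0.0
    V, spikes = E_L, 0
    for _ in range(int(T / dt)):
        if rng.random() < rate_e * dt: g_e += w_e   # excitatory arrival
        if rng.random() < rate_i * dt: g_i += w_i   # inhibitory arrival
        g_e -= g_e / tau_s * dt
        g_i -= g_i / tau_s * dt
        if synapse == "conductance":
            # nonlinear: synaptic current depends on the membrane potential
            I_syn = g_e * (E_e - V) + g_i * (E_i - V)
        else:
            # linear: driving force frozen at the resting potential
            I_syn = g_e * (E_e - E_L) + g_i * (E_i - E_L)
        V += dt / C * (g_L * (E_L - V) + I_syn)
        if V >= V_th:
            V, spikes = V_reset, spikes + 1
    return spikes, V
```

With these assumed rates, the mean synaptic conductance in steady state is roughly rate · w · τ_s ≈ 0.02 + 0.04 = 0.06 µS, so the total conductance is about 3–4× the leak conductance of 0.025 µS, in the 2–5× range the abstract describes for in vivo activity.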
Large-scale grid computing systems may provide a multitude of services, from different providers, whose quality of service will vary. Moreover, services are deployed and undeployed in the grid with no central coordination. Thus, to find the most suitable service to fulfill their needs, or the most suitable set of resources on which to deploy their services, grid users must resort to a Grid Information Service (GIS). This service allows users to submit rich queries that are normally composed of multiple attributes and range operations. The ability to execute complex searches efficiently, scalably, and reliably is a key challenge for current GIS designs. Scalability issues are normally addressed with peer-to-peer technologies. However, the more reliable peer-to-peer approaches do not cater for rich queries in a natural way, while approaches that easily support rich queries are less robust in the presence of failures. In this paper we present the design of NodeWiz, a GIS that allows multi-attribute range queries to be performed efficiently in a distributed manner, while maintaining load balance and resilience to failures.
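The core idea of a distributed multi-attribute range query can be sketched as follows: each peer owns a hyper-rectangle of the attribute space, and a query box is routed only to the peers whose regions intersect it. This flat sketch is a hypothetical simplification (NodeWiz itself splits the attribute space adaptively for load balance); all names and attributes below are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Peer:
    """A GIS peer owning a hyper-rectangle of the attribute space.
    region maps attribute name -> (low, high), half-open on high."""
    region: dict
    ads: list = field(default_factory=list)

    def owns(self, ad):
        return all(lo <= ad[a] < hi for a, (lo, hi) in self.region.items())

    def intersects(self, query):
        # query maps attribute -> (low, high); missing attributes are unconstrained
        return all(lo < self.region[a][1] and hi > self.region[a][0]
                   for a, (lo, hi) in query.items())

def publish(peers, ad):
    """Store a service advertisement at the peer that owns its attribute point."""
    for p in peers:
        if p.owns(ad):
            p.ads.append(ad)
            return p
    raise ValueError("no peer owns this advertisement")

def range_query(peers, query):
    """Route a multi-attribute range query only to peers whose region
    intersects the query box, then filter their local advertisements."""
    hits = []
    for p in peers:
        if p.intersects(query):
            hits += [ad for ad in p.ads
                     if all(lo <= ad[a] <= hi for a, (lo, hi) in query.items())]
    return hits
```

For example, with two peers splitting a `cpu` axis at 8, a query for `cpu` in [2, 10] is routed to both peers, but each filters locally so only matching advertisements are returned.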
This paper settles a question about prudent vacillatory identification of languages. Consider a scenario in which an algorithmic device M is presented with all and only the elements of a language L, and M conjectures a sequence, possibly infinite, of grammars. Three different criteria for success of M on L have been extensively investigated in formal language learning theory. If M converges to a single correct grammar for L, then the criterion of success is Gold's seminal notion of TxtEx-identification. If M converges to a finite number of correct grammars for L, then the criterion of success is called TxtFex-identification. Further, if M, after a finite number of incorrect guesses, outputs only correct grammars for L (possibly infinitely many distinct grammars), then the criterion of success is known as TxtBc-identification. A learning machine is said to be prudent according to a particular criterion of success just in case the only grammars it ever conjectures are for languages that it can learn according to that criterion. This notion was introduced by Osherson, Stob, and Weinstein with a view to investigating certain proposals for characterizing natural languages in linguistic theory. Fulk showed that prudence does not restrict TxtEx-identification, and later Kurtz and Royer showed that prudence does not restrict TxtBc-identification. This paper shows that prudence does not restrict TxtFex-identification.
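A toy example makes the TxtEx criterion concrete: the class of all finite languages is TxtEx-identifiable by the learner that conjectures exactly the set of elements it has seen so far (a standard textbook example, not from this paper). Here "grammars" are simply finite sets, and `None` plays the role of a pause symbol in the text; TxtFex and TxtBc relax the requirement of converging to one fixed conjecture.

```python
def learner(text_prefix):
    """Conjecture a grammar (here: a finite set) for the language consisting
    of exactly the elements seen so far, ignoring the pause symbol None."""
    return frozenset(x for x in text_prefix if x is not None)

def txtex_identifies(language, text):
    """Check that on this presentation the learner's final conjecture is a
    single correct grammar, in the style of Gold's TxtEx-identification."""
    conjectures = [learner(text[:i + 1]) for i in range(len(text))]
    return conjectures[-1] == frozenset(language)
```

On any text for a finite language, once every element has appeared the conjecture never changes again, which is exactly convergence to a single correct grammar.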
Wireless Personal Communications - This paper proposes and analyses the power allocation coefficient normalization for successive interference cancellation in power-domain non-orthogonal multiple...
Wireless communication networks have much data to sense, process, and transmit, so modern systems need a security mechanism tailored to these needs. An intrusion detection system (IDS) is a solution that has recently gained researchers' attention through the application of deep learning techniques. In this paper, we propose an IDS model that uses a deep learning algorithm, the conditional generative adversarial network (CGAN), enabling unsupervised learning in the model, and adds an eXtreme gradient boosting (XGBoost) classifier for faster comparison and visualization of results. The proposed method can reduce the need to deploy extra sensors for generating fake data to fool the intruder by 1.2–2.6%, as the proposed system generates this fake data itself. The parameters were selected to give optimal results without significant alterations and complications. The model learns from its dataset samples with a multiple-layer network for a refined training process. We aimed for the proposed model to improve accuracy, decrease the false detection rate, and obtain good precision on both datasets, NSL-KDD and CICIDS2017, so that it can serve as a detector for cyber intrusions. The false alarm rate of the proposed model decreases by about 1.827%.
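The evaluation criteria this abstract cites (accuracy, precision, false alarm rate) all derive from a binary confusion matrix. The helper below is a hypothetical illustration, not the paper's code; it treats label 1 as "attack" and 0 as "benign".

```python
def ids_metrics(y_true, y_pred):
    """Confusion-matrix metrics for a binary IDS (1 = attack, 0 = benign).
    The false alarm rate is FP / (FP + TN): the fraction of benign
    traffic wrongly flagged as an intrusion."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "false_alarm_rate": fp / (fp + tn) if fp + tn else 0.0,
    }
```

For instance, with ground truth `[1, 1, 0, 0, 0]` and predictions `[1, 0, 1, 0, 0]` there is one true positive, one miss, and one false alarm, giving accuracy 0.6, precision 0.5, and a false alarm rate of 1/3.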
Wireless Personal Communications - Cloud is an environment where resources are outsourced as services to cloud consumers based on their demand. The cloud providers follow a pay-as-you-go...
Wireless Personal Communications - This work aims to implement a clustering scheme to separate vehicles into a cluster that is based on various parameters, such as the total number of relay nodes,...
Detection of selfish nodes in a delay tolerant network (DTN) can sharply reduce the loss incurred in a network. The current pedigree of algorithms mainly focuses on relay nodes, records, and delivery performance; community structure and social aspects have been overlooked. Analysis of individual and social-tie preferences results in an extensive detection time and increases communication overhead. In this article, a heterogeneous DTN topology with high-power stationary nodes and mobile nodes on an accurate map of Manhattan is designed. With the increasing complexity of social ties and the diversified nature of the topology structure, there is a need for a method that can effectively capture their essence within the stipulated time. We propose a novel deep autoencoder-based nonnegative matrix factorization (DANMF) for the DTN topology. Projecting the topology of social ties onto a low-dimensional space leads to effective cluster formation. DANMF automatically learns an appropriate nonlinear mapping function from the features of the data; moreover, the inherent structure of the deep autoencoder is nonlinear and generalizes well. The membership matrices extracted from the DANMF are used to design a weighted cumulative social tie that, along with residual energy, is used to detect the network's selfish nodes. The designed model is tested on the real MIT Reality dataset, and the proficiency of the developed algorithm is verified at every step. The methods employed for social-tie extraction are NMF and DANMF. The methodology was rigorously experimented on various scenarios and improved by around 80% in the worst-case scenario of 40% of nodes turning selfish. A comprehensive comparison is made with other existing state-of-the-art methods, which are also incentive-based approaches. The developed method outperforms current methods in capturing the latent, hidden structure of the social tie.
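DANMF stacks several factorization layers into an autoencoder; its single-layer building block is plain nonnegative matrix factorization, which can be sketched with the classical Lee–Seung multiplicative updates. This is a simplified, hypothetical illustration in pure Python, not the paper's implementation; applied to a tiny block-structured social-tie adjacency matrix, the factors W and H play the role of the membership matrices mentioned above.

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def nmf(V, k, iters=500, seed=0, eps=1e-9):
    """Factor the nonnegative matrix V (n x m) as W (n x k) @ H (k x m)
    using Lee-Seung multiplicative updates, which keep all entries
    nonnegative and monotonically decrease the reconstruction error."""
    rng = random.Random(seed)
    n, m = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(k)] for _ in range(n)]
    H = [[rng.random() + 0.1 for _ in range(m)] for _ in range(k)]
    for _ in range(iters):
        WT = transpose(W)
        num, den = matmul(WT, V), matmul(matmul(WT, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)]
             for i in range(k)]
        HT = transpose(H)
        num, den = matmul(V, HT), matmul(W, matmul(H, HT))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)]
             for i in range(n)]
    return W, H

def frobenius_error(V, W, H):
    WH = matmul(W, H)
    return sum((V[i][j] - WH[i][j]) ** 2
               for i in range(len(V)) for j in range(len(V[0]))) ** 0.5
```

On a 4x4 adjacency matrix with two clean communities, a rank-2 factorization reconstructs the matrix almost exactly, and each row of W concentrates its weight on the component corresponding to that node's community.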
Cloud computing is an Information Technology deployment model built on virtualization. Task scheduling defines the set of rules for allocating tasks to a specific virtual machine in the cloud computing environment. This work addresses task scheduling challenges, in particular finding schedules with optimal performance. First, cloud computing performance under task scheduling is improved by proposing a Dynamic Weighted Round-Robin (DWRR) algorithm. The DWRR algorithm improves task scheduling performance by considering resource competencies, task priorities, and task length. Second, a heuristic algorithm called Hybrid Particle Swarm Parallel Ant Colony Optimization (HPSPACO) is proposed to solve the task execution delay problem in DWRR-based task scheduling. Finally, a fuzzy logic system is designed for HPSPACO that extends task scheduling in the cloud environment: a fuzzy method updates the inertia weight of the PSO and the pheromone trails of the PACO. Thus, the proposed Fuzzy Hybrid Particle Swarm Parallel Ant Colony Optimization achieves improved task scheduling by minimizing execution and waiting time while maximizing system throughput and resource utilization.
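The weighted round-robin idea behind DWRR can be sketched as follows: each VM receives a weight proportional to its capacity, tasks are ordered by priority (breaking ties with shortest length first), and tasks are dealt out so a VM with twice the capacity receives twice as many per round. This is a hypothetical simplification under assumed inputs, not the paper's exact DWRR rules.

```python
def dwrr_assign(tasks, vms):
    """tasks: list of (task_id, priority, length); vms: dict vm -> capacity (MIPS).
    Higher priority is scheduled first; among equal priorities, shorter
    tasks go first. Weights are capacities normalized by the smallest VM."""
    base = min(vms.values())
    weights = {vm: max(1, round(cap / base)) for vm, cap in vms.items()}
    # one slot per unit of weight, cycled round-robin
    slots = [vm for vm, w in weights.items() for _ in range(w)]
    order = sorted(tasks, key=lambda t: (-t[1], t[2]))
    assignment = {vm: [] for vm in vms}
    for i, task in enumerate(order):
        assignment[slots[i % len(slots)]].append(task[0])
    return assignment
```

For example, with `vms = {"vm1": 1000, "vm2": 2000}` the slot cycle is `vm1, vm2, vm2`, so vm2 receives two tasks for every one that vm1 receives, and the highest-priority task is always placed first.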
International Journal of Computer Vision - Machine learning models are known to perpetuate and even amplify the biases present in the data. However, these data biases frequently do not become...