The need to protect the environment and biodiversity and to safeguard public health requires the development of timely and reliable methods for identifying particularly dangerous invasive species before they become regulators of ecosystems. Such species often appear morphologically similar despite their strong biological differences, which complicates their identification. Additionally, locating the broader area of dispersion and establishment of invasive species is of critical importance for taking proper management measures. The aim of this research is to create an advanced computational intelligence system for the automatic recognition of invasive or other unknown species. The identification is performed through the analysis of environmental DNA using machine learning methods. More specifically, this research effort proposes a hybrid bio-inspired computational intelligence detection approach. It employs extreme learning machines combined with an evolving Izhikevich spiking neuron model for the automated identification of the invasive fish species “Lagocephalus sceleratus”, which is extremely dangerous to human health.
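The extreme learning machine component can be illustrated with a minimal sketch: input weights and biases are drawn at random and frozen, and only the output weights are solved by least squares. The feature matrix below is a synthetic stand-in for numerically encoded eDNA features; the paper's actual pipeline (and its spiking-neuron component) is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y_onehot, n_hidden=50):
    """ELM training: random hidden layer, output weights by pseudoinverse."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # fixed random input weights
    b = rng.normal(size=n_hidden)                 # fixed random biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ y_onehot           # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Toy data: two well-separated clusters standing in for "invasive"
# vs "non-invasive" feature vectors (illustrative only).
X = np.vstack([rng.normal(0, 1, (40, 4)), rng.normal(4, 1, (40, 4))])
y = np.array([0] * 40 + [1] * 40)
W, b, beta = elm_train(X, np.eye(2)[y])
acc = (elm_predict(X, W, b, beta) == y).mean()
```

Because only the linear output layer is trained, fitting reduces to a single pseudoinverse, which is what makes ELMs attractive for fast classification.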
The analysis of air quality and the continuous monitoring of air pollution levels are important subjects of environmental science and research. This problem has a real impact on human health and quality of life. Determining the conditions that favor high concentrations of pollutants, and above all forecasting such cases in a timely manner, is crucial, as it enables civil protection authorities to impose specific protection and prevention measures. This research paper discusses an innovative threefold intelligent hybrid system of combined machine learning algorithms (henceforth HISYCOL). First, it deals with the correlation of the conditions under which high pollutant concentrations emerge. Second, it proposes and presents an ensemble system that combines machine learning algorithms capable of forecasting the values of air pollutants. What gives this modeling effort its hybrid nature is the fact that it uses clustered datasets: the approach improves the accuracy of existing forecasting models by using unsupervised machine learning to cluster the data vectors and uncover hidden knowledge. Finally, it employs a Mamdani fuzzy inference system for each air pollutant in order to forecast its concentrations even more effectively.
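The Mamdani step can be sketched as follows, assuming a single hypothetical pollutant with hand-picked triangular membership functions and two rules; the paper's actual rule base and per-pollutant tuning are not given here. Mamdani inference clips each rule's consequent fuzzy set by the rule's firing strength, aggregates with max, and defuzzifies by centroid.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b (requires a < b < c)."""
    x = np.asarray(x, dtype=float)
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

def mamdani_forecast(level, universe=np.linspace(0.0, 100.0, 1001)):
    # Antecedent firing strengths for the current pollutant level.
    w_low  = tri(level, -1.0, 0.0, 60.0)      # rule 1: "level is low"
    w_high = tri(level, 40.0, 100.0, 101.0)   # rule 2: "level is high"
    # Min implication clips the consequents; max aggregates the rules.
    out_low  = np.minimum(w_low,  tri(universe, -1.0, 10.0, 50.0))
    out_high = np.minimum(w_high, tri(universe, 50.0, 90.0, 101.0))
    agg = np.maximum(out_low, out_high)
    # Centroid defuzzification yields a crisp forecast.
    return float((universe * agg).sum() / agg.sum())
```

A low current level yields a low crisp forecast and a high level a high one, e.g. `mamdani_forecast(20)` falls well below `mamdani_forecast(90)`.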
The existence of good probabilistic models for the job arrival process and the delay components introduced at different stages
of job processing in a Grid environment is important for the improved understanding of the Grid computing concept. In this
study, we present a thorough analysis of the job arrival process in the EGEE infrastructure and of the time durations a job
spends at different states in the EGEE environment. We define four delay components of the total job delay and model each
component separately. We observe that the job inter-arrival times at the Grid level can be adequately modelled by a rounded
exponential distribution, while the total job delay (from the time it is generated until the time it completes execution)
is dominated by the computing element’s register and queuing times and the worker node’s execution times. Further, we evaluate
the efficiency of the EGEE environment by comparing the job total delay performance with that of a hypothetical ideal super-cluster
and conclude that we would obtain similar performance if we submitted the same workload to a super-cluster of size equal to
34% of the total average number of CPUs participating in the EGEE infrastructure. We also analyze the job inter-arrival times,
the CE’s queuing times, the WN’s execution times, and the data sizes exchanged at the kallisto.hellasgrid.gr cluster, which is a node in the EGEE infrastructure. In contrast to the Grid level, we find that at the cluster level the job
arrival process exhibits self-similarity/long-range dependence. Finally, we propose simple and intuitive models for the job
arrival process and the execution times at the cluster level.
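The rounded-exponential modelling step above can be sketched on synthetic data: fit an exponential rate by maximum likelihood (the reciprocal of the sample mean) and round the sampled inter-arrival times to the timestamp granularity. The 1-second granularity and the synthetic trace are assumptions for illustration; the EGEE traces themselves are not used.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic job inter-arrival times, mean 30 s (stand-in for a real trace).
inter_arrivals = rng.exponential(scale=30.0, size=10_000)

# MLE of the exponential rate is 1 / sample mean.
lam_hat = 1.0 / inter_arrivals.mean()

def sample_rounded_exponential(n, lam, granularity=1.0):
    """Exponential inter-arrival times rounded to the clock granularity."""
    return np.round(rng.exponential(1.0 / lam, n) / granularity) * granularity

model = sample_rounded_exponential(10_000, lam_hat)
```

The rounding reproduces the discretization that logging at whole-second resolution imposes on an otherwise continuous arrival process.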
Content distribution networks (CDNs) improve scalability and reliability by replicating content to the “edge” of the Internet.
Apart from the pure networking issues of the CDNs relevant to the establishment of the infrastructure, some very crucial data
management issues must be resolved to exploit the full potential of CDNs to reduce the “last mile” latencies. A very important
issue is the selection of the content to be prefetched to the CDN servers. All the approaches developed so far assume the
existence of adequate content popularity statistics to drive the prefetch decisions. Such information, though, is not always
available, or it is extremely volatile, rendering such methods problematic. To address this issue, we develop self-adaptive
techniques to select the outsourced content in a CDN infrastructure that require no a priori knowledge of request statistics.
We identify clusters of “correlated” Web pages in a site, called Web site communities, and make these communities the basic outsourcing unit. Through a detailed simulation environment, using both real and synthetic
data, we show that the proposed techniques are very robust and effective in reducing the user-perceived latency, performing
very close to an infeasible, off-line policy, which has full knowledge of the content popularity.
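The community-identification idea can be sketched without any popularity statistics: pages that are repeatedly co-requested in the same session are linked, and the connected components of that link graph become the outsourcing units. The session data and the co-occurrence threshold below are illustrative assumptions, not the paper's actual community-detection algorithm.

```python
from collections import defaultdict

sessions = [
    ["/home", "/news", "/news/a"],
    ["/home", "/news", "/news/b"],
    ["/shop", "/shop/cart"],
    ["/shop", "/shop/cart", "/shop/pay"],
]

# Count within-session co-occurrences of page pairs.
co = defaultdict(int)
for s in sessions:
    for i in range(len(s)):
        for j in range(i + 1, len(s)):
            co[frozenset((s[i], s[j]))] += 1

# Union-find over pages whose co-occurrence meets the threshold.
parent = {}
def find(p):
    parent.setdefault(p, p)
    while parent[p] != p:
        parent[p] = parent[parent[p]]   # path halving
        p = parent[p]
    return p

def union(a, b):
    parent[find(a)] = find(b)

for pair, count in co.items():
    if count >= 2:                      # illustrative correlation threshold
        a, b = tuple(pair)
        union(a, b)

# Group pages by component root: each group is one "community".
communities = defaultdict(set)
for page in {p for s in sessions for p in s}:
    communities[find(page)].add(page)
```

Here the news-related and shop-related pages fall into separate communities, so each community can be prefetched (or purged) as a single unit.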
Smart card technology has evolved over the last few years following notable improvements in the underlying hardware and software platforms. Advanced smart card microprocessors, along with robust smart card operating systems and platforms, contribute towards a broader acceptance of the technology. These improvements have eliminated some of the traditional smart card security concerns. However, researchers and hackers are constantly looking for new issues and vulnerabilities. In this article we provide a brief overview of the main smart card attack categories and their corresponding countermeasures. We also provide examples of well-documented attacks on systems that use smart card technology (e.g. satellite TV, EMV, proximity identification) in an attempt to highlight the importance of the security of the overall system rather than just the smart card.
We propose a probabilistic variant of the pi-calculus as a framework to specify randomized security protocols and their intended properties. In order to express and verify the correctness of the protocols, we develop a probabilistic version of the testing semantics. We then illustrate these concepts on an extended example: the Partial Secret Exchange, a protocol which uses a randomized primitive, the Oblivious Transfer, to achieve fairness of information exchange between two parties.
Effectively exploiting available communication bandwidth and client resources is vital in wireless mobile environments. One technique for doing so is client-side data caching, which helps reduce latency and conserve network resources. The SliCache generic self-tunable cache-replacement policy addresses these issues by using intelligent slicing of the cache space and novel methods for selecting which objects to purge. Performance evaluations show that SliCache improves mobile clients' access to Web objects compared to other common policies.
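The slicing idea can be sketched as a cache whose budget is partitioned into per-type slices, with eviction confined to the slice of the incoming object's type. The slice sizes and the plain-LRU victim rule below are simplifying assumptions; SliCache's actual self-tuning purge heuristics are richer.

```python
from collections import OrderedDict

class SlicedCache:
    """Per-type cache slices; eviction never crosses slice boundaries."""

    def __init__(self, slice_sizes):
        # e.g. {"image": 2, "html": 2}; each slice is its own LRU list.
        self.slices = {t: OrderedDict() for t in slice_sizes}
        self.capacity = dict(slice_sizes)

    def get(self, obj_type, key):
        s = self.slices[obj_type]
        if key in s:
            s.move_to_end(key)            # refresh recency on a hit
            return s[key]
        return None

    def put(self, obj_type, key, value):
        s = self.slices[obj_type]
        if key in s:
            s.move_to_end(key)
        elif len(s) >= self.capacity[obj_type]:
            s.popitem(last=False)         # evict the LRU victim of this slice only
        s[key] = value

cache = SlicedCache({"image": 2, "html": 2})
cache.put("image", "a.png", b"...")
cache.put("image", "b.png", b"...")
cache.put("image", "c.png", b"...")       # evicts a.png from the image slice
```

Confining eviction to a slice prevents a burst of one object type (say, large images) from flushing other types out of the cache, which is the intuition behind slicing the cache space.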