Multimedia Tools and Applications - This paper suggests an IoT-based smart farming system along with an efficient prediction method called WPART, based on machine learning techniques, to predict crop...
Automated techniques for Arabic content recognition are at an early stage compared with their counterparts for Latin and Chinese content recognition. There is a wealth of handwritten Arabic documents available in libraries, data centers, museums, and offices. Digitizing these documents makes it possible to (1) preserve and transfer the country’s history electronically, (2) save physical storage space, (3) handle the documents properly, and (4) enhance information retrieval through the Internet and other media. Arabic handwritten character recognition (AHCR) systems face several challenges, including the unlimited variation in human handwriting and the lack of large public databases. The current study addresses the segmentation and recognition phases. The text segmentation challenges and a set of solutions for each challenge are presented. A convolutional neural network (CNN), a deep learning approach, is used in the recognition phase. CNNs yield significant improvements over other machine learning classification algorithms and enable automatic feature extraction from images. Fourteen native CNN architectures are proposed after a set of trial-and-error experiments. They are trained and tested on the HMBD database, which contains 54,115 handwritten Arabic characters. Experiments are performed on the native CNN architectures, and the best reported testing accuracy is 91.96%. A transfer learning (TL) and genetic algorithm (GA) approach named “HMB-AHCR-DLGA” is suggested to optimize the training parameters and hyperparameters in the recognition phase. The pre-trained CNN models VGG16, VGG19, and MobileNetV2 are used in the latter approach. Five optimization experiments are performed and the best combinations are reported. The highest reported testing accuracy is 92.88%.
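The automatic feature extraction that the abstract above attributes to CNNs reduces, at its core, to convolution followed by a nonlinearity. A minimal NumPy sketch of that single core operation (the toy image, kernel values, and helper names are illustrative, not from the paper):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Valid (no-padding) 2-D cross-correlation, the core op of a CNN layer."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear activation applied element-wise."""
    return np.maximum(x, 0.0)

# Toy 5x5 "character stroke" image and a 3x3 vertical-edge kernel.
img = np.array([[0, 0, 1, 0, 0],
                [0, 0, 1, 0, 0],
                [0, 0, 1, 0, 0],
                [0, 0, 1, 0, 0],
                [0, 0, 1, 0, 0]], dtype=float)
kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]], dtype=float)
feature_map = relu(conv2d_valid(img, kernel))
```

Stacking many such filter responses, interleaved with pooling, is what lets the proposed architectures learn character features without hand-crafted descriptors.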
The fundamental challenge in opportunistic networking, regardless of the application, is when and how to forward a message. Rank-based forwarding techniques currently represent one of the most promising methods for addressing this message forwarding challenge. While these techniques have demonstrated great efficiency in performance, they do not address the rising concern of fairness amongst the various nodes in the network. Higher-ranked nodes typically carry the largest burden in delivering messages, which creates a high potential for dissatisfaction amongst them. In this paper, we adopt a real-trace-driven approach to study and analyze the trade-offs between efficiency, cost, and fairness of rank-based forwarding techniques in mobile opportunistic networks. Our work comprises three major contributions. First, we quantitatively analyze the trade-off between fair and efficient environments. Second, we demonstrate how fairness coupled with efficiency can be achieved based on real mobility traces. Third, we propose FOG, a real-time distributed framework to ensure an efficiency–fairness trade-off using local information. Our framework, FOG, enables state-of-the-art rank-based opportunistic forwarding algorithms to ensure a better fairness–efficiency trade-off while maintaining low overhead. Within FOG, we implement two real-time distributed fairness algorithms: the Proximity Fairness Algorithm (PFA) and the Message Context Fairness Algorithm (MCFA). Our data-driven experiments and analysis show that mobile opportunistic communication between users may fail in the absence of fairness toward participating high-ranked nodes, while absolutely fair treatment of all users yields inefficient communication performance. Finally, our analysis shows that FOG-based algorithms ensure relative equality in the distribution of resource usage among neighbor nodes while keeping the success rate and cost performance near optimal.
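A fairness-aware rank-based forwarding rule of the kind discussed above can be sketched as follows. This is a simplified illustration, not the paper's PFA or MCFA algorithms; the load-cap guard and all names are assumptions:

```python
def should_forward(my_rank, neighbor_rank, neighbor_load, load_cap):
    """Rank-based rule with a simple fairness guard: forward only to a
    higher-ranked neighbor whose recent delivery load is under a cap."""
    return neighbor_rank > my_rank and neighbor_load < load_cap

def pick_relay(my_rank, neighbors, load_cap):
    """neighbors: list of (node_id, rank, load) tuples.
    Pick the highest-ranked eligible neighbor, or None if the
    fairness guard excludes them all."""
    eligible = [n for n in neighbors
                if should_forward(my_rank, n[1], n[2], load_cap)]
    return max(eligible, key=lambda n: n[1])[0] if eligible else None

# Node "b" is the highest-ranked, but its load exceeds the cap,
# so the burden shifts to the next-best eligible node.
relay = pick_relay(2, [("a", 5, 1), ("b", 9, 10), ("c", 7, 2)], load_cap=5)
```

Without the guard, node "b" would absorb every message; with it, delivery burden spreads across high-ranked nodes, which is the trade-off the paper quantifies.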
E-spline (exponential spline) polynomials represent the best smooth transition between continuous and discrete domains. As they are constructed from the convolution of exponential segments, there are many degrees of freedom with which to choose the E-spline best suited to a specific application. In this paper, the parameters of these E-splines were optimally chosen to enhance the performance of image zooming and interpolation schemes. The proposed technique is based on minimizing the total variation function of the detail coefficients of the E-spline-based wavelet decomposition. In zooming applications, the quality of interpolated images is further improved and sharpened by applying an independent component analysis (ICA) technique, in order to remove any dependency. Illustrative examples are given to verify the image enhancement of the proposed E-spline scheme compared with existing approaches.
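The total variation function that the technique above minimizes over the wavelet detail coefficients can be computed as below. This is a minimal sketch of the common anisotropic variant, assuming the coefficients arrive as a 2-D array; the paper's exact definition may differ:

```python
import numpy as np

def total_variation(coeffs):
    """Anisotropic total variation of a 2-D coefficient array:
    the sum of absolute differences between horizontal and
    vertical neighbors."""
    c = np.asarray(coeffs, dtype=float)
    tv_h = np.abs(np.diff(c, axis=1)).sum()  # horizontal neighbor differences
    tv_v = np.abs(np.diff(c, axis=0)).sum()  # vertical neighbor differences
    return tv_h + tv_v

# A checkerboard pattern has maximal variation for its size.
tv = total_variation([[0, 1], [1, 0]])
```

Smaller total variation in the detail subbands corresponds to fewer spurious oscillations, which is why it serves as the objective when tuning the E-spline parameters.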
Despite various industry reports about the failure rates of software projects, there's still uncertainty about the actual figures. Researchers performed a global Web survey of IT departments in 2005 and 2007. The results suggest that the software crisis is perhaps exaggerated and that most software projects deliver. However, the overall project failure rate, including cancelled and completed but poorly performing projects, remains arguably high for an applied discipline.
In distributed data mining, adopting a flat node distribution model can limit scalability. To address the problems of modularity, flexibility, and scalability, we propose a Hierarchically-distributed Peer-to-Peer (HP2PC) architecture and clustering algorithm. The architecture is based on a multi-layer overlay network of peer neighborhoods. Supernodes, which act as representatives of neighborhoods, are recursively grouped to form higher-level neighborhoods. Within a certain level of the hierarchy, peers cooperate within their respective neighborhoods to perform P2P clustering. Using this model, we can partition the clustering problem in a modular way across neighborhoods, solve each part individually using a distributed K-means variant, then successively combine clusterings up the hierarchy, where increasingly more global solutions are computed. In addition, for document clustering applications, we summarize the distributed document clusters using a distributed keyphrase extraction algorithm, thus providing interpretation of the clusters. Results show decent speedup, reaching 165 times faster than centralized clustering for a 250-node simulated network, with clustering quality comparable to the centralized approach. We also provide a comparison to the P2P K-means algorithm and show that HP2PC accuracy is better for typical hierarchy heights. Results for distributed cluster summarization match those of their centralized counterparts with up to 88% accuracy.
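The step of combining clusterings up the hierarchy can be illustrated by a supernode merging its neighborhood's K-means results via a size-weighted centroid average. This is a simplified sketch that assumes clusters are already aligned across peers, which the actual HP2PC algorithm must arrange:

```python
import numpy as np

def merge_centroids(local_results):
    """Combine per-peer K-means results at a supernode.

    local_results: list of (centroids, counts) pairs, where centroids is a
    [k, d] array and counts is a [k] array of cluster sizes. Returns the
    size-weighted average centroid for each of the k aligned clusters.
    """
    k, d = local_results[0][0].shape
    num = np.zeros((k, d))
    den = np.zeros(k)
    for centroids, counts in local_results:
        num += centroids * counts[:, None]  # weight each centroid by its size
        den += counts
    return num / den[:, None]

# Two peers report one 1-D cluster each: centroid 0.0 (2 points) and 3.0 (1 point).
merged = merge_centroids([(np.array([[0.0]]), np.array([2.0])),
                          (np.array([[3.0]]), np.array([1.0]))])
```

The weighting matters: it makes the supernode's centroid equal to what a centralized K-means step would compute over the union of the peers' points, which is why quality stays comparable to the centralized approach.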
The main goal of this study is to apply a scientific, quantitative approach to the investigation of contextual fit. This is approached mathematically within the framework of cognitive science and research on categorization and prototypes. Two experiments investigated two leading mathematical-cognitive approaches for explaining people’s judgment of the contextual fit of a new building with an architectural/urban context: the prototype approach and the feature frequency approach. The basic concept is that people represent the built environment via architectural prototypes and/or frequencies of encountered architectural features. In the first experiment, a group of twelve participants performed rank-order tasks on artificially created architectural patterns for the purpose of psychological scaling. Perceptual distances among all patterns were mathematically determined. In the second experiment, three groups of architectural patterns were constructed to represent assumed architectural contexts. The prototype of each context was mathematically determined according to the prototype cognitive model, based on the distances calculated in the first experiment. Fifty-six students participated in the main experiment, in which they rank-ordered a group of fifteen architectural patterns in terms of contextual fit to each of the three architectural contexts. Participants’ rank-order data for the fifteen patterns were regressed on both the perceptual distances from the prototypes and the numbers of features shared with each architectural context. Results indicated that both the prototype and feature frequency approaches accounted for significant portions of participants’ judgments. However, participants tended to prefer one approach to the other according to context composition. The results have implications both for research on utilizing cognitive-mathematical models in architectural research and for urban design guidelines and control.
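The regression described above (rank orders regressed on prototype distances and shared-feature counts) can be sketched with ordinary least squares. The data below are synthetic, purely to illustrate the model form; the variable names are assumptions:

```python
import numpy as np

def fit_contextual_fit(ranks, proto_dist, shared_feats):
    """Regress contextual-fit rank orders on (a) perceptual distance from the
    context prototype and (b) number of features shared with the context,
    via ordinary least squares. Returns [intercept, distance weight,
    feature-count weight]."""
    X = np.column_stack([np.ones_like(proto_dist, dtype=float),
                         proto_dist, shared_feats])
    beta, *_ = np.linalg.lstsq(X, ranks, rcond=None)
    return beta

# Synthetic check: ranks generated exactly as 1 + 2*distance - 0.5*features,
# so OLS should recover those coefficients.
dist = np.array([0.0, 1.0, 2.0, 3.0])
feats = np.array([4.0, 2.0, 0.0, 2.0])
ranks = 1.0 + 2.0 * dist - 0.5 * feats
beta = fit_contextual_fit(ranks, dist, feats)
```

Comparing the two predictors' standardized weights across contexts is what lets the study say which cognitive model dominates for a given context composition.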
Nowadays, tandem structures have become a valuable competitor to conventional silicon solar cells, especially perovskite-on-silicon, as metal-halide perovskites surpass Si with tunable bandgaps, a high absorption coefficient, and low deposition and preparation costs. This leads to a remarkable enhancement in the overall efficiency of the whole cell and its characteristics. Consequently, it expands the use of photovoltaic technology to various fields of application, not only under the conventional outdoor light-source spectrum, i.e., AM1.5G, but also under indoor artificial light sources with broadband intensity values, such as Internet of Things (IoT) applications, to name a few. We introduce a numerical model to analyze perovskite/Si tandem cells (PSSTCs) using both crystalline silicon (c-Si) and hydrogenated amorphous silicon (a-Si:H) base cells, experimentally validated. All proposed layers are studied via J-V characteristics and energy band diagrams under AM1.5G using SCAPS-1D software version 3.7.7. Thereupon, the proposed architectures were tested under various artificial lighting spectra. The proposed Li4Ti5O12/CsPbCl3/MAPbBr3/CH3NH3PbI3/Si structures recorded a maximum power conversion efficiency (PCE) of 25.25% for c-Si and 17.02% for a-Si:H, a nearly 7% enhancement over the bare Si cell in both cases.
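The power conversion efficiency figures quoted above follow the standard definition PCE = P_max / P_in, where P_in is 100 mW/cm² for AM1.5G. A minimal sketch with illustrative maximum-power-point values (not the paper's reported J-V data):

```python
def power_conversion_efficiency(j_mpp, v_mpp, p_in):
    """PCE in percent from maximum-power-point values.

    j_mpp: current density at the maximum power point, mA/cm^2
    v_mpp: voltage at the maximum power point, V
    p_in:  incident power density, mW/cm^2 (100 for AM1.5G)
    """
    return 100.0 * j_mpp * v_mpp / p_in

# Illustrative values: 20 mA/cm^2 at 1.25 V under AM1.5G.
pce = power_conversion_efficiency(20.0, 1.25, 100.0)
```

Under indoor spectra, p_in drops by orders of magnitude, which is why the same stack must be re-evaluated under each artificial lighting spectrum rather than extrapolated from its AM1.5G figure.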
With the increased advancement of smart industries, cybersecurity has become a vital growth factor in the success of industrial transformation. The Industrial Internet of Things (IIoT), or Industry 4.0, has revolutionized the concepts of manufacturing and production altogether. In Industry 4.0, powerful Intrusion Detection Systems (IDS) play a significant role in ensuring network security. Though various intrusion detection techniques have been developed so far, it remains challenging to protect the intricate data of networks, because conventional Machine Learning (ML) approaches are inadequate for the demands of dynamic IIoT networks. Deep Learning (DL) methods, in contrast, can be employed to identify unknown intrusions. Therefore, the current study proposes a Hunger Games Search Optimization with Deep Learning-Driven Intrusion Detection (HGSODL-ID) model for the IIoT environment. The presented HGSODL-ID model exploits a linear normalization approach to transform the input data into a useful format. The HGSO algorithm is employed for Feature Selection (HGSO-FS) to mitigate the curse of dimensionality. Moreover, Sparrow Search Optimization (SSO) is utilized with a Graph Convolutional Network (GCN) to classify and identify intrusions in the network. Finally, the SSO technique is exploited to fine-tune the hyper-parameters involved in the GCN model. The proposed HGSODL-ID model was experimentally validated using a benchmark dataset, and the results confirmed the superiority of the proposed HGSODL-ID method over recent approaches.
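The "linear normalization approach" used to transform the input data is commonly read as per-feature min-max scaling; a minimal sketch under that assumption (the guard for constant columns is my addition):

```python
import numpy as np

def linear_normalize(X):
    """Min-max linear normalization per feature column, mapping each
    feature to [0, 1]. Constant columns are mapped to 0 rather than
    dividing by zero."""
    X = np.asarray(X, dtype=float)
    mn, mx = X.min(axis=0), X.max(axis=0)
    rng = np.where(mx > mn, mx - mn, 1.0)  # guard against zero range
    return (X - mn) / rng

# Two features: one spanning 0..10, one constant.
out = linear_normalize([[0, 10], [5, 10], [10, 10]])
```

Putting all features on a common scale before HGSO-based feature selection prevents large-magnitude raw features (e.g., byte counts vs. flag bits in network traffic) from dominating the search.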