Multimedia Tools and Applications - The advancement in communication and computation technologies has paved the way for connecting a large number of heterogeneous devices to offer specified services....
Automated techniques for Arabic content recognition are at an early stage compared with their counterparts for Latin and Chinese content recognition. A large volume of handwritten Arabic documents is available in libraries, data centers, museums, and offices. Digitizing these documents makes it possible to (1) preserve and transfer the country's history electronically, (2) save physical storage space, (3) handle the documents properly, and (4) enhance information retrieval through the Internet and other media. Arabic handwritten character recognition (AHCR) systems face several challenges, including the unlimited variations in human handwriting and the lack of large, public databases. The current study addresses the segmentation and recognition phases. The text segmentation challenges and a set of solutions for each challenge are presented. A convolutional neural network (CNN), a deep learning approach, is used in the recognition phase. CNNs yield significant improvements over other machine learning classification algorithms and automate the extraction of features from images. Fourteen native CNN architectures are proposed after a set of trial-and-error experiments. They are trained and tested on the HMBD database, which contains 54,115 handwritten Arabic characters. Experiments are performed on the native CNN architectures, and the best reported testing accuracy is 91.96%. A transfer learning (TL) and genetic algorithm (GA) approach named “HMB-AHCR-DLGA” is suggested to optimize the training parameters and hyperparameters in the recognition phase. The pre-trained CNN models (VGG16, VGG19, and MobileNetV2) are used in the latter approach. Five optimization experiments are performed, and the best combinations are reported. The highest reported testing accuracy is 92.88%.
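As a rough illustration of the recognition phase, the following is a minimal Keras sketch of a small CNN classifier for handwritten Arabic characters. The layer sizes, the 28-class output, and the 32×32 grayscale input are illustrative assumptions, not the paper's actual 14 native architectures or the HMBD preprocessing.

```python
# Minimal sketch of a small CNN for handwritten Arabic character
# classification. Architecture, class count, and input shape are
# assumptions for illustration, not the paper's exact models.
from tensorflow.keras import layers, models

NUM_CLASSES = 28           # assumption: one class per base Arabic letter
INPUT_SHAPE = (32, 32, 1)  # assumption: grayscale character crops

model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=INPUT_SHAPE),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dropout(0.5),   # regularize against handwriting variation
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

For the transfer-learning variant described above, the convolutional base would instead come from a pre-trained model such as MobileNetV2, with only the classification head retrained.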
This article investigates the prediction of the crack growth angle of an existing internal crack under mixed-mode loading at the crack tip for an unfilled ethylene propylene diene terpolymer rubber (EPDM). To realize mixed-mode loading, the cracks of the uniaxially loaded specimens were oriented at different angles to the loading direction. The strain energy density factor was used as a potential criterion for determining the crack growth angle. The strain energy density factor was determined by simulation in Abaqus. The second-order Ogden model was used to describe the rubber-like material behavior. The relative local minimum of the strain energy density factor yields the possible growth angle. The experimental investigations show that the initial cracks grow orthogonally to the loading direction for the different crack orientation angles. For the crack oriented parallel to the loading direction, crack growth was observed because the strong stretching of the specimen caused pronounced necking in the crack region. The crack growth for the remaining crack orientation angles was induced by shear loading at the crack tip. The predicted angles for the different crack orientations show very good agreement with the measured crack growth angles.
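A minimal sketch of the criterion step, assuming the strain energy density factor S has already been sampled over candidate angles around the crack tip (e.g., exported from the Abaqus runs); the S(θ) curve below is made up for illustration:

```python
# Energy density criterion sketch: the predicted crack growth angle is
# the angle at the relative local minimum of the strain energy density
# factor S(theta). The sampled curve here is hypothetical.
import numpy as np

theta = np.linspace(-90.0, 90.0, 181)         # candidate angles, degrees
S = 0.5 + 0.02 * (theta - 10.0) ** 2 / 100.0  # hypothetical S(theta) samples

growth_angle = theta[np.argmin(S)]            # angle minimizing S
print(f"predicted crack growth angle: {growth_angle:.1f} deg")
```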
A major limitation to the use of epoxy thermosets in engineering applications is their sudden brittle failure. In the present study, a dipropylene glycol dibenzoate (DPGDB)-based plasticizer is used to modify a diglycidyl ether of bisphenol A (DGEBA)-based epoxy resin system via a simple blending technique. A bio-based epoxidized linseed oil was also used to modify the epoxy resin system and compared with the DPGDB-modified resin. For the DPGDB-modified resin, the storage modulus and loss modulus of the epoxy system modified with 10% plasticizer increased by 7.54% and 12.24%, respectively. The primary mechanism responsible for this behavior is improved crosslinking density. With 5% plasticizer loading, the flexural strength increased by 21%. Strain at failure improved by 312.74% at 10% plasticizer loading while the mechanical strength was preserved. The DPGDB-based modification was found to outperform the epoxidized linseed oil modification.
State-of-the-art distributed RDF systems partition data across multiple computer nodes (workers). Some systems perform cheap hash partitioning, which may result in expensive query evaluation. Others try to minimize inter-node communication, which requires an expensive data preprocessing phase, leading to a high startup cost. Apriori knowledge of the query workload has also been used to create partitions, which, however, are static and do not adapt to workload changes. In this paper, we propose AdPart, a distributed RDF system, which addresses the shortcomings of previous work. First, AdPart applies lightweight partitioning on the initial data, which distributes triples by hashing on their subjects; this renders its startup overhead low. At the same time, the locality-aware query optimizer of AdPart takes full advantage of the partitioning to (1) support the fully parallel processing of join patterns on subjects and (2) minimize data communication for general queries by applying hash distribution of intermediate results instead of broadcasting, wherever possible. Second, AdPart monitors the data access patterns and dynamically redistributes and replicates the instances of the most frequent ones among workers. As a result, the communication cost for future queries is drastically reduced or even eliminated. To control replication, AdPart implements an eviction policy for the redistributed patterns. Our experiments with synthetic and real data verify that AdPart: (1) starts faster than all existing systems; (2) processes thousands of queries before other systems come online; and (3) gracefully adapts to the query load, being able to evaluate queries on billion-scale RDF data in under a second.
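The following is a minimal sketch of the lightweight subject-hash partitioning idea: triples are assigned to workers by hashing their subject, so all triples sharing a subject land on the same worker and subject-subject joins can run fully in parallel. The worker count and sample triples are illustrative, and this is not AdPart's actual implementation.

```python
# Sketch of subject-hash partitioning of RDF triples across workers.
# A real distributed system would use a stable hash function that is
# consistent across nodes (Python's built-in hash is salted per run).
from collections import defaultdict

NUM_WORKERS = 4  # assumption for illustration

def partition(triples):
    workers = defaultdict(list)
    for s, p, o in triples:
        # same subject -> same worker, enabling parallel subject joins
        workers[hash(s) % NUM_WORKERS].append((s, p, o))
    return workers

triples = [
    ("ex:alice", "ex:knows",   "ex:bob"),
    ("ex:alice", "ex:worksAt", "ex:acme"),
    ("ex:bob",   "ex:knows",   "ex:carol"),
]
for worker_id, part in partition(triples).items():
    print(worker_id, part)
```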
In the context of information retrieval systems (IRS) and the use of ontologies for indexing documents and queries, this paper proposes and evaluates the contribution of this approach applied to Arabic texts. To do so, we indexed a corpus of Arabic text using Arabic WordNet. Word-sense disambiguation was performed by applying the Lesk algorithm. The results of our experiments allowed us to assess the contribution of this approach to IRS for Arabic texts.
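For reference, a simplified Lesk sketch: choose the sense whose gloss overlaps most with the surrounding context. NLTK ships a variant as nltk.wsd.lesk; the hand-rolled version below only shows the idea, and the tiny English gloss inventory is a made-up stand-in for Arabic WordNet.

```python
# Simplified Lesk word-sense disambiguation: pick the sense whose
# gloss shares the most words with the context.
def lesk(context_words, senses):
    """senses: mapping sense_id -> list of gloss words."""
    context = set(context_words)
    return max(senses, key=lambda s: len(context & set(senses[s])))

senses = {
    "bank/finance": ["money", "deposit", "institution"],
    "bank/river":   ["river", "slope", "water"],
}
print(lesk(["deposit", "money", "account"], senses))  # -> bank/finance
```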
Use case modeling is a popular technique for documenting the functional requirements of software systems. Refactoring is the process of enhancing the structure of a software artifact without changing its intended behavior. Refactoring, which was first introduced for source code, has been extended to use case models. Antipatterns are low-quality solutions to commonly occurring design problems. The presence of antipatterns in a use case model is likely to propagate defects to other software artifacts. Therefore, detection and refactoring of antipatterns in use case models is crucial for ensuring the overall quality of a software system. Model transformation can greatly ease several software development activities, including model refactoring. In this paper, a model transformation approach is proposed for improving the quality of use case models. Model transformations that detect antipattern instances in a given use case model and refactor them appropriately are defined and implemented. The practicability of the approach is demonstrated by applying it to a case study of a biodiversity database system. The results show that model transformations can efficiently improve the quality of use case models while saving time and effort.
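To make the detection step concrete, here is a toy sketch of antipattern detection on a simple use case model. The model representation and the "dangling actor" antipattern (an actor associated with no use case) are illustrative assumptions; the paper defines its detections as model transformations, not Python code.

```python
# Toy use case model and a detector for one hypothetical antipattern:
# actors that participate in no use case ("dangling actors").
from dataclasses import dataclass, field

@dataclass
class UseCaseModel:
    actors: set
    use_cases: set
    associations: set = field(default_factory=set)  # (actor, use_case) pairs

def dangling_actors(model):
    """Return actors not linked to any use case."""
    linked = {actor for actor, _ in model.associations}
    return model.actors - linked

m = UseCaseModel(
    actors={"Researcher", "Curator"},
    use_cases={"Record Species", "Browse Records"},
    associations={("Researcher", "Record Species")},
)
print(dangling_actors(m))  # -> {'Curator'}
```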
To investigate whether the alternative text-entry system Dasher is useful to physically and intellectually disabled students when controlled by a brain–computer interface (BCI), a new software tool was developed to allow subjects to type words onto a computer screen via Dasher using their thoughts. A case study approach was adopted. Subjects were selected by their head teacher based on their suitability for the experiment and the potential benefit of the system to them. Subjects entered literacy-level-matched phrases onto a computer using a QWERTY keyboard, Dasher-mouse, and Dasher-BCI. A researcher recorded qualitative and quantitative data, including characters entered per minute and the subjects' system preferences. Informed written consent was given for seven subjects to participate (aged 14–19 years, five male, with a range of physical and intellectual disabilities). After a short training period, all subjects had some degree of control over the Dasher-BCI system. With regard to typing speed, Dasher-BCI performed relatively poorly (3.9 ± 1.5 characters per minute), and the QWERTY keyboard performed best (31.9 ± 21.9 characters per minute). Dasher-BCI was nevertheless the most preferred method. Areas of weakness in Dasher and the BCI hardware were highlighted, and suggestions for improvement are given. BCI-based text entry is not yet ready to compete with more established methods for students with combined cognitive and physical disabilities. Although underpowered, this study suggests that for people whose predominant disability is physical (cerebral palsy), BCI technology shows great potential as a viable text-entry alternative. Suggestions for further research are discussed.