Automated techniques for Arabic text recognition are still at an early stage compared with their counterparts for Latin and Chinese scripts. A large volume of handwritten Arabic documents is available in libraries, data centers, museums, and offices. Digitizing these documents makes it possible to (1) preserve and transfer the country’s history electronically, (2) save physical storage space, (3) handle the documents properly, and (4) improve information retrieval over the Internet and other media. Arabic handwritten character recognition (AHCR) systems face several challenges, including the unlimited variation in human handwriting and the scarcity of large public databases. The current study addresses the segmentation and recognition phases. The text segmentation challenges, and a set of solutions for each, are presented. In the recognition phase, a convolutional neural network (CNN), a deep learning approach, is used. CNNs yield significant improvements over other machine learning classification algorithms because they extract image features automatically. Fourteen native CNN architectures are proposed after a series of trial-and-error experiments. They are trained and tested on the HMBD database, which contains 54,115 handwritten Arabic characters. Experiments on the native CNN architectures achieve a best testing accuracy of 91.96%. A transfer learning (TL) and genetic algorithm (GA) approach named “HMB-AHCR-DLGA” is then proposed to optimize the training parameters and hyperparameters of the recognition phase, using the pre-trained CNN models VGG16, VGG19, and MobileNetV2. Five optimization experiments are performed and the best combinations are reported. The highest testing accuracy obtained is 92.88%.
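The HMB-AHCR-DLGA approach combines transfer learning with a genetic algorithm to tune training hyperparameters. A minimal, illustrative GA loop over a toy hyperparameter space is sketched below; the search space, the selection scheme, and the `fitness` function (a stand-in for validation accuracy — a real run would train a CNN per individual) are all assumptions for illustration, not the paper's actual configuration:

```python
import random

# Hypothetical search space: learning rate, batch size, dropout rate.
SPACE = {
    "lr": [1e-4, 5e-4, 1e-3, 5e-3],
    "batch": [16, 32, 64, 128],
    "dropout": [0.0, 0.25, 0.5],
}

def fitness(ind):
    # Stand-in for validation accuracy; peaks at an arbitrary "good" setting.
    return (-abs(ind["lr"] - 1e-3)
            - abs(ind["dropout"] - 0.25)
            - abs(ind["batch"] - 64) / 256.0)

def random_individual(rng):
    return {k: rng.choice(v) for k, v in SPACE.items()}

def crossover(a, b, rng):
    # Uniform crossover: each gene comes from either parent.
    return {k: rng.choice([a[k], b[k]]) for k in SPACE}

def mutate(ind, rng, p=0.2):
    # With probability p, resample a gene from its allowed values.
    return {k: (rng.choice(SPACE[k]) if rng.random() < p else v)
            for k, v in ind.items()}

def run_ga(generations=20, pop_size=12, seed=0):
    rng = random.Random(seed)
    pop = [random_individual(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # truncation selection
        children = [mutate(crossover(rng.choice(elite), rng.choice(elite), rng), rng)
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

best = run_ga()
```

The same loop structure applies when `fitness` is replaced by the validation accuracy of a fine-tuned VGG16/VGG19/MobileNetV2 model.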
The explosive increase in data demand, coupled with the rapid deployment of various wireless access technologies, has led to an increase in the number of multi-homed, or multi-interface enabled, devices. Fully exploiting these interfaces has motivated researchers to propose numerous solutions that aggregate their available bandwidths to increase overall throughput and satisfy the end-user’s growing data demand. These solutions, however, do not utilize the interfaces to the fullest without network support and, more importantly, have faced a steep deployment barrier. In this paper, we propose an optimal deployable bandwidth aggregation system (DBAS) for multi-interface enabled devices. The DBAS architecture introduces no intermediate hardware, modifies neither current operating systems nor socket implementations, and requires no changes to current applications or legacy servers. It is designed to automatically estimate the characteristics of applications and dynamically schedule connections and/or packets to the different interfaces. We also formulate our optimal scheduler as a mixed integer programming problem, yielding an efficient solution. We evaluate DBAS via an implementation on the Windows OS and further verify our results with simulations in NS2. Our evaluation shows that, with current Internet characteristics, DBAS reaches the throughput upper bound with no modifications to legacy servers. It also highlights the significant improvement in response time introduced by DBAS, which directly enhances the user experience.
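The core scheduling idea — mapping each connection to one of several interfaces according to their capacities — can be illustrated with a simple greedy heuristic. This is a simplification for illustration only; the paper formulates the scheduler as a mixed integer program, and the flow and interface names here are invented:

```python
import heapq

def schedule(flows, interfaces):
    """Greedily assign each flow (demand in Mbps) to the interface with the
    most remaining bandwidth, largest flows first. A simplification of the
    MIP scheduler described in the abstract."""
    # Max-heap over remaining capacity, stored as (-remaining, name).
    heap = [(-cap, name) for name, cap in interfaces.items()]
    heapq.heapify(heap)
    assignment = {}
    for flow, demand in sorted(flows.items(), key=lambda kv: -kv[1]):
        rem, name = heapq.heappop(heap)     # interface with most headroom
        assignment[flow] = name
        heapq.heappush(heap, (rem + demand, name))  # rem is negative
    return assignment

plan = schedule({"video": 4.0, "web": 1.0, "sync": 2.0},
                {"wifi": 6.0, "lte": 3.0})
```

A connection-level scheduler like this needs no changes on the server side, since each connection still uses a single interface end to end.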
In recent years, bit-precise reasoning has gained importance in hardware and software verification. Of renewed interest is the use of symbolic reasoning for synthesizing loop invariants, ranking functions, or whole program fragments and hardware circuits. Solvers for the quantifier-free fragment of bit-vector logic exist and often rely on SAT solvers for efficiency. However, many techniques require quantifiers in bit-vector formulas to avoid an exponential blow-up during construction. Solvers for quantified formulas usually flatten the input to obtain a quantified Boolean formula, losing much of the word-level information in the formula. We present a new approach based on a set of effective word-level simplifications traditionally employed in automated theorem proving, heuristic quantifier instantiation methods used in SMT solvers, and model finding techniques based on skeletons/templates. Experimental results on two different types of benchmarks indicate that our method outperforms the traditional flattening approach by several orders of magnitude in runtime.
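The kind of word-level simplification the abstract contrasts with bit-level flattening can be sketched as rewriting on bit-vector terms. The term representation and the three rewrite rules below are illustrative only, not the solvers' actual rule set:

```python
# Terms are nested tuples ("op", arg, arg); leaves are variable names (str)
# or integer constants. Rules: x + 0 -> x, x & x -> x, x * 0 -> 0.
def simplify(t):
    if not isinstance(t, tuple):
        return t
    op, *args = t
    args = [simplify(a) for a in args]          # simplify bottom-up
    if op == "bvadd" and 0 in args:
        args = [a for a in args if a != 0] or [0]
        if len(args) == 1:
            return args[0]
    if op == "bvand" and args[0] == args[1]:
        return args[0]
    if op == "bvmul" and 0 in args:
        return 0
    return (op, *args)

t = ("bvand", ("bvadd", "x", 0), ("bvadd", "x", 0))
```

Applying such rules before any bit-blasting keeps the formula at the word level, which is exactly the information a flattening-based solver discards.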
This work introduces a probabilistic model for computing reputation scores that are as close as possible to their intrinsic value, according to the model. It is based on the following natural consumer-provider interaction model. Consumers are assumed to order items from providers, each of whom has an intrinsic, latent, “quality of service” score. In the basic model, a provider supplies items whose quality follows a normal law centered on the provider’s intrinsic “quality of service”. After receiving and inspecting an item, the consumer rates it according to a linear function of its quality, a standard regression model. This regression model accounts for the consumer’s bias in providing ratings as well as his reactivity towards changes in item quality. Moreover, the constancy of the provider in supplying an equal quality level when delivering items is captured by the standard deviation of his normal law of item-quality generation. Symmetrically, the consistency of the consumer in providing similar ratings for a given quality is quantified by the standard deviation of his normal law of rating generation. Two extensions of this basic model are considered as well: a model accounting for truncation of the ratings, and a Bayesian model assuming a prior distribution on the parameters. Expectation-maximization algorithms that estimate the parameters from the ratings are developed for all the models. The experiments suggest that these models are able to extract useful information from the ratings, are robust to adverse behaviors such as cheating, and are competitive with standard methods. Even if the suggested models do not show considerable improvements over competing models (such as Brockhoff and Skovgaard’s model [12]), they also allow interesting features of the raters to be estimated, such as their reactivity, bias, consistency, reliability, or expectation.
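The generative model described above is easy to simulate, and that makes the estimation task concrete. The sketch below implements a simplified version: item quality is drawn from the provider's normal law, the consumer rates it through a linear (bias + slope) model with normal noise, and the provider's intrinsic quality is recovered by a moment estimate. The numbers are invented, and the paper estimates all parameters jointly with EM rather than assuming the consumer's parameters are known:

```python
import random
import statistics

def simulate_ratings(mu_q, sigma_q, bias, slope, sigma_r, n, rng):
    """Simplified generative model: provider draws item quality
    q ~ N(mu_q, sigma_q); consumer rates r = slope*q + bias + N(0, sigma_r)."""
    ratings = []
    for _ in range(n):
        q = rng.gauss(mu_q, sigma_q)
        ratings.append(slope * q + bias + rng.gauss(0.0, sigma_r))
    return ratings

def estimate_quality(ratings, bias, slope):
    # Moment estimate of the provider's intrinsic quality, assuming the
    # consumer's bias and slope (reactivity) are known; the paper's EM
    # algorithms estimate these jointly from the ratings alone.
    return (statistics.fmean(ratings) - bias) / slope

rng = random.Random(42)
r = simulate_ratings(mu_q=7.0, sigma_q=0.5, bias=1.0, slope=0.8,
                     sigma_r=0.3, n=500, rng=rng)
mu_hat = estimate_quality(r, bias=1.0, slope=0.8)
```

With 500 ratings the estimate lands close to the true intrinsic quality of 7.0, which is the sense in which the model recovers a score "as close as possible to its intrinsic value".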
A field theory is constructed in the context of parameterized absolute parallelism geometry. The theory is shown to be a pure gravity theory. It is capable of describing the gravitational field and a material distribution in terms of the geometric structure of the geometry used (the parallelization vector fields). Three tools are used to attribute physical properties to the geometric objects admitted by the theory. Poisson and Laplace equations are obtained in the linearized version of the theory. The spherically symmetric solution of the theory in free space is found to coincide with the Schwarzschild exterior solution of general relativity. The theory respects the weak equivalence principle in free space only. Gravity and the material distribution are not minimally coupled.
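For reference, the Poisson and Laplace equations recovered in the linearized limit take their standard Newtonian form; the notation below is the conventional one (gravitational potential $\phi$, mass density $\rho$), not symbols taken from the paper itself:

```latex
\nabla^2 \phi = 4\pi G\,\rho \quad \text{(Poisson, inside a matter distribution)},
\qquad
\nabla^2 \phi = 0 \quad \text{(Laplace, in free space)}
```

Recovering these equations in the weak-field limit is the usual consistency check that a geometric gravity theory reproduces Newtonian gravity.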
During industrial forging of hot metallic shells, the dimensions of the parts, especially the inner and outer diameters and the wall thickness, must be measured regularly. A forging sequence lasts 2 h or more, during which the diameter of the shell is measured repeatedly in order to decide when to stop the forging process. For better working conditions, for the safety of the blacksmiths, and for faster and more accurate measurement, we have developed a novel system based on two commercially available time-of-flight laser scanners for measuring the diameters of hot cylindrical metallic shells during the forging process. The advantage of laser scanners is that they can be placed very far from the hot shell, more than 15 m away, while still producing an accurate point cloud from which three-dimensional views of the shell can be reconstructed and diameter measurements made. Moreover, the laser system achieves more accurate measurement in less time than the conventional method using a large ruler. The system has been successfully used to measure the diameters of hot cylindrical metallic shells.
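Estimating a diameter from a scanner point cloud typically reduces to fitting a circle to a cross-section of the points. The sketch below uses the algebraic (Kåsa) least-squares circle fit; this is a standard technique chosen for illustration, and the abstract does not specify which fitting algorithm the deployed system uses:

```python
import math

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit: recover center (a, b) and
    radius r from 2-D points, e.g. one cross-section of the point cloud."""
    # Solve the normal equations of x^2 + y^2 + D*x + E*y + F = 0
    # in the unknowns (D, E, F).
    S = [[0.0] * 4 for _ in range(3)]
    for x, y in points:
        z = x * x + y * y
        row = (x, y, 1.0)
        for i in range(3):
            for j in range(3):
                S[i][j] += row[i] * row[j]
            S[i][3] += -z * row[i]
    # Gauss-Jordan elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda k: abs(S[k][col]))
        S[col], S[piv] = S[piv], S[col]
        for k in range(3):
            if k != col:
                f = S[k][col] / S[col][col]
                for j in range(4):
                    S[k][j] -= f * S[col][j]
    D, E, F = (S[i][3] / S[i][i] for i in range(3))
    a, b = -D / 2.0, -E / 2.0
    r = math.sqrt(a * a + b * b - F)
    return a, b, r

# Noise-free points on a circle of diameter 4 centred at (1, 2).
pts = [(1 + 2 * math.cos(t), 2 + 2 * math.sin(t))
       for t in (k * 2 * math.pi / 12 for k in range(12))]
a, b, r = fit_circle(pts)
```

On real scanner data the same fit would be applied after slicing the point cloud perpendicular to the shell's axis, with outlier rejection for spurious returns.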
Quality of service (QoS) provisioning generally involves more than one QoS measure, which implies that QoS routing can be cast as an instance of routing subject to multiple constraints: delay jitter, bandwidth, cost, etc. We study the problem of constructing multicast trees that meet the QoS requirements of real-time interactive applications, where it is necessary to provide bounded delays and bounded delay variation among the source and all destinations while keeping the overall cost of the multicast tree low. The main contribution of our work is a new strategy for constructing multiconstrained multicast trees. We first derive mathematically a new delay-variation estimation scheme and prove its efficiency. We then propose a simple heuristic algorithm, competitive in terms of running time, for the delay- and delay-variation-constrained routing problem, based on the proposed delay-variation estimation scheme and on the extended Prim-Dijkstra tradeoff algorithm. Our contribution also extends previous work by providing properties and analyses of delay-bounded paths satisfying delay-variation constraints. Extensive simulation results show that our algorithm outperforms DVDMR in terms of multicast delay variation with the same time complexity as DVDMR.
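The constrained quantity, delay variation, is simply the spread of source-to-destination delays. The sketch below computes it over shortest paths with Dijkstra's algorithm; this is a simplification for illustration (in a multicast tree the unique tree paths replace shortest paths), and the toy graph is invented:

```python
import heapq

def dijkstra(graph, src):
    """Single-source shortest delays; graph: {u: [(v, delay), ...]}."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def delay_variation(graph, src, destinations):
    """Spread between the largest and smallest source-to-destination delay:
    the quantity that delay-variation-constrained multicast must bound."""
    dist = dijkstra(graph, src)
    delays = [dist[d] for d in destinations]
    return max(delays) - min(delays)

g = {"s": [("a", 2.0), ("b", 5.0)], "a": [("d1", 3.0)], "b": [("d2", 1.0)]}
dv = delay_variation(g, "s", ["d1", "d2"])
```

A tree-construction heuristic then has to keep this spread below the application's bound while also respecting the end-to-end delay bound and cost.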
Multimedia Tools and Applications - Changes in appearance present a tremendous problem for the visual localization of an autonomous vehicle in outdoor environments. Data association between the...
Using a meta-analytic approach, we recently reported that the rate of decline in maximal oxygen uptake (VO2 max) with age in healthy women is greatest in the most physically active and smallest in the least active when expressed in milliliters per kilogram per minute per decade. We tested this hypothesis prospectively under well-controlled laboratory conditions by studying 156 healthy, nonobese women (age 20-75 yr): 84 endurance-trained runners (ET) and 72 sedentary subjects (S). ET were matched across the age range for age-adjusted 10-km running performance. Body mass was positively related with age in S but not in ET. Fat-free mass did not differ with age in ET or S. Maximal respiratory exchange ratio and rating of perceived exertion were similar across age in ET and S, suggesting equivalent voluntary maximal efforts. There was a significant but modest decline in running mileage, frequency, and speed with advancing age in ET. VO2 max (ml·kg-1·min-1) was inversely related to age (P < 0.001) in ET (r = -0.82) and S (r = -0.71) and was higher at any age in ET. Consistent with our meta-analytic findings, the absolute rate of decline in VO2 max was greater in ET (-5.7 ml·kg-1·min-1·decade-1) than in S (-3.2 ml·kg-1·min-1·decade-1; P < 0.01), but the relative (%) rate of decline was similar (-9.7 vs. -9.1%/decade; not significant). The greater absolute rate of decline in VO2 max in ET compared with S was not associated with a greater rate of decline in maximal heart rate (-5.6 vs. -6.2 beats·min-1·decade-1), nor was it related to training factors. The present cross-sectional findings provide additional evidence that the absolute, but not the relative, rate of decline in maximal aerobic capacity with age may be greater in highly physically active women than in their sedentary healthy peers. This difference does not appear to be related to age-associated changes in maximal heart rate, body composition, or training factors.
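The coexistence of a larger absolute decline with a similar relative decline follows directly from arithmetic: the absolute rate divided by the relative rate implies the group's reference VO2 max. A quick back-calculation from the numbers in the abstract (back-of-envelope only; the abstract does not report group mean VO2 max values):

```python
def implied_vo2max(abs_decline, rel_decline_pct):
    """Reference VO2 max (ml/kg/min) implied by an absolute decline
    (ml/kg/min per decade) and a relative decline (%/decade)."""
    return abs_decline / (rel_decline_pct / 100.0)

et = implied_vo2max(5.7, 9.7)  # endurance-trained: roughly 59 ml/kg/min
s = implied_vo2max(3.2, 9.1)   # sedentary: roughly 35 ml/kg/min
```

Because the trained women start from a much higher VO2 max, losing the same percentage per decade necessarily means losing more in absolute terms, which is the pattern the study reports.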