Similar Documents
20 similar documents found (search time: 15 ms)
1.
A comprehensive online unconstrained Chinese handwriting dataset, SCUT-COUCH2009, is introduced in this paper. As a revision of SCUT-COUCH2008 [1], the SCUT-COUCH2009 database consists of more datasets with larger vocabularies and more writers. The database is built to facilitate research on unconstrained online Chinese handwriting recognition. It is comprehensive in the sense that it consists of 11 datasets of different vocabularies, named GB1, GB2, TradGB1, Big5, Pinyin, Letters, Digit, Symbol, Word8888, Word17366 and Word44208. In particular, the SCUT-COUCH2009 database contains handwritten samples of 6,763 single Chinese characters of the GB2312-80 standard, 5,401 traditional Chinese characters of the Big5 standard, 1,384 traditional Chinese characters corresponding to the level-1 characters of the GB2312-80 standard, 8,888 frequently used Chinese words, 17,366 daily-used Chinese words, 44,208 complete words from the Fourth Edition of “The Contemporary Chinese Dictionary”, 2,010 Pinyin entries and 184 daily-used symbols. The samples were collected using PDAs (Personal Digital Assistants) and smartphones with touch screens, and were contributed by more than 190 writers. The total number of character samples exceeds 3.6 million. The SCUT-COUCH2009 database is the first publicly available large-vocabulary online Chinese handwriting database containing multiple types of character/word samples. We report evaluation results on the database using state-of-the-art recognizers for benchmarking.

2.
We propose a novel text summarization model based on a 0–1 non-linear programming problem. The proposed model covers the main content of the given document(s) through sentence assignment. We apply the model to the multi-document summarization task. When comparing our method to several existing summarization methods on the open DUC2001 and DUC2002 datasets, we found that the proposed method improves the summarization results significantly. The methods were evaluated using the ROUGE-1, ROUGE-2 and ROUGE-W metrics.
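As a rough, hypothetical sketch of sentence-assignment summarization under a length budget (a greedy coverage heuristic, not the authors' 0–1 non-linear program or its solver), one might write:

```python
# Minimal sketch of 0-1 sentence selection for summarization: greedily pick
# sentences that maximize coverage of document terms under a word budget.
# This is an illustrative stand-in, not the paper's optimization model.
from collections import Counter

def summarize(sentences, max_words=100):
    doc_terms = Counter(w.lower() for s in sentences for w in s.split())
    chosen, covered, length = [], set(), 0
    while True:
        best, best_gain = None, 0.0
        for i, s in enumerate(sentences):
            if i in chosen:
                continue
            words = [w.lower() for w in s.split()]
            if length + len(words) > max_words:
                continue
            # marginal coverage gain, weighted by document term frequency
            gain = sum(doc_terms[w] for w in set(words) - covered)
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:
            break
        chosen.append(best)
        covered |= {w.lower() for w in sentences[best].split()}
        length += len(sentences[best].split())
    return [sentences[i] for i in sorted(chosen)]
```

The exact 0–1 assignment in the paper is solved as a mathematical program; the greedy loop above only illustrates the sentence-selection idea.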

3.
Language Resources and Evaluation - The last two decades have witnessed a rapid growth of publicly accessible online language resources. This has allowed valuable data on lesser-known languages to...

4.
This paper presents a project to align and match bilingual English–Chinese news files downloaded from the China News Services website. The work involves the alignment of bilingual texts at the sentence and clause levels. In addition, it requires matching of files, because the English and Chinese news files downloaded from the web do not come in the same sequential order. These news files have their own characteristics and, furthermore, the file-matching issue has its own difficulties apart from the known problems of alignment work previously reported in the literature. To align the news files we combine the criteria of anchors (i.e., unambiguous corresponding text elements) and sentence length. We employ dynamic programming first to align at the paragraph level, then at the sentence-clause level. The precision and recall of the alignment are satisfactory for freely translated texts. To match English and Chinese files, we make use of the anchors alone. In file matching we encounter a collision problem due to contending matching candidates, and propose a recursive splitting algorithm to resolve it. We allow human intervention to improve the precision of matching, and succeeded in achieving 100% precision with a fairly small amount of manual effort. Finally, to determine the various parameters used in aligning and matching, we utilize a genetic algorithm software package to obtain their optimized values.
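The following is a minimal sketch of length-based 1-1/1-0/0-1 alignment by dynamic programming in the spirit described above; the anchor criterion is approximated here by tokens (e.g., numbers or dates) that appear verbatim in both texts, which is an assumption, not the paper's exact definition.

```python
# Gale-Church-style DP alignment sketch: cost combines a sentence-length term
# with a bonus when both sides share an "anchor" token.
def align(src, tgt, anchors=frozenset()):
    def cost(a, b):
        if a is None or b is None:            # 1-0 or 0-1 step (insertion/deletion)
            return 3.0
        c = abs(len(a) - len(b)) / (len(a) + len(b) + 1)
        if anchors & set(a.split()) & set(b.split()):
            c -= 0.5                           # reward matching anchors
        return c

    n, m = len(src), len(tgt)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if D[i][j] == INF:
                continue
            for di, dj in ((1, 1), (1, 0), (0, 1)):
                ni, nj = i + di, j + dj
                if ni > n or nj > m:
                    continue
                step = cost(src[i] if di else None, tgt[j] if dj else None)
                if D[i][j] + step < D[ni][nj]:
                    D[ni][nj] = D[i][j] + step
                    back[ni][nj] = (i, j)
    # trace back: recover which source/target sentence indices were aligned
    ops, ij = [], (n, m)
    while back[ij[0]][ij[1]] is not None:
        pi, pj = back[ij[0]][ij[1]]
        ops.append((list(range(pi, ij[0])), list(range(pj, ij[1]))))
        ij = (pi, pj)
    return list(reversed(ops))
```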

5.
We are currently on the verge of a revolution in digital photography. Developments in computational imaging and the adoption of artificial intelligence have spawned new editing techniques that give impressive results in astonishingly short time frames. The advent of multi-sensor and multi-lens cameras will further challenge many existing integrity verification techniques. As a result, it will be necessary to re-evaluate our notion of image authenticity and look for new techniques that can work efficiently in this new reality. The goal of this paper is to thoroughly review existing techniques for the protection and verification of digital image integrity. In contrast to other recent surveys, the discussion covers the most important developments in both active protection and passive forensic analysis techniques. Existing approaches are analyzed with respect to their capabilities, fundamental limitations, and prospective attack vectors. Wherever possible, the discussion is supplemented with real operation examples and a list of available implementations. Finally, the paper reviews resources available to the research community, including public datasets and commercial or open-source software. The paper concludes by discussing relevant developments in computational imaging and highlighting future challenges and open research problems.

6.
7.
This paper presents a prediction method that uses a parallel–hierarchical (PH) network and hyperbolic smoothing of empirical data. The average prediction error is 0.55% for the developed method versus 1.62% for conventional neural networks; thanks to the PH network and hyperbolic smoothing, the method is therefore better suited to real-time systems than traditional neural networks when predicting the positions of the energy centers of laser beam spot images in optical communication systems.

8.
Copy–move image forgery detection has recently become a very active research topic in blind image forensics. In copy–move image forgery, a region from one image location is copied and pasted to a different location in the same image. Typically, post-processing is applied to better hide the forgery. Using keypoint-based features, such as SIFT features, for detecting copy–move image forgeries has produced promising results. The main idea is to detect duplicated regions in an image by exploiting the similarity between keypoint-based features in these regions. In this paper, we adopt keypoint-based features for copy–move image forgery detection; however, our emphasis is on accurate and robust localization of duplicated regions. In this context, we are interested in estimating the transformation (e.g., affine) between the copied and pasted regions more accurately, as well as in extracting these regions robustly by reducing the number of false positives and negatives. To address these issues, we propose using a more powerful set of keypoint-based features, called MIFT, which share the properties of SIFT features but are also invariant to mirror reflection transformations. Moreover, we propose refining the affine transformation using an iterative scheme that improves the estimation of the affine transformation parameters by incrementally finding additional keypoint matches. To reduce false positives and negatives when extracting the copied and pasted regions, we propose using “dense” MIFT features, instead of standard pixel correlation, along with hysteresis thresholding and morphological operations. The proposed approach has been evaluated and compared with competitive approaches through a comprehensive set of experiments using a large dataset of real images (i.e., CASIA v2.0). Our results indicate that our method can detect duplicated regions in copy–move image forgery with higher accuracy, especially when the size of the duplicated region is small.
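A minimal illustration of the keypoint-matching and affine-estimation steps is sketched below using OpenCV's SIFT in place of MIFT (SIFT is not mirror-invariant, so this only approximates the described pipeline); the thresholds and parameter values are assumptions for the sketch.

```python
# Sketch of keypoint-based copy-move detection: match an image against itself,
# keep non-trivial matches that pass a ratio test and a minimum spatial
# distance, then fit an affine transform between copied and pasted regions.
import cv2
import numpy as np

def detect_copy_move(image_path, ratio=0.6, min_dist=40, min_matches=4):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kps, desc = sift.detectAndCompute(img, None)
    matcher = cv2.BFMatcher()
    # k=3 so the trivial self-match of each keypoint can be discarded
    matches = matcher.knnMatch(desc, desc, k=3)
    src, dst = [], []
    for m in matches:
        cand = [d for d in m if d.queryIdx != d.trainIdx]
        if len(cand) < 2:
            continue
        best, second = cand[0], cand[1]
        p, q = kps[best.queryIdx].pt, kps[best.trainIdx].pt
        # Lowe ratio test + minimum spatial separation of the duplicated regions
        if best.distance < ratio * second.distance and \
           np.hypot(p[0] - q[0], p[1] - q[1]) > min_dist:
            src.append(p)
            dst.append(q)
    if len(src) < min_matches:
        return None  # no evidence of duplication
    # estimate the affine transform between the copied and pasted regions
    A, inliers = cv2.estimateAffine2D(np.float32(src), np.float32(dst),
                                      method=cv2.RANSAC, ransacReprojThreshold=3.0)
    return A, int(inliers.sum()) if inliers is not None else 0
```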

9.
10.
ALIS (Automated Library Information System) comprises a catalogue search system combined with a circulation control system using bar-code readers. The catalogue search part of ALIS makes it possible for the user, via VDUs or typewriter terminals, to search for literature in DTB's database using as search criteria author names, keywords in titles, UDC numbers (Universal Decimal Classification), ISBN, ISSN, CODEN, language codes, year of publication, or combinations of these. As a consequence of combining the search system with the circulation control system, the user may obtain not only the bibliographic records for the retrieved documents but also information about their immediate status (“on shelf”, “on loan”, “on reservation”, etc.). The bibliographic database of ALIS, with more than 125,000 bibliographic records, is located at a service centre in Copenhagen (Datacentralen af 1959), and the circulation control part of ALIS runs on a minicomputer at DTB. The search system is based on STAIRS/CICS, but the minicomputer enables the user to choose between the English STAIRS dialogue and a Danish dialogue with extended commands and ordering facilities. The system is accessible by direct call or via EURONET, SCANNET or DTH-net (the local university network).

11.
Open data is becoming increasingly popular in a wide range of service domains; however, most open datasets in Taiwan remain separate. The lack of linked open data (LOD) makes it difficult to locate and combine open datasets for the creation of innovative applications. In this study, we sought to facilitate the spread of open data in Taiwan using a novel approach referred to as define–produce–invoke (DPI). The proposed scheme employs a newly defined data query language, called LODQL (LOD query language), which allows data experts to define rules for the generation of LOD. We also developed an LOD engine, which produces linked open datasets and allows application developers to access LOD by invoking RESTful services. The scheme also supports data visualizations that indicate the relevance of open datasets and the associations among open data items. Experiments demonstrate the feasibility and effectiveness of the proposed DPI approach.
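As a purely illustrative client call (the endpoint URL, dataset identifier, query parameters and response format below are hypothetical, since the paper's API is not reproduced here), invoking such a RESTful LOD service from an application might look like:

```python
# Hypothetical client request to a RESTful LOD engine of the kind described
# above; only requests.get is real, everything service-specific is assumed.
import requests

resp = requests.get(
    "https://lod.example.org/datasets/taipei-air-quality",   # hypothetical endpoint
    headers={"Accept": "application/ld+json"},                # ask for JSON-LD linked data
    params={"limit": 10},
    timeout=10,
)
resp.raise_for_status()
for item in resp.json().get("@graph", []):
    print(item.get("@id"))
```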

12.
The CROHME competitions have helped organize the field of handwritten mathematical expression recognition. This paper presents the evolution of the competition over its first four years and its contributions to handwritten math recognition and, more generally, to structural pattern recognition research. The competition protocol, evaluation metrics and datasets are presented in detail. Participating systems are analyzed and compared in terms of the central mathematical expression recognition tasks: (1) symbol segmentation, (2) classification of individual symbols, (3) symbol relationships and (4) structural analysis (parsing). The competition led to the development of label graphs, which allow recognition results with conflicting segmentations to be directly compared and quantified using Hamming distances. We introduce structure confusion histograms that provide frequencies for incorrect subgraphs corresponding to ground-truth label subgraphs of a given size, and present structure confusion histograms for symbol bigrams (two symbols with a relationship) for the CROHME 2014 systems. We provide a novel analysis combining results from competing systems at the level of individual strokes and stroke pairs; this virtual merging of system outputs allows us to examine more closely the limitations of current state-of-the-art systems. Datasets along with the evaluation and visualization tools produced for the competition are publicly available.
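A toy sketch of the label-graph comparison idea follows, assuming both results label the same set of strokes; the actual CROHME label-graph format and evaluation tools are more elaborate than this.

```python
# Compare two recognition results represented as label graphs over strokes:
# nodes carry symbol labels, directed stroke pairs carry relationship labels,
# and the Hamming distance counts label disagreements.
def hamming_distance(nodes_a, edges_a, nodes_b, edges_b, no_relation="_"):
    # nodes_*: {stroke_id: symbol_label}
    # edges_*: {(stroke_i, stroke_j): relation_label}, e.g. "same-symbol", "Right", "Sup"
    node_errors = sum(1 for s in nodes_a if nodes_a[s] != nodes_b.get(s))
    pairs = set(edges_a) | set(edges_b)
    edge_errors = sum(1 for p in pairs
                      if edges_a.get(p, no_relation) != edges_b.get(p, no_relation))
    return node_errors + edge_errors
```

Because both graphs are defined over the same strokes, results with conflicting segmentations can still be compared directly, which is the point of the label-graph representation.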

13.
This study focuses on the numerical integration of constitutive laws in the numerical modeling of cold material processing involving large plastic strain together with ductile damage. A mixed velocity–pressure formulation is used to handle the incompressibility of plastic deformation. A Lemaitre damage model in which the dissipative phenomena are coupled is considered. Numerical aspects of the constitutive equations are addressed in detail. Three integration algorithms with different levels of coupling between damage and elastic–plastic behavior are presented and discussed in terms of accuracy and computational cost. The implicit gradient formulation with a non-local damage variable is used to regularize the localization phenomenon and thus to ensure the objectivity of numerical results for damage prediction problems. A tensile test on a plane plate specimen, where damage and plastic strain tend to localize in well-known shear bands, successfully demonstrates both the objectivity and the effectiveness of the developed approach.
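As a one-line reminder of the coupling on which Lemaitre-type models are built (the specific damage evolution law and the non-local extension used in the paper are more involved than this), the stress is replaced by an effective stress acting on the undamaged material:

```latex
\[
\tilde{\boldsymbol{\sigma}} \;=\; \frac{\boldsymbol{\sigma}}{1 - D},
\qquad 0 \le D \le 1,
\]
```

where D is the scalar damage variable, with D = 0 for the virgin material and D approaching 1 at failure.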

14.
This paper is concerned with short-term (up to 24 h) operational planning of combined heat and power plants for district energy applications. In such applications, heat and power demands fluctuate on an hourly basis due to changing weather conditions, time-of-day factors and consumer requirements. Plant energy efficiency is highly dependent on ambient temperature and operating load, since equipment efficiencies are nonlinear functions of these parameters. Operational planning strategies seldom take nonlinear equipment characteristics into account, resulting in plants being operated at sub-par efficiencies. To operate plants at the highest possible efficiencies, scheduling strategies that account for nonlinear equipment characteristics need to be developed. For such strategies, a mixed 0–1 nonlinear programming formulation is proposed. The problem is nonconvex, and hence global optimality conditions are unknown. Classical techniques like branch-and-bound may not produce integer-feasible solutions, may cut off the global optima, and exhibit an exponential increase in CPU time for a linear increase in planning horizon size. As an alternative, a solution method based on genetic algorithms is proposed, in which genetic search is applied only to the 0–1 variables and gradient search is applied to the continuous variables. The proposed method is a nonlinear extension of the one originally developed by Sakawa et al. [Sakawa M, Kato K, Ushiro S. Operational planning of district heating and cooling plants through genetic algorithms for mixed 0–1 linear programming. Eur J Operat Res 2002;137(3):677–87]. Numerical experiments show that the proposed genetic algorithm method is more consistent in finding integer-feasible solutions, finds solutions with lower optimality gaps, and requires reasonable CPU time compared to branch-and-bound. From an application perspective, the proposed scheduling strategy results in a 5–11% increase in plant energy efficiency.
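A conceptual sketch of the hybrid search is given below: a genetic algorithm handles the 0–1 on/off variables while a gradient-based solver tunes the continuous variables for each candidate. The plant cost model, demand value and all parameters are stand-ins for illustration, not the formulation from the paper.

```python
# Hybrid GA (binary unit commitment) + gradient search (continuous loads).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N_UNITS, POP, GENS = 4, 20, 30

def plant_cost(x, on):
    # hypothetical nonlinear operating cost plus a demand-mismatch penalty
    demand = 10.0
    supplied = np.dot(on, x)
    return np.sum(on * (0.5 * x**2 + x)) + 100.0 * (supplied - demand) ** 2

def fitness(on):
    if on.sum() == 0:
        return 1e9
    # inner gradient-based optimization of the continuous load variables
    res = minimize(plant_cost, x0=np.full(N_UNITS, 2.0), args=(on,),
                   bounds=[(0.0, 5.0)] * N_UNITS)
    return res.fun

pop = rng.integers(0, 2, size=(POP, N_UNITS))
for _ in range(GENS):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[:POP // 2]]          # truncation selection
    children = []
    while len(children) < POP - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, N_UNITS)
        child = np.concatenate([a[:cut], b[cut:]])        # one-point crossover
        flip = rng.random(N_UNITS) < 0.1                  # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmin([fitness(ind) for ind in pop])]
print("best on/off schedule:", best)
```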

15.
The problem of the ‘information content’ of an information system appears elusive. In the field of databases, the information content of a database has been taken to be the instance of the database. We argue that this view misses two fundamental points. One is a convincing conception of the phenomenon of information in databases, especially a properly defined notion of ‘information content’. The other is a framework for reasoning about information content. In this paper, we suggest a modification of the well-known definition of ‘information content’ given by Dretske (Knowledge and the Flow of Information, 1981). We then define what we call the ‘information content inclusion’ relation (IIR for short) between two random events. We present a set of inference rules for reasoning about information content, which we call the IIR rules. We then explore how these ideas and rules may be used in a database setting to examine databases and to derive otherwise hidden information in the form of new relations obtained from a given set of IIR. A prototype is presented, which shows how the idea of IIR reasoning might be exploited in a database setting, including the relationship between real-world events and database values.
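A toy sketch of deriving hidden inclusions by closing a set of IIR under a transitivity-style rule follows; the paper's full IIR rule set is richer than this single rule, and the relation names used here are hypothetical.

```python
# Compute the closure of "information content inclusion" pairs under a simple
# transitivity rule: if a includes information about b and b about c, then a
# includes information about c.
def iir_closure(iir_pairs):
    closure = set(iir_pairs)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:   # transitivity rule
                    closure.add((a, d))
                    changed = True
    return closure

# derives ("order_row", "region") even though it was never stated explicitly
derived = iir_closure({("order_row", "customer_id"), ("customer_id", "region")})
```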

16.
In this paper, a novel region-of-interest (ROI) query method is proposed for image retrieval by combining a mean shift tracking (MST) algorithm and an improved expectation–maximisation (EM)-like (IEML) method. In the proposed combination, MST is used to seek the initial location of the target candidate model, and IEML is then used to adaptively change the location and scale of the target candidate model so as to include the relevant region and exclude the irrelevant region as far as possible. To improve the performance and effectiveness of using IEML to track the target candidate model, a new similarity measure is built from spatial and colour features, and a new image retrieval framework for this setting is proposed. Extensive experiments confirm that, compared with recently developed approaches such as the generalized Hough transform (GHT) and EM-like tracking methods, our method is considerably more effective. In addition, for IEML, the new similarity measure also substantially decreases computational complexity and improves the tracking precision of the target candidate model. Compared with conventional ROI-based image retrieval methods, the most significant advantage is that the proposed method can directly find the target candidate model in the candidate image without pre-segmentation.
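For illustration, the colour-histogram similarity that underlies standard mean shift tracking is sketched below; the paper's measure additionally incorporates spatial features, which are omitted here for brevity.

```python
# Colour-histogram model of a region of interest and the Bhattacharyya
# coefficient used as the similarity between target and candidate regions.
import numpy as np

def colour_histogram(patch, bins=16):
    # patch: H x W x 3 uint8 region of interest
    hist, _ = np.histogramdd(patch.reshape(-1, 3),
                             bins=(bins, bins, bins), range=((0, 256),) * 3)
    hist = hist.ravel()
    return hist / (hist.sum() + 1e-12)

def bhattacharyya(p, q):
    # similarity in [0, 1]; 1 means identical colour distributions
    return float(np.sum(np.sqrt(p * q)))
```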

17.

This paper presents a teaching method applied in a usability research course that is part of a bachelor's degree programme at the Academy of Fine Arts in Katowice. The method employs techniques for visualising user–website interaction, a design practice popular in other fields but less often used in usability studies. The theoretical background of the data visualisation method, as well as examples of its use in research, are presented and discussed. In addition to presenting the method, the paper evaluates and analyses how students have responded to it. Using the technology acceptance model, we identified the perceived usability of the method as the main factor influencing students’ behavioural intention to use it in the future.


18.

Orthogonal moments and their invariants to geometric transformations for gray-scale images are widely used in many pattern recognition and image processing applications. In this paper, we propose a new set of orthogonal polynomials called adapted Gegenbauer–Chebyshev polynomials (AGC). This new set is used as a basis for defining the orthogonal adapted Gegenbauer–Chebyshev moments (AGCMs). The rotation, scaling and translation invariance of the AGCMs is derived and analyzed. We provide a novel series of image feature vectors based on the adapted Gegenbauer–Chebyshev orthogonal moment invariants (AGCMIs). We implement a novel image classification system using the proposed feature vectors and the fuzzy k-means classifier. A series of experiments is performed to validate the new set of orthogonal moments and compare its performance with existing orthogonal moment invariants, namely the Legendre, Gegenbauer and Tchebichef invariant moments, using three different image databases: the MPEG7-CE Shape database, the Columbia Object Image Library (COIL-20) database and the ORL-faces database. The obtained results confirm the superiority of the proposed AGCMs over the existing moments in the representation and recognition of images.
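As a simplified illustration of orthogonal-moment computation, standard Chebyshev polynomials stand in below for the adapted Gegenbauer–Chebyshev basis defined in the paper; the invariance construction is not shown.

```python
# Project an image onto an orthogonal polynomial basis to obtain its moments:
# M[p, q] = sum_y sum_x T_p(y) * T_q(x) * f(y, x).
import numpy as np
from numpy.polynomial import chebyshev as C

def chebyshev_moments(img, order=8):
    h, w = img.shape
    x = np.linspace(-1, 1, w)
    y = np.linspace(-1, 1, h)
    # values of T_0 .. T_order at every column / row coordinate
    Tx = np.stack([C.chebval(x, [0] * n + [1]) for n in range(order + 1)])
    Ty = np.stack([C.chebval(y, [0] * n + [1]) for n in range(order + 1)])
    return Ty @ img @ Tx.T   # (order+1) x (order+1) moment matrix
```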


19.
In this paper, a mixed 0–1 nonlinear model for the Collision Avoidance problem in Air Traffic Management is presented. The problem consists of deciding the best strategy for an arbitrary aircraft configuration such that all conflicts in the airspace are avoided, where a conflict is the loss of the minimum safety distance that two aircraft have to keep in their flight plans. The optimization model is based on geometric constructions. It requires knowing the initial flight plan (coordinates, angles and velocities in each period). The objective is the minimization of the acceleration variations when the aircraft are forced to return to the original flight plan once no aircraft remain in conflict. A linear approximation, obtained by iteratively applying Taylor polynomials, is presented to solve the problem in mixed 0–1 linear terms. Extensive computational experience on a testbed of large-scale instances is reported.
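As a sketch of the iterative linearization step (the paper's geometric conflict constraints are more detailed than the generic form shown here), each nonlinear constraint g(x) ≤ 0 is replaced at iteration k by its first-order Taylor expansion around the current iterate x^k:

```latex
\[
g(x) \;\approx\; g(x^{k}) + \nabla g(x^{k})^{\top}\,(x - x^{k}) \;\le\; 0,
\]
```

and the resulting mixed 0–1 linear problem is re-solved until the iterates stabilize.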

20.
The brain–computer interface (BCI) has made remarkable progress in bridging the divide between the brain and the external environment to assist persons with severe disabilities caused by brain impairments. There is also continuing philosophical interest in BCIs, which emerges from thoughtful reflection on computers, machines, and artificial intelligence. This article seeks to apply BCI perspectives to examine, challenge, and work towards a possible resolution of a persistent problem in the mind–body relationship, namely dualism. The original humanitarian goals of BCIs, combined with their technological inventiveness, make them surprisingly useful for this purpose. We begin with the neurologically impaired person, the problems encountered, and some pioneering responses from computers and machines. Secondly, the interface of mind and brain is explored via two points of clarification: direct versus indirect BCIs, and the nature of thoughts. Thirdly, dualism is beset by mind–body interaction difficulties and is further questioned by the phenomena of intentions, interactions, and technology. Fourthly, animal minds and robots are explored in BCI settings, again with relevance for dualism. After a brief look at other BCIs, we conclude by outlining a future BCI philosophy of brain and mind, which might appear ominous yet remains possible.
