1.
Most Web content categorization methods are based on the vector space model of information retrieval. One of the most important advantages of this representation model is that it can be used by both instance-based and model-based classifiers. However, this popular method of document representation does not capture important structural information, such as the order and proximity of word occurrence or the location of a word within the document. It also makes no use of the markup information that can easily be extracted from the Web document HTML tags. A recently developed graph-based Web document representation model can preserve Web document structural information. It was shown to outperform the traditional vector representation using the k-Nearest Neighbor (k-NN) classification algorithm. The problem, however, is that the eager (model-based) classifiers cannot work with this representation directly. In this article, three new hybrid approaches to Web document classification are presented, built upon both graph and vector space representations, thus preserving the benefits and overcoming the limitations of each. The hybrid methods presented here are compared to vector-based models using the C4.5 decision tree and the probabilistic Naïve Bayes classifiers on several benchmark Web document collections. The results demonstrate that the hybrid methods presented in this article outperform, in most cases, existing approaches in terms of classification accuracy, and in addition, achieve a significant reduction in the classification time. © 2008 Wiley Periodicals, Inc.
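A minimal sketch of the hybrid idea this abstract describes: represent each document as a word-order graph, reduce the graph to boolean features that a model-based classifier can consume, and train Naive Bayes on the result. Here frequent word-adjacency edges stand in for the frequent subgraphs such methods typically mine, and the corpus, labels, and `min_support` threshold are illustrative assumptions, not the article's benchmark setup.

```python
# A sketch, under simplifying assumptions: word-adjacency edges stand in
# for mined frequent subgraphs, and Naive Bayes is the model-based
# classifier consuming the resulting boolean vectors.
from collections import Counter
from sklearn.naive_bayes import BernoulliNB

def doc_to_edges(text):
    """Edges of a simple word-order graph: consecutive word pairs."""
    words = text.lower().split()
    return set(zip(words, words[1:]))

def fit_hybrid(train_texts, labels, min_support=2):
    # 'Frequent subgraph' mining, simplified here to frequent edges.
    counts = Counter(e for t in train_texts for e in doc_to_edges(t))
    features = [e for e, c in counts.items() if c >= min_support]
    # Boolean vector per document: which frequent edges it contains.
    X = [[e in doc_to_edges(t) for e in features] for t in train_texts]
    return BernoulliNB().fit(X, labels), features

def predict_hybrid(clf, features, texts):
    X = [[e in doc_to_edges(t) for e in features] for t in texts]
    return clf.predict(X)

clf, feats = fit_hybrid(["the cat sat", "the cat ran", "stocks fell fast"],
                        ["pets", "pets", "finance"])
print(predict_hybrid(clf, feats, ["the cat naps"]))  # -> ['pets']
```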
2.
In geometric constraint solving, 2D well constrained geometric problems can be abstracted as Laman graphs. If the graph is tree decomposable, the constraint-based geometric problem can be solved by a Decomposition–Recombination planner based solver. In general, decomposition and recombination steps can be completed only when the steps on which they depend have already been completed. This fact naturally defines a hierarchy in the decomposition–recombination steps that traditional tree decomposition representations do not capture explicitly. In this work we introduce h-graphs, a new representation for decompositions of tree decomposable Laman graphs, which captures dependence relations between different tree decomposition steps. We show how h-graphs help in efficiently computing parameter ranges for which solution instances to well constrained, tree decomposable geometric constraint problems with one degree of freedom can actually be constructed.
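To make the dependence hierarchy concrete: one minimal way to hold it in code is a DAG of decomposition steps whose topological order yields a valid recombination schedule. This is only a sketch of the dependency idea, not the paper's h-graph definition, and the step names are hypothetical.

```python
# A sketch of the dependency idea only (not the h-graph definition):
# steps form a DAG; a topological order is a valid recombination schedule.
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical decomposition steps, each mapped to the steps it depends on.
steps = {
    "recombine_root": {"solve_cluster_A", "solve_cluster_B"},
    "solve_cluster_A": {"place_base_edge"},
    "solve_cluster_B": {"place_base_edge"},
    "place_base_edge": set(),
}

order = list(TopologicalSorter(steps).static_order())
print(order)  # place_base_edge first, recombine_root last
```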
3.
Blur invariants in the wavelet domain are proposed for the first time in this paper. Wavelet domain blur invariants take advantage of several benefits that this domain provides, i.e., the choice among different wavelet functions and analysis at different scales. It is not required to model the blur system in order to extract the invariants. It will be shown how the space domain blur invariants are a special case of the proposed invariants. It will also be explained how the proposed invariants do not have the null space that their spatial-domain special case has, which limits its discriminative power. The performance of these invariants is demonstrated through experiments and compared to that of their counterparts defined in the spatial domain.
4.
The objective of the paper is to provide a taxonomy of temporal systems according to three fundamental considerations: the assumed axiomatic theory, the expressiveness, and the mechanisms for inference which are provided. There is a discussion of the significance of the key features of the taxonomy for computer modelling of temporal events. A review considers the most significant representative systems with respect to these issues, including those due to Bruce, Allen and Hayes, Vilain, McDermott, Dechter et al., Kahn and Gorry, Kowalski and Sergot, Bacchus et al., and Knight and Ma. A tabular comparison of systems is given according to their main structural features. In conclusion, the characteristics of a general axiomatic system capable of representing all the features of these models are discussed.
5.
Computers & Education 39(3):261–269, 2002
The purpose of this evaluative study was to determine the effectiveness of a hybrid instructional model, called ADAPT (Active Discovery And Participation through Technology), that combines the important features of traditional classroom instruction (classroom, instructor, textbook) with those of computer-mediated instruction (learning by performing rather than listening, frequent assessment and feedback). In combination, the model is distinguished from either distance or traditional instruction, and can be employed in campus computer labs. Both the ADAPT model and the traditional approach were used to teach a 10-week study skills course, the objective of which was to improve students' academic performance, as measured by grade point averages. Comparing the two approaches with one another and with a matched control group that experienced neither yielded an overall significant difference as well as significant differences between conditions. Students taught using ADAPT achieved the highest GPAs, relative to past performance, while those not taught achieved the lowest, with conventionally taught students in between. The hybridity of the ADAPT model seemed to provide students both structure and opportunity for involvement in the learning process.
6.
Water management practices in southern France (the Crau plain) need to be modified in order to ensure greater water use efficiency and less environmental damage while maintaining hay production levels. Farmers, water managers and policy makers have expressed the need for new methods to deal with these issues. We developed the biodecisional model IRRIGATE to test new irrigation schedules, new designs for water channels or fields, and new distribution planning for a given water resource. IRRIGATE simulates the operation of a hay cropping system irrigated by flood irrigation and includes three main features: (i) border irrigation with various durations of irrigation events and various spatial orders of water distribution, (ii) species-rich grasslands highly sensitive to water deficit, (iii) interactions between irrigation and mowing. It is based on existing knowledge, adapted models and new modules based on experiments and survey data. It includes a rule-based model on the farm scale, dynamically simulating both irrigation and mowing management, and two biophysical models: a dynamic crop model on the field scale simulating plant and soil behaviour in relation to water supply, and a flood irrigation model on the border scale simulating an irrigation event according to plant and hydraulic parameters. Model outputs allow environmental (water supply, drainage), social (labour) and agronomic (yields, water productivity and irrigation efficiency) analyses of the performance of the cropping system. IRRIGATE was developed using, first, a conceptual framework describing the modelled system as three sub-systems (biophysical, technical, and decision) interacting within the farm. The numerical model was then developed using component-based, spatially explicit modelling based on identifying the interactions between modules, identifying their temporal and spatial scales, and re-using previous models. An example of the use of the biodecisional model is presented showing the effects on a real farm of a severe water shortage in 2006.
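A toy sketch of the three-sub-system coupling the abstract describes: a rule-based decision model triggers irrigation events, an irrigation model converts an event into water supplied, and a crop model updates field state day by day. All class names, rules, and coefficients are hypothetical placeholders, not IRRIGATE's actual modules.

```python
# Toy coupling of the three sub-systems (all names and numbers are
# hypothetical placeholders, not IRRIGATE's actual modules or parameters).
class CropModel:                       # field scale: plant/soil response
    def __init__(self):
        self.soil_water = 1.0          # relative soil water content
    def step(self, water_in):
        self.soil_water = min(1.0, self.soil_water + water_in) * 0.95

class IrrigationModel:                 # border scale: one irrigation event
    def water_supplied(self, duration_h):
        return 0.1 * duration_h        # toy hydraulic response

class DecisionModel:                   # farm scale: rule-based management
    def decide(self, field):
        return 8 if field.soil_water < 0.5 else 0   # irrigate 8 h if dry

field, event, farmer = CropModel(), IrrigationModel(), DecisionModel()
for day in range(30):                  # daily simulation loop
    field.step(event.water_supplied(farmer.decide(field)))
print(f"soil water after 30 days: {field.soil_water:.2f}")
```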
7.
In this research, a hybrid model is developed by integrating a case-based data clustering method and a fuzzy decision tree for medical data classification. Two datasets from the UCI Machine Learning Repository, i.e., the liver disorders dataset and Breast Cancer Wisconsin (Diagnosis), are employed for a benchmark test. Initially, a case-based clustering method is applied to preprocess the dataset so that more homogeneous data within each cluster is attained. A fuzzy decision tree is then applied to the data in each cluster, and genetic algorithms (GAs) are further applied to construct a decision-making system based on the selected features and diseases identified. Finally, a set of fuzzy decision rules is generated for each cluster. As a result, the model can accurately classify test data using the inductions derived from the case-based fuzzy decision tree (CBFDT). The average forecasting accuracy of the CBFDT model is 98.4% for breast cancer and 81.6% for liver disorders, the highest among the models compared. The hybrid model produces not only accurate but also comprehensible decision rules that could potentially help medical doctors to draw effective conclusions in medical diagnosis.
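A minimal sketch of the cluster-then-classify pipeline described above, with KMeans standing in for the case-based clustering and a crisp decision tree standing in for the fuzzy decision tree plus GA stage (both stand-ins are simplifications, not the paper's components):

```python
# KMeans and a crisp decision tree stand in for the paper's case-based
# clustering and fuzzy decision tree + GA stage, respectively.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

def fit_cluster_trees(X, y, n_clusters=3):
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    trees = {c: DecisionTreeClassifier(max_depth=3, random_state=0)
                 .fit(X[km.labels_ == c], y[km.labels_ == c])
             for c in range(n_clusters)}
    return km, trees

def predict(km, trees, X):
    clusters = km.predict(X)           # route each case to its cluster...
    return np.array([trees[c].predict(x[None, :])[0]      # ...then classify
                     for c, x in zip(clusters, X)])        # within it

X, y = load_breast_cancer(return_X_y=True)   # the Wisconsin dataset above
km, trees = fit_cluster_trees(X, y)
print(f"training accuracy: {(predict(km, trees, X) == y).mean():.3f}")
```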
8.
Layered point clouds: a simple and efficient multiresolution structure for distributing and rendering gigantic point-sampled models
We recently introduced an efficient multiresolution structure for distributing and rendering very large point-sampled models on consumer graphics platforms [1]. The structure is based on a hierarchy of precomputed object-space point clouds that are combined coarse-to-fine at rendering time to locally adapt sample densities according to the projected size in the image. The progressive block-based refinement nature of the rendering traversal exploits on-board caching and object-based rendering APIs, hides out-of-core data access latency through speculative prefetching, and lends itself well to incorporating backface, view frustum, and occlusion culling, as well as compression and view-dependent progressive transmission. The resulting system allows rendering of complex out-of-core models at high frame rates (over 60 M rendered points/second), supports network streaming, and is fundamentally simple to implement. We demonstrate the efficiency of the approach on a number of very large models, stored on local disks or accessed through a consumer-level broadband network, including a massive 234 M-sample isosurface generated by a compressible turbulence simulation and a 167 M-sample model of Michelangelo's St. Matthew. Many of the details of our framework were presented in a previous study. We here provide a more thorough exposition, but also significant new material, including the presentation of a higher quality bottom-up construction method and additional qualitative and quantitative results.
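A sketch of the coarse-to-fine traversal the abstract describes, stripped of culling, caching, and prefetching: every visited node contributes its precomputed point block, and children are visited only while the node's projected sample spacing is still too coarse on screen. The node layout, the screen-space projection formula, and the constants are illustrative assumptions.

```python
# Simplified LOD traversal: no culling, caching, or prefetching; the
# projection formula and constants are illustrative.
from dataclasses import dataclass, field

@dataclass
class Node:
    points: list                  # precomputed point block for this level
    spacing: float                # object-space sample spacing
    distance: float               # viewer distance (precomputed here)
    children: list = field(default_factory=list)

def render(node, focal_px, tol_px, draw):
    draw(node.points)             # every visited level adds samples
    projected = focal_px * node.spacing / node.distance   # pixels on screen
    if projected > tol_px:        # still too coarse: refine with children
        for child in node.children:
            render(child, focal_px, tol_px, draw)

child = Node(points=["p2", "p3"], spacing=0.5, distance=10.0)
root = Node(points=["p0", "p1"], spacing=1.0, distance=10.0, children=[child])
render(root, focal_px=800, tol_px=2.0, draw=print)
```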
9.
Temporal considerations play a key role in the planning and operation of a manufacturing system. The development of a temporal reasoning mechanism would facilitate effective and efficient computer-aided process planning and dynamic scheduling. We feel that a temporal system that makes use of the expressive power of the interval language and the computational ease of the point language will be best suited to reasoning about time within the manufacturing system. The concept of a superinterval, or a collection of intervals, is used to augment a hybrid point-interval temporal system. We have implemented a reasoning algorithm that can be used to aid temporal decision making within the manufacturing environment. Using the quantitative results obtained by measuring our program's performance, we show how the superinterval can be used to partition large temporal systems into smaller ones to facilitate distributed processing of the smaller systems. The distributed processing of large temporal systems helps achieve real-time temporal decision-making capabilities. Such a reasoning system will facilitate automation of the planning and scheduling functions within the manufacturing environment and provide the framework for an autonomous production facility.
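A minimal sketch of the point/interval combination above: intervals are stored as endpoint pairs, so qualitative interval relations reduce to cheap point comparisons, and a superinterval is simply the bounding interval of a group, letting whole groups be compared before any pairwise checks. Relation names follow common interval-algebra usage; the scheduling data is made up.

```python
# Intervals as endpoint pairs: relations become point comparisons.
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    start: float
    end: float

def before(a, b):   return a.end < b.start
def overlaps(a, b): return a.start < b.start < a.end < b.end
def during(a, b):   return b.start < a.start and a.end < b.end

def superinterval(intervals):
    """Bounding interval of a group; whole groups can then be compared
    first, partitioning a large temporal system into smaller ones."""
    return Interval(min(i.start for i in intervals),
                    max(i.end for i in intervals))

setup = [Interval(0, 3), Interval(2, 5)]       # hypothetical operations
machining = [Interval(6, 9), Interval(8, 12)]
print(before(superinterval(setup), superinterval(machining)))  # True
```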
10.
Home health care, i.e., visiting and nursing patients in their homes, is a growing sector in the medical service business. From a staff rostering point of view, the problem is to find a feasible working plan for all nurses that respects a variety of hard and soft constraints and preferences. Additionally, home health care problems contain a routing component: a nurse must be able to visit her patients in a given roster using a car or public transport. It is desired to design rosters that consider both the staff rostering and vehicle routing components, while minimizing transportation costs and maximizing the satisfaction of patients and nurses.
11.
Shaojun Xie, Baisong Pan, Xiaoping Du. Structural and Multidisciplinary Optimization 56(6):1493–1505, 2017
Random variables can be dependent, and so can interval variables. To accommodate dependent interval variables, this work develops an efficient hybrid reliability analysis method that handles both random and dependent interval input variables. Because of the dependent interval variables, the reliability analysis needs to perform the probability analysis for every combination of the interval variables; this involves a nested double-loop procedure and dramatically decreases the efficiency. The proposed method decomposes the nested double loop into a sequential probability analysis (PA) loop and interval analysis (IA) loop. An efficient IA algorithm based on the cut-HDMR (High Dimensional Model Representation) expansion is developed, and a sampling strategy with the leave-one-out technique that requires no extra calls to the limit-state function is proposed. The proposed method shows good accuracy and efficiency, as demonstrated by two engineering examples.
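An illustrative toy of the decomposition above: the PA loop is a Monte Carlo failure-probability estimate at fixed interval-variable values, and the IA loop sweeps the interval variable to bound the failure probability. A plain grid sweep stands in for the paper's cut-HDMR-based interval analysis, and the limit-state function, distribution, and interval bounds are assumptions for illustration.

```python
# Toy decomposition: PA at fixed interval value, IA sweeping the interval.
import numpy as np

rng = np.random.default_rng(0)

def g(x, y):                         # assumed limit state: failure if g < 0
    return 5.0 - x - y

X = rng.normal(2.0, 0.5, 20_000)     # samples of the random variable x
y_grid = np.linspace(1.0, 2.0, 21)   # interval variable y in [1, 2]

def pa(y):                           # PA loop: Monte Carlo at fixed y
    return np.mean(g(X, y) < 0.0)

pf = [pa(y) for y in y_grid]         # IA loop: sweep the interval
print(f"failure probability range: [{min(pf):.4f}, {max(pf):.4f}]")
```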
12.
In this study, a new multi-criteria classification technique for nominal and ordinal groups is developed by expanding the UTilites Additives DIScriminantes (UTADIS) method with a polynomial of degree T, which is used as the utility function of each attribute rather than a piecewise linear approximation of it. We call this method PUTADIS. The objective is to calculate the coefficients of the polynomial, the class thresholds, and the attribute weights so as to minimize the number of misclassification errors. The unknown parameters of the problem are estimated using a hybrid algorithm that combines particle swarm optimization (PSO) and a genetic algorithm (GA). The results obtained by applying the model to different datasets and comparing its performance with previous methods show the high efficiency of the proposed method.
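A minimal sketch of the PUTADIS scoring scheme as the abstract describes it: each attribute gets a degree-T polynomial utility, the weighted utilities are summed into a global score, and a threshold cuts the score into ordered classes, with all parameters fitted by minimizing the misclassification count. SciPy's differential evolution stands in for the paper's PSO + GA hybrid, and the dataset, degree, and bounds are synthetic assumptions.

```python
# SciPy's differential evolution stands in for the PSO + GA hybrid; the
# dataset, degree T, and parameter bounds are synthetic assumptions.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (60, 2))            # 60 cases, 2 attributes
y = (X.sum(axis=1) > 1.0).astype(int)     # 2 ordered classes
T = 2                                     # polynomial degree

def classify(params, X):
    coefs = params[:2 * T].reshape(2, T)  # per-attribute polynomial coefs
    w, thr = params[2 * T:2 * T + 2], params[-1]
    powers = np.stack([X ** (t + 1) for t in range(T)], axis=-1)
    utility = (powers * coefs).sum(axis=-1) @ np.abs(w)   # global utility
    return (utility > thr).astype(int)    # threshold separates the classes

def misclassifications(params):           # fitness to minimize
    return np.sum(classify(params, X) != y)

bounds = [(-2.0, 2.0)] * (2 * T + 3)      # 4 coefs + 2 weights + 1 threshold
result = differential_evolution(misclassifications, bounds, seed=0)
print("training misclassifications:", misclassifications(result.x))
```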
13.
Nowadays, network function virtualization (NFV) has drawn immense attention from many cloud providers because of its benefits. NFV enables networks to virtualize node functions such as firewalls, load balancers, and WAN accelerators, conventionally run on dedicated hardware, and instead implements them as virtual software components on standard servers, switches, and storage. In order to provide NFV resources, meet Service Level Agreement (SLA) conditions, minimize energy consumption, and utilize physical resources efficiently, resource allocation in the cloud is an essential task. Since network traffic changes rapidly, an optimized resource allocation strategy should consider resource auto-scaling for NFV services. In order to scale cloud resources, the NFV workload must be forecast. Existing forecasting methods provide poor results for highly volatile and fluctuating time series such as cloud workloads. Therefore, we propose a novel hybrid wavelet time series decomposer and GMDH-ELM ensemble method, named Wavelet-GMDH-ELM (WGE), for NFV workload forecasting, which predicts and ensembles the workload at different time-frequency scales. We evaluate the WGE model on three real cloud workload traces to verify its prediction accuracy and compare it with state-of-the-art methods. The results show that the proposed method provides better average prediction accuracy; in particular, it improves the Mean Absolute Percentage Error (MAPE) by at least 8% compared to rival forecasting methods such as support vector regression (SVR) and long short-term memory (LSTM).
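A sketch of the decompose-forecast-recombine scheme the abstract describes: the workload series is split into additive wavelet components (one per time-frequency scale), each component is forecast separately, and the component forecasts are summed. A plain linear autoregression stands in for the GMDH-ELM ensemble, and the synthetic workload, wavelet choice, and lag count are assumptions.

```python
# A linear autoregression stands in for GMDH-ELM; workload is synthetic.
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.arange(256)
load = 50 + 10 * np.sin(t / 8) + rng.normal(0, 2, t.size)   # toy workload

def wavelet_components(x, wavelet="db4", level=3):
    """Split x into additive components, one per time-frequency band."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    comps = []
    for i in range(len(coeffs)):          # reconstruct one band at a time
        kept = [c if j == i else np.zeros_like(c)
                for j, c in enumerate(coeffs)]
        comps.append(pywt.waverec(kept, wavelet)[: len(x)])
    return comps                          # components sum back to x

def ar_forecast(x, lags=8):
    """One-step-ahead linear autoregressive forecast."""
    rows = np.array([x[i - lags:i] for i in range(lags, len(x))])
    beta, *_ = np.linalg.lstsq(rows, x[lags:], rcond=None)
    return x[-lags:] @ beta

forecast = sum(ar_forecast(c) for c in wavelet_components(load))
print(f"next-step workload forecast: {forecast:.2f}")
```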
14.
Mehdi Khashei, Ali Zeinal Hamadani, Mehdi Bijari. Expert Systems with Applications 39(3):2606–2620, 2012
The classification problem of assigning several observations into different disjoint groups plays an important role in business decision making and many other areas. Developing more accurate and widely applicable classification models has significant implications in these areas; this is why, despite the numerous classification models available, research on improving their effectiveness has never stopped. Combining several models, or using hybrid models, has become common practice for overcoming the deficiencies of single models and can be an effective way of improving their predictive performance, especially when the combined models are quite different. In this paper, a novel hybridization of artificial neural networks (ANNs) with multiple linear regression models is proposed in order to yield a more general and more accurate model than traditional artificial neural networks for solving classification problems. Empirical results indicate that the proposed hybrid model effectively improves classification accuracy in comparison with traditional artificial neural networks and also some other classification models, such as linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), K-nearest neighbor (KNN), and support vector machines (SVMs), on benchmark and real-world application data sets. These data sets vary in the number of classes (two versus multiple) and the source of the data (synthetic versus real-world). The hybrid model can therefore be applied as an appropriate alternative for solving classification problems, specifically when higher accuracy is needed.
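One simple way to realize the MLR/ANN hybridization the abstract describes (an illustrative reading, not necessarily the paper's exact design) is to fit the linear model first and feed its output to the network as an extra input, so the ANN concentrates on what the linear component cannot capture. The dataset and network size below are placeholder assumptions.

```python
# An illustrative hybrid: the MLR output becomes an extra ANN input.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=400, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

mlr = LinearRegression().fit(X_tr, y_tr)          # linear component first
augment = lambda X: np.column_stack([X, mlr.predict(X)])

ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(augment(X_tr), y_tr)
print(f"hybrid test accuracy: {ann.score(augment(X_te), y_te):.3f}")
```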
15.
Contextualized videos: combining videos with environment models to support situational understanding
Wang Y, Krum DM, Coelho EM, Bowman DA. IEEE Transactions on Visualization and Computer Graphics 13(6):1568–1575, 2007
Multiple spatially-related videos are increasingly used in security, communication, and other applications. Since it can be difficult to understand the spatial relationships between multiple videos in complex environments (e.g. to predict a person's path through a building), some visualization techniques, such as video texture projection, have been used to aid spatial understanding. In this paper, we identify and begin to characterize an overall class of visualization techniques that combine video with 3D spatial context. This set of techniques, which we call contextualized videos, forms a design palette which must be well understood so that designers can select and use appropriate techniques that address the requirements of particular spatial video tasks. In this paper, we first identify user tasks in video surveillance that are likely to benefit from contextualized videos and discuss the video, model, and navigation related dimensions of the contextualized video design space. We then describe our contextualized video testbed which allows us to explore this design space and compose various video visualizations for evaluation. Finally, we describe the results of our process to identify promising design patterns through user selection of visualization features from the design space, followed by user interviews.
16.
Multimedia Tools and Applications - Forest fire poses a serious threat to wildlife, the environment, and all mankind. This threat has prompted the development of various intelligent and computer vision...
17.
The laser altimetry (LAM) dataset obtained by Chang'E-1 (CE-1) contains about 8.6 million points, and how to use it to model and visualize the lunar surface is an open problem. This paper presents a 3D, multiresolution, approximate lunar surface model based on a subdivision-surface wavelet, as well as efficient tools for rendering the three-dimensional surface at speeds approaching real-time interaction in a general Personal Computer (PC) environment. The surface model is C2-continuous at nearly all points. The modeling and visualization method could be applied to most other global data sets.
18.
Computation-intensive analyses/simulations are becoming increasingly common in engineering design problems. To improve computational efficiency, surrogate models are used to replace expensive simulations of engineering problems. This paper proposes a new high-fidelity surrogate modeling approach called the Sparsity-promoting Polynomial Response Surface (SPPRS). In the SPPRS model, a series of Legendre polynomials is selected as basis functions, with their number matched to the sample size so as to enhance the ability to express complex functional relationships. The coefficients associated with the basis functions are estimated using a "sparsity-promoting" regression approach that combines two techniques: least squares and ℓ1-norm regularization. As a result, only the basis functions relevant to explaining the functional relationship are retained, which helps to ease overfitting to the training points. With the sparsity-promoting regression approach, the surrogate model aims to capture both the global trend of the functional variation and reasonable local accuracy in the neighborhood of the training points. Additionally, Latin hypercube design (LHD) is shown to be conducive to improving the predictive capability of the model. The SPPRS is applied to seven benchmark test functions and a complex engineering problem. The results illustrate the promising benefits of this novel surrogate modeling technique.
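A minimal 1-D sketch of the SPPRS recipe above: sample training points from a Latin hypercube design, build a Legendre basis, and fit the coefficients with an ℓ1-regularized least-squares regression (Lasso) so that irrelevant basis terms are zeroed out. The test function, polynomial degree, and penalty are illustrative assumptions.

```python
# Legendre basis + Lasso (least squares with an l1 penalty); the test
# function, degree, and penalty are illustrative assumptions.
import numpy as np
from numpy.polynomial import legendre
from scipy.stats import qmc
from sklearn.linear_model import Lasso

f = lambda x: np.sin(3 * x) + 0.5 * x ** 2        # expensive-model stand-in

# Latin hypercube design on [-1, 1], where Legendre polynomials live.
x = 2 * qmc.LatinHypercube(d=1, seed=0).random(25).ravel() - 1
degree = 12
Phi = legendre.legvander(x, degree)               # Legendre design matrix

model = Lasso(alpha=1e-3, max_iter=50_000).fit(Phi, f(x))
print("nonzero basis terms:", np.flatnonzero(model.coef_))

x_new = np.linspace(-1, 1, 5)                     # quick accuracy check
pred = model.predict(legendre.legvander(x_new, degree))
print("errors:", np.round(pred - f(x_new), 3))
```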
19.
Applied Intelligence - Currently available Collaborative Filtering (CF) algorithms often utilize user behavior data to generate recommendations. The similarity calculation between users is mostly...
20.
This paper describes a novel structural approach to recognizing human facial features for emotion recognition. Conventionally, features extracted from facial images are represented by relatively poor representations, such as arrays or sequences, with a static data structure. In this study, we propose to extract facial expression feature vectors as Localized Gabor Features (LGF) and then transform these feature vectors into a FacE Emotion Tree Structures (FEETS) representation. It is an extension of the Human Face Tree Structures (HFTS) representation presented in (Cho and Wong, Lecture Notes in Computer Science, pp. 1245–1254, 2005). This facial representation is able to mimic how humans perceive a real face, and both the entities and their relationships contribute to the facial expression features. Moreover, a new structural connectionist architecture based on a probabilistic approach to adaptive processing of data structures is presented. The so-called probabilistic-based recursive neural network (PRNN) model, extended from Frasconi et al. (IEEE Trans Neural Netw 9:768–785, 1998), is developed to train and recognize human emotions by generalizing the FEETS representation. For empirical studies, we benchmarked our emotion recognition approach against other well-known classifiers. Using public domain databases, such as the Japanese Female Facial Expression (JAFFE) database (Lyons et al., IEEE Trans Pattern Anal Mach Intell 21(12):1357–1362, 1999; Lyons et al., Third IEEE International Conference on Automatic Face and Gesture Recognition, 1998) and the Cohn–Kanade AU-Coded Facial Expression (CMU) database (Cohn et al., 7th European Conference on Facial Expression Measurement and Meaning, 1997), our proposed system obtains an accuracy of about 85–95% under subject-dependent and subject-independent conditions. Moreover, when tested on images containing artifacts, the proposed model demonstrates robust facial emotion recognition.
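A minimal sketch of what generalizing a tree-structured representation with a recursive network can look like: each node's embedding is computed bottom-up from its own feature vector and its children's embeddings, and the root embedding would feed a classifier. This is a plain deterministic recursive network for illustration, not the paper's probabilistic PRNN, and all sizes and the toy face tree are assumptions.

```python
# A deterministic recursive network for illustration (not the PRNN).
import numpy as np

rng = np.random.default_rng(0)
D = 4                                  # feature / embedding size (assumed)
W_x = 0.5 * rng.normal(size=(D, D))    # weights for the node's features
W_c = 0.5 * rng.normal(size=(D, D))    # weights for child embeddings

def embed(node):
    """node = (features, [children]); bottom-up embedding of the tree."""
    x, children = node
    kids = sum((embed(c) for c in children), np.zeros(D))
    return np.tanh(W_x @ x + W_c @ kids)

# Toy face tree: a root node with three facial-component leaves.
leaf = lambda: (rng.normal(size=D), [])
face = (rng.normal(size=D), [leaf(), leaf(), leaf()])
print(np.round(embed(face), 3))        # root embedding feeds a classifier
```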