Similar literature
Found 20 similar documents (search time: 15 ms)
1.
The application of the principles of mathematical modelling to measurement theory is reviewed, and the implications of classifying measurement models by the mathematical operations applicable to the resultant scales are summarized. The conditions for the applicability of the operations of addition and multiplication imply that the models of classical measurement theory do not support these operations. For example, measurement without units does not enable the operations of addition and multiplication. Ratios of preference scales are shown to be undefined, thus invalidating the analytic hierarchy process. The models of classical measurement theory, including utility theory, are weak and do not enable the operations of addition and multiplication either.
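The unit-dependence point can be illustrated with a toy example. The function and values below are our own illustration, not from the paper: on an interval scale such as temperature, an admissible affine change of units alters ratios, so "A is twice B" is not a meaningful statement.

```python
def to_fahrenheit(celsius):
    """An admissible affine transformation of the Celsius interval scale."""
    return celsius * 9.0 / 5.0 + 32.0

a_c, b_c = 20.0, 10.0                               # two temperatures in Celsius
ratio_c = a_c / b_c                                 # 2.0 in Celsius units
ratio_f = to_fahrenheit(a_c) / to_fahrenheit(b_c)   # 68/50 = 1.36 in Fahrenheit
print(ratio_c, ratio_f)                             # the "ratio" depends on the unit chosen
```

Because the ratio changes under an admissible transformation of the scale, ratio statements are not defined for such measurements, which is the objection raised above to ratio-based procedures such as the analytic hierarchy process.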

2.
Internet worms are a significant security threat. Divide-conquer scanning is a simple yet effective technique that can potentially be exploited for future Internet epidemics. Therefore, it is imperative that defenders understand the characteristics of divide-conquer-scanning worms and study effective countermeasures. In this work, we first examine the divide-conquer-scanning worm and its potential to spread faster and stealthier than a traditional random-scanning worm. We then characterize the relationship between the propagation speed of divide-conquer-scanning worms and the distribution of vulnerable hosts through mathematical analysis and simulations. Specifically, we find that if vulnerable hosts follow a non-uniform distribution such as the Witty-worm victim distribution, divide-conquer scanning can spread a worm much faster than random scanning. We also empirically study the effect of important parameters on the spread of divide-conquer-scanning worms and a worm variant that can potentially enhance the infection ability at the late stage of worm propagation. Furthermore, to counteract such attacks, we discuss the weaknesses of divide-conquer scanning and study two defense mechanisms: infected-host removal and active honeynets. We find that although the infected-host removal strategy can greatly reduce the number of final infected hosts, active honeynets (especially uniformly distributed active honeynets) are more practical and effective in defending against divide-conquer-scanning worms.
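The handoff at the heart of divide-conquer scanning can be sketched in a few lines. This is our own toy illustration (the `split_range` helper is hypothetical, not from the paper): each newly infected host receives half of its infector's unscanned address space, so the worm sweeps the space without overlapping scans.

```python
def split_range(lo, hi):
    """Infector keeps [lo, mid); the newly infected victim receives [mid, hi)."""
    mid = (lo + hi) // 2
    return (lo, mid), (mid, hi)

# one infection chain over a toy 16-address space
parent = (0, 16)
parent, child = split_range(*parent)        # parent: (0, 8), child: (8, 16)
child, grandchild = split_range(*child)     # child: (8, 12), grandchild: (12, 16)
print(parent, child, grandchild)
```

With a non-uniform victim distribution, whichever host's sub-range contains the vulnerable cluster keeps finding victims quickly, which is why the analysis above finds divide-conquer scanning faster than random scanning in that setting.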

3.
Knowledge and Information Systems - Despite the high accuracy offered by state-of-the-art deep natural-language models (e.g., LSTM, BERT), their application in real-life settings is still widely...

4.
In existing Active Access Control (AAC) models, the scalability and flexibility of security policy specification should be well balanced; in particular: (1) authorizations to large numbers of tasks should be simplified; (2) team workflows should be enabled; (3) fine-grained constraints should be enforced. To address this issue, a family of Association-Based Active Access Control (ABAAC) models is proposed. In the minimal model ABAAC0, users are assigned to roles while permissions are assigned to task-role associations. In a workflow case, to execute such an association, some users assigned to its component role are allocated; they can perform the association's assigned permissions while the task is running in the case. In ABAAC1, a generalized association is employed to extract common authorizations from multiple associations. In ABAAC2, fine-grained separation of duty (SoD) is enforced among associations. In the maximal model ABAAC3, all these features are integrated, and similar constraints can be specified more concisely. Validation is performed using a software workflow case. Comparison with a representative association-based AAC model and the most scalable AAC model to date indicates that: (1) sufficient scalability is achieved; (2) without decomposing a task, different permissions can be authorized to multiple roles within it; (3) separation of duties more fine-grained than roles and tasks can be enforced.

5.
This paper analyzes the emergent behaviors of pedestrian groups that learn through the multiagent reinforcement learning model developed in our group. Five scenarios studied in the pedestrian-modelling literature, with different levels of complexity, were simulated in order to analyze the robustness and scalability of the model. First, a reduced group of agents must learn by interaction with the environment in each scenario; in this phase, each agent learns its own kinematic controller, which will drive it at simulation time. Second, the number of simulated agents is increased in each scenario where agents have previously learned, to test the appearance of emergent macroscopic behaviors without additional learning. This strategy allows us to evaluate the robustness, consistency, and quality of the learned behaviors. For this purpose, several tools from pedestrian dynamics, such as fundamental diagrams and density maps, are used. The results reveal that the developed model is capable of simulating human-like micro and macro pedestrian behaviors for the simulation scenarios studied, including those where the number of pedestrians has been scaled by one order of magnitude with respect to the situation learned.

6.
An identification method is described to determine a weighted combination of local linear state-space models from input and output data. Normalized radial basis functions are used for the weights, and the system matrices of the local linear models are fully parameterized. By iteratively solving a non-linear optimization problem, the centres and widths of the radial basis functions and the system matrices of the local models are determined. To deal with the non-uniqueness of the fully parameterized state-space system, a projected gradient search algorithm is described. It is pointed out that when the weights depend only on the input, the dynamical gradient calculations in the identification method are stable. When the weights also depend on the output, certain difficulties might arise. The methods are illustrated using several examples that have been studied in the literature before.
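As a rough sketch of the model structure described above (our own simplified code, not the authors' identification method), the blended one-step prediction is a normalized-RBF-weighted combination of local linear state-space updates; the scalar scheduling variable `p`, the centres, widths, and all system matrices below are illustrative assumptions.

```python
import numpy as np

def nrbf_weights(p, centres, widths):
    """Normalized radial basis function weights for scheduling variable p."""
    phi = np.exp(-((p - centres) ** 2) / (2.0 * widths ** 2))
    return phi / phi.sum()

def blended_step(x, u, p, centres, widths, As, Bs):
    """One-step prediction: x_next = sum_i w_i(p) * (A_i x + B_i u)."""
    w = nrbf_weights(p, centres, widths)
    return sum(wi * (A @ x + B * u) for wi, A, B in zip(w, As, Bs))

# two illustrative first-order local models, scheduled on the input itself
centres, widths = np.array([-1.0, 1.0]), np.array([1.0, 1.0])
As = [np.array([[0.9]]), np.array([[0.5]])]
Bs = [np.array([1.0]), np.array([0.2])]
x_next = blended_step(np.array([1.0]), 0.5, 0.5, centres, widths, As, Bs)
```

In the identification method itself, the centres, widths, and the fully parameterized (A_i, B_i) are all estimated by iterative non-linear optimization; here they are fixed by hand.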

7.
8.
Inductive Logic Programming (ILP) combines rule-based and statistical artificial intelligence methods by learning a hypothesis comprising a set of rules, given background knowledge and constraints for the search space. We focus on extending the XHAIL algorithm for ILP, which is based on Answer Set Programming, and we evaluate our extensions using the Natural Language Processing application of sentence chunking. With respect to processing natural language, ILP can cater for the constant change in how we use language on a daily basis. At the same time, ILP does not require huge amounts of training examples, unlike other statistical methods, and produces interpretable results, that is, a set of rules which can be analysed and tweaked if necessary. As contributions we extend XHAIL with (i) a pruning mechanism within the hypothesis generalisation algorithm which enables learning from larger datasets, (ii) better usage of modern solver technology using recently developed optimisation methods, and (iii) a time budget that permits the usage of suboptimal results. We evaluate these improvements on the task of sentence chunking using three datasets from a recent SemEval competition. Results show that our improvements allow learning on bigger datasets with results of similar quality to state-of-the-art systems on the same task. Moreover, we compare the hypotheses obtained on the different datasets to gain insights into the structure of each dataset.

9.
Mental models and computer modelling
This paper is concerned with the place of mental models in the process of knowledge construction and, in particular, the relationship between mental models and computer models in that process. It identifies what is meant by mental models and outlines why they might be seen as central to the process of acquiring knowledge. In this context, the second part of the paper analyses and discusses children's use of a spreadsheet to build their own computer models. It is suggested that the process of building models on a computer may provide direct support to the cognitive processes of constructing mental models, although the relationship is not straightforward.

10.
Radial basis function networks are traditionally known as local approximation networks, as they are composed of a number of elements which, individually, mainly take care of the approximation in a specific region of the input space. The joint global output of the network is then obtained as a linear combination of the individual elements' outputs. In the network optimization, however, the performance of the global model is normally the only objective to optimize. This can cause deficient local modelling of the input space, thus partially losing the local character of this type of model. This work presents a modified radial basis function network that maintains the approximation capabilities of the local sub-models while the model is globally optimized. This property is obtained thanks to a special partitioning of the input space that leads to a direct global-local optimization. A learning methodology adapted to the proposed model is used in the simulations, consisting of a clustering algorithm for the initialization of the centres and a local search technique. In the experiments, the proposed model shows satisfactory local and global modelling capabilities in both artificial and real applications.

11.
This paper presents local methods for modelling and control of discrete-time unknown non-linear dynamical systems, when only input-output data are available. We propose the adoption of lazy learning, a memory-based technique for local modelling. The modelling procedure uses a query-based approach to select the best model configuration by assessing and comparing different alternatives. A new recursive technique for local model identification and validation is presented, together with an enhanced statistical method for model selection. Also, three methods to design controllers based on the local linearization provided by the lazy learning algorithm are described. In the first method the lazy technique returns the forward and inverse models of the system, which are used to compute the control action to take. The second is an indirect method inspired by self-tuning regulators, where recursive least squares estimation is replaced by a local approximator. The third method combines the linearization provided by the local learning techniques with optimal linear control theory, to control non-linear systems about regimes which are far from the equilibrium points. Simulation examples of identification and control of non-linear systems starting from observed data are given.
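The core of the lazy learning step, fitting a local linear model on the nearest stored samples only when a query arrives, can be sketched as follows (a minimal 1-D illustration with assumed data, not the recursive identification or model selection machinery described above):

```python
import numpy as np

def lazy_predict(X, y, x_query, k=5):
    """Memory-based prediction: fit a local linear model on the k nearest samples."""
    idx = np.argsort(np.abs(X - x_query))[:k]          # k nearest neighbours
    A = np.column_stack([X[idx], np.ones(k)])
    coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)  # local least squares fit
    return coef[0] * x_query + coef[1]                 # evaluate local linearization

X = np.linspace(0.0, 6.0, 60)
y = np.sin(X)                     # stored input-output data of an unknown system
pred = lazy_predict(X, y, 1.5)    # close to sin(1.5)
```

The local linear coefficients (`coef`) are exactly the linearization that the controller-design methods above build on.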

12.
International Journal of Control, Automation and Systems - This paper concerns the problem of designing a fault detection (FD) observer for Takagi-Sugeno (T-S) fuzzy systems subject to local...

13.
We consider the location of paper watermarks in documents that present problems such as variable paper thickness, stain and other damage. Earlier work has shown success in exploiting a computational model of backlit image acquisition – here we enhance this approach by incorporating knowledge of surface verso features. Robustly removing recto features using established techniques, we present a registration approach that permits similarly robust removal of verso, leaving only features attributable to watermark, folds, chain lines and inconsistencies of paper manufacture. Experimental results illustrate the success of the approach.

14.
This paper proposes a novel face detection method using local gradient patterns (LGP), in which each bit of the LGP is assigned 1 if the corresponding neighboring gradient of a given pixel is greater than the average of the eight neighboring gradients, and 0 otherwise. The LGP representation is insensitive to global intensity variations, like other representations such as local binary patterns (LBP) and the modified census transform (MCT), and also to local intensity variations along edge components. We show that LGP has higher discriminative power than LBP in both the difference between the face and non-face histograms and the detection error based on the face/face and face/non-face distances. We also greatly reduce the false positive detection error by accumulating evidence from multi-scale detection results with negligible extra computation time. In experiments using the MIT+CMU and FDDB databases, the proposed LGP-based face detection followed by evidence accumulation provides a face detection rate that is 5–27% better than those of existing methods, and greatly reduces the number of false positives.
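Following the bit definition quoted above, the LGP code for a single pixel can be sketched as follows (our own minimal implementation; the neighbour ordering and the example patch are assumptions):

```python
import numpy as np

def lgp_code(patch):
    """Return the 8-bit local gradient pattern code of a 3x3 grey-level patch."""
    centre = patch[1, 1]
    neigh = np.array([patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                      patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]])
    grad = np.abs(neigh - centre)            # the eight neighbouring gradients
    bits = (grad > grad.mean()).astype(int)  # 1 iff gradient exceeds the average
    return int("".join(map(str, bits)), 2)

patch = np.array([[52, 50, 48],
                  [60, 50, 90],
                  [10, 55, 50]])
code = lgp_code(patch)   # only the two large gradients (to 90 and 10) set bits
```

Because each bit compares a gradient against the patch's own average gradient, adding a constant to all intensities leaves the code unchanged, which is the global-intensity invariance noted above.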

15.
HMM acoustic models are typically trained on a single set of cepstral features extracted over the full bandwidth of mel-spaced filterbank energies. In this paper, multi-resolution sub-band transformations of the log energy spectra are introduced, based on the conjecture that additional cues for phonetic discrimination may exist in local spectral correlates not captured by the full-band analysis. In this approach the discriminative contribution from sub-band features is considered to supplement rather than substitute for full-band features. HMMs trained on concatenated multi-resolution cepstral features are investigated, along with models based on linearly combined independent multi-resolution streams, in which the sub-band and full-band streams represent different resolutions of the same signal. For the stream-based models, discriminative training of the linear combination weights to a minimum classification error criterion is also applied. Both the concatenated-feature and the independent-stream modelling configurations are demonstrated to outperform traditional full-band cepstra for HMM-based acoustic phonetic modelling on the TIMIT database. Experiments on context-independent modelling improve core test set accuracy from 62.3% for full-band models to 67.5% for discriminatively weighted multi-resolution sub-band modelling. A triphone accuracy of 73.9% achieved on the core test set improves notably on full-band cepstra and compares well with results previously published on this task.
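The multi-resolution idea of supplementing full-band cepstra with sub-band cepstra of the same log filterbank energies can be sketched as follows (a toy illustration with assumed channel counts and feature sizes, not the actual front end used above):

```python
import numpy as np

def dct_cepstra(log_energies, n_cep):
    """DCT-II of log filterbank energies, truncated to n_cep cepstral coefficients."""
    n = len(log_energies)
    basis = np.cos(np.pi * np.outer(np.arange(n_cep), np.arange(n) + 0.5) / n)
    return basis @ log_energies

log_e = np.log(np.arange(1.0, 25.0))                 # 24 toy filterbank channels
full = dct_cepstra(log_e, 6)                         # full-band cepstra
low, high = dct_cepstra(log_e[:12], 3), dct_cepstra(log_e[12:], 3)  # sub-band cepstra
features = np.concatenate([full, low, high])         # supplement, not substitute
```

The concatenated vector corresponds to the first configuration above; the stream-based configuration would instead keep `full`, `low`, and `high` as separate, linearly combined streams.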

16.
This paper addresses local models (LM) for periodic time series (TS) prediction. A novel prediction method is introduced, which achieves high prediction accuracy by extracting relevant data from the historical TS for LM training. According to the proposed method, the period of the TS is determined using the autocorrelation function and a moving average filter. A segment of relevant historical data is determined for each time step of the TS period. The data for LM training are selected on the basis of the k-nearest-neighbours approach with a new hybrid usefulness-related distance, whose definition takes into account the usefulness of data for making predictions at a given time step. During the training procedure, only the most informative lags are taken into account; their number is determined in accordance with Kraskov's mutual information criterion. The proposed approach enables effective application of various machine learning (ML) techniques for prediction making in expert and intelligent systems. The effectiveness of this approach was experimentally verified for three popular ML methods: neural network, support vector machine, and adaptive neuro-fuzzy inference system. The complexity of the LMs was reduced by TS preprocessing and informative lag selection. Experiments on synthetic and real-world datasets, covering various application areas, confirm that the proposed period-aware method can give better prediction accuracy than state-of-the-art global models and LMs. Moreover, the data selection reduces the size of the training dataset, so the LMs can be trained in a shorter time.
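The first step of the method, estimating the TS period from the autocorrelation function, might be sketched as follows (our own simplified code, without the moving average filter, the hybrid distance, or the lag selection):

```python
import numpy as np

def estimate_period(ts):
    """Return the lag of the first local maximum of the autocorrelation function."""
    x = ts - ts.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0 .. len(x)-1
    acf = acf / acf[0]                                  # normalize to acf[0] = 1
    for lag in range(1, len(acf) - 1):
        if acf[lag] > acf[lag - 1] and acf[lag] >= acf[lag + 1]:
            return lag
    return None

t = np.arange(240)
ts = np.sin(2.0 * np.pi * t / 24)   # toy daily profile with period 24
period = estimate_period(ts)        # -> 24
```

Once the period is known, each time step of the period gets its own segment of relevant historical data and its own local model.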

17.
Adaptive smoothing via contextual and local discontinuities
A novel adaptive smoothing approach is proposed for noise removal and feature preservation, in which two distinct measures are simultaneously adopted to detect discontinuities in an image. Inhomogeneity underlying an image is employed as a multiscale measure to detect contextual discontinuities for feature preservation and control of the smoothing speed, while the local spatial gradient is used to detect variable local discontinuities during smoothing. Unlike previous adaptive smoothing approaches, the two discontinuity measures are combined in our algorithm for synergy in preserving nontrivial features, which leads to a constrained anisotropic diffusion process in which inhomogeneity offers intrinsic constraints for selective smoothing. Thanks to these intrinsic constraints, our smoothing scheme is insensitive to the termination time, and the resultant images over a wide range of iterations are nearly identical for various early vision tasks. Our algorithm is formally analyzed and related to anisotropic diffusion. Comparative results indicate that our algorithm yields favorable smoothing results, and its application to the extraction of hydrographic objects demonstrates its usefulness as a tool for early vision.
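Gradient-adapted smoothing of the kind described above can be sketched in one dimension (our own simplified code using only the local gradient measure; the approach above additionally uses a multiscale inhomogeneity measure and works on 2-D images):

```python
import numpy as np

def adaptive_smooth(signal, n_iter=50, kappa=10.0, dt=0.2):
    """Iterative smoothing with conductance that vanishes at strong gradients."""
    s = signal.astype(float).copy()
    for _ in range(n_iter):
        grad_r = np.diff(s, append=s[-1])        # forward differences
        grad_l = np.diff(s, prepend=s[0])        # backward differences
        c_r = np.exp(-((grad_r / kappa) ** 2))   # conductance: ~0 across edges
        c_l = np.exp(-((grad_l / kappa) ** 2))
        s += dt * (c_r * grad_r - c_l * grad_l)  # diffuse, but not across edges
    return s

step = np.concatenate([np.zeros(50), 100.0 * np.ones(50)])
noisy = step + np.random.default_rng(1).normal(0.0, 2.0, 100)
smoothed = adaptive_smooth(noisy)                # noise removed, edge preserved
```

Small gradients (noise) see conductance near 1 and are diffused away, while the large gradient at the step sees conductance near 0 and survives, which is the selective-smoothing behaviour described above.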

18.
The method of fuzzy-model-based control has emerged as an alternative approach to the solution of analysis and synthesis problems associated with plants that exhibit complex non-linear behaviour. At present, the literature in this field has addressed the control design problem related to the stabilization of state-space fuzzy models. In practical situations, however, where perturbations exist in the state-space model, the problem becomes one of robust stabilization that has yet to be posed and solved. The present paper contributes in this direction through the development of a framework that exploits the distinctive property of the fuzzy model as the convex hull of linear system matrices. Using such a quasi-linear model structure, the robust stabilization of complex non-linear systems, against modelling error and parametric uncertainty, based on static state or dynamic output feedback, is reduced to a linear matrix inequality (LMI) problem.

19.
The technique of multivariate discount weighted regression is used for forecasting multivariate time series. In particular, the discount regression model is modified to cater for the popular local level model for predicting vector time series. The proposed methodology is illustrated with London Metal Exchange data consisting of aluminium spot and futures contract closing prices. The estimate of the measurement noise covariance matrix suggests that these data exhibit high cross-correlation, which is discussed in some detail. The performance of the proposed model is evaluated via an error analysis based on the mean squared forecast error, the mean absolute forecast error, and the mean absolute percentage forecast error. A sensitivity analysis shows that a low discount factor should be used, and practical guidelines are given for general future use.
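Discount weighted estimation for a local level can be sketched in the univariate case (our own simplified code; the model above is multivariate): the discount factor delta down-weights past observations exponentially, so a lower delta makes the level adapt faster, in line with the sensitivity analysis.

```python
def discounted_level(ys, delta=0.7):
    """Recursive discount-weighted estimate of a local level."""
    level, weight = ys[0], 1.0
    for y in ys[1:]:
        weight = delta * weight + 1.0   # discounted effective sample size
        level += (y - level) / weight   # recursive discount-weighted update
    return level

# a series with a level shift: a low delta tracks the new level faster
series = [10.0, 10.4, 9.8, 10.2, 15.0, 15.2, 14.9]
fast, slow = discounted_level(series, 0.3), discounted_level(series, 0.99)
```

As delta approaches 1 the estimate tends to the ordinary sample mean, while a small delta effectively forgets all but the most recent observations.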

20.
Today, people have only limited, valuable spare time which they want to fill as well as possible according to their interests. At the same time, cultural institutions are trying to attract interested communities to their carefully planned cultural programs. To distribute these cultural events to the right people, we developed a framework that aggregates, enriches, recommends and distributes events in as targeted a manner as possible. The aggregated events are published as Linked Open Data using an RDF/OWL representation of the EventsML-G2 standard. These event items are categorised and enriched via smart indexing and linked open datasets available on the Web of Data. To recommend events to the end-user, a global profile of the end-user is automatically constructed by aggregating his profile information from all user communities the user trusts and is registered to. This way, the recommendations take into account profile information from different communities, which has a beneficial effect on the recommendations. As such, the ultimate goal is to provide an open, user-friendly recommendation platform that provides the end-user with a tool to access useful event information that goes beyond basic information retrieval. At the same time, we provide the (inter)national cultural community with standardised mechanisms to describe and distribute event and profile information.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号