Saliency prediction models provide a probabilistic map of the relative likelihood that regions of an image or video will attract the attention of the human visual system. Over the past decade, many computational saliency prediction models have been proposed for 2D images and videos. Considering that the human visual system has evolved in a natural 3D environment, it is only natural to design visual attention models for 3D content. Existing monocular saliency models cannot accurately predict the attentive regions when applied to 3D image/video content, as they do not incorporate depth information. This paper explores stereoscopic video saliency prediction by exploiting both low-level attributes such as brightness, color, texture, orientation, motion, and depth, and high-level cues such as faces, persons, vehicles, animals, text, and the horizon. Our model starts with a rough segmentation and quantifies several intuitive observations, such as the effects of visual discomfort level, depth abruptness, motion acceleration, elements of surprise, and the size and compactness of the salient regions, while emphasizing only a few salient objects in a scene. A new fovea-based model of spatial distance between image regions is adopted for local and global feature calculations. To efficiently fuse the conspicuity maps generated by our method into a single saliency map that is highly correlated with the eye-fixation data, a random-forest-based algorithm is used. The performance of the proposed saliency model is evaluated against the results of an eye-tracking experiment involving 24 subjects and an in-house database of 61 captured stereoscopic videos. Our stereo video database and the eye-tracking data are publicly available along with this paper. Experimental results show that the proposed method achieves competitive performance compared to state-of-the-art approaches.
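The abstract does not give the exact form of the fovea-based spatial distance, so the following is only a minimal sketch of the general idea: distances between image regions are weighted by their eccentricity from the current fixation, assuming a Gaussian falloff of acuity. The Gaussian form and the `sigma` value are assumptions for illustration, not the paper's formulation.

```python
import math

def fovea_weight(px, py, fx, fy, sigma=0.25):
    """Weight for a region center (px, py) relative to the current
    fixation (fx, fy); coordinates normalized to [0, 1].  Regions near
    the fovea get weight ~1, peripheral regions decay with an assumed
    Gaussian falloff."""
    d2 = (px - fx) ** 2 + (py - fy) ** 2
    return math.exp(-d2 / (2.0 * sigma ** 2))

def fovea_distance(p, q, fixation, sigma=0.25):
    """Spatial distance between two region centers p and q, scaled so
    that separations near the fovea count more than peripheral ones."""
    w = 0.5 * (fovea_weight(*p, *fixation, sigma)
               + fovea_weight(*q, *fixation, sigma))
    return w * math.dist(p, q)
```

With such a weighting, two regions close to the fixation point contribute more to local/global contrast computations than the same pair placed in the periphery.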
This work presents a new algorithm for solving the explicit/multi-parametric model predictive control (or mp-MPC) problem for linear, time-invariant discrete-time systems, based on dynamic programming and multi-parametric programming techniques. The algorithm features two key steps: (i) a dynamic programming step, in which the mp-MPC problem is decomposed into a set of smaller subproblems in which only the current control, state variables, and constraints are considered, and (ii) a multi-parametric programming step, in which each subproblem is solved as a convex multi-parametric programming problem, to derive the control variables as an explicit function of the states. The key feature of the proposed method is that it overcomes potential limitations of previous methods for solving multi-parametric programming problems with dynamic programming, such as the need for global optimization for each subproblem of the dynamic programming step.
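The product of an mp-MPC solve is a piecewise-affine control law: a set of polyhedral regions of the state space, each with its own affine feedback gain. A minimal sketch of the online evaluation step (the region data below is an illustrative toy, not the output of an actual mp-MPC solver):

```python
def in_region(A, b, x):
    """Check x against the halfspace description A x <= b of a region."""
    return all(sum(a_ij * x_j for a_ij, x_j in zip(row, x)) <= b_i + 1e-9
               for row, b_i in zip(A, b))

def explicit_mpc(regions, x):
    """Evaluate the explicit control law u = K x + g for the first
    polyhedral region containing the state x.
    regions: list of tuples (A, b, K, g)."""
    for A, b, K, g in regions:
        if in_region(A, b, x):
            return [sum(k * xj for k, xj in zip(row, x)) + gi
                    for row, gi in zip(K, g)]
    raise ValueError("state outside the feasible set")

# Toy 1-state example: one region covering [-1, 1] with u = -0.5 x
# (numbers are illustrative, not computed from an MPC problem).
regions = [([[1.0], [-1.0]], [1.0, 1.0], [[-0.5]], [0.0])]
```

The online controller thus reduces to a point-location problem plus one affine function evaluation, which is the practical payoff of computing the control law explicitly offline.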
Extraction-Transformation-Loading (ETL) tools are pieces of software responsible for the extraction of data from several sources, their cleansing, customization, and insertion into a data warehouse. The literature and personal experience have led us to conclude that the problems concerning ETL tools are primarily problems of complexity, usability, and price. To deal with these problems we provide a uniform metamodel for ETL processes, covering the aspects of data warehouse architecture, activity modeling, contingency treatment, and quality management. The ETL tool we have developed is capable of modeling and executing practical ETL scenarios by providing explicit primitives for capturing common tasks. It provides three ways to describe an ETL scenario: a graphical point-and-click front end and two declarative languages: XADL (an XML variant), which is more verbose and easy to read, and SADL (an SQL-like language), which has a quite compact syntax and is thus easier for authoring.
Missing data imputation is an important research topic in data mining. The impact of noise is seldom considered in previous work, while real-world data often contain much noise. In this paper, we systematically investigate the impact of noise on imputation methods and propose a new imputation approach, RIBG (robust imputation based on GMDH), which introduces the mechanism of the Group Method of Data Handling (GMDH) to deal with incomplete data containing noise. The performance of four commonly used imputation methods is compared with that of RIBG on nine benchmark datasets. The experimental results demonstrate that noise has a great impact on the effectiveness of imputation techniques and that RIBG is more robust to noise than the four benchmark methods.
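The abstract does not describe RIBG in enough detail to reproduce it here, but the evaluation protocol it implies can be sketched: delete entries from clean data, impute them (mean imputation stands in for a real method below), and score the reconstruction only at the missing positions. All function names are illustrative.

```python
def mean_impute(rows, missing=None):
    """Replace missing entries (None) in each column with the mean of
    that column's observed values."""
    cols = list(zip(*rows))
    means = [sum(v for v in c if v is not missing)
             / sum(v is not missing for v in c) for c in cols]
    return [[means[j] if v is missing else v for j, v in enumerate(row)]
            for row in rows]

def imputation_rmse(clean, incomplete):
    """RMSE between the original values and the imputed reconstruction,
    measured only at the positions that were deleted."""
    imputed = mean_impute(incomplete)
    errs = [(imputed[i][j] - clean[i][j]) ** 2
            for i in range(len(clean)) for j in range(len(clean[0]))
            if incomplete[i][j] is None]
    return (sum(errs) / len(errs)) ** 0.5
```

Running this protocol twice, once on clean data with deletions and once after additionally injecting noise into the observed entries, exposes exactly the sensitivity the paper measures.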
In this paper, a novel methodology for analysis of piecewise linear hybrid systems based on discrete abstractions of the continuous dynamics is presented. An important characteristic of the approach is that the available control inputs are taken into consideration in order to simplify the continuous dynamics. Control specifications such as safety and reachability specifications are formulated in terms of partitions of the state space of the system. The approach provides a convenient general framework not only for analysis, but also for controller synthesis of hybrid systems. The research contributions of this paper impact the areas of analysis, verification, and synthesis of piecewise linear hybrid systems.
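Once the continuous dynamics are abstracted onto a partition of the state space, safety and reachability specifications become graph questions over the abstract transition system. A minimal sketch (the transition relation below is a hand-written toy, not one computed from actual dynamics):

```python
from collections import deque

def reachable(transitions, init, target):
    """BFS over the discrete abstraction: abstract states are partition
    cells, edges are one-step transitions possible under the available
    controls.  Returns True if some target cell is reachable from an
    initial cell."""
    seen, queue = set(init), deque(init)
    while queue:
        cell = queue.popleft()
        if cell in target:
            return True
        for nxt in transitions.get(cell, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

def safe(transitions, init, bad):
    """Safety specification: the bad cells are never reachable."""
    return not reachable(transitions, init, bad)
```

Controller synthesis then amounts to restricting the transition relation (i.e. choosing controls) so that `safe` holds while the desired cells remain reachable.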
This article investigates the feasibility of incorporating an artificial neural network (ANN), as an innovative technique for modelling pavement structural condition, into pavement management systems. For the development of the ANN, strain assessment criteria are set in order to characterise the structural condition of flexible asphalt pavements with regard to fatigue failure. This initial task is followed by the development of an ANN model for the prediction of strains, based primarily on in situ field data rather than on synthetic databases. For this purpose, falling weight deflectometer (FWD) measurements were systematically conducted on a highway network, with ground-penetrating radar providing the required pavement thickness data. The FWD deflections were back-analysed to assess the strains used as output data in developing the ANN model. A paper exercise demonstrates how the developed ANN model, combined with the suggested conceptual approach for characterising pavement structural condition with regard to strain assessment, could support pavement management activities by categorising network pavement sections according to their need for maintenance or rehabilitation. Preliminary results indicate that the ANN technique could assist policy decision makers in deriving optimum strategies for planning pavement infrastructure maintenance.
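The paper's network architecture and field data are not reproduced here; the sketch below only illustrates the modelling step it describes, assuming a single-hidden-layer tanh network trained by gradient descent to map deflection measurements to a strain value. The architecture, learning rate, and the synthetic deflection-strain relation are all illustrative assumptions.

```python
import math
import random

def train_strain_ann(data, hidden=4, lr=0.2, epochs=3000, seed=0):
    """Train a one-hidden-layer tanh network mapping FWD deflections to
    a strain value by plain per-sample gradient descent.
    data: list of (deflections, strain) pairs."""
    rnd = random.Random(seed)
    n_in = len(data[0][0])
    W1 = [[rnd.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    W2 = [rnd.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0

    def forward(x):
        h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(W1, b1)]
        return h, sum(w * hi for w, hi in zip(W2, h)) + b2

    for _ in range(epochs):
        for x, t in data:
            h, y = forward(x)
            e = y - t                              # gradient of squared error / 2
            for j in range(hidden):
                dh = e * W2[j] * (1 - h[j] ** 2)   # backprop through tanh
                W2[j] -= lr * e * h[j]
                for i in range(n_in):
                    W1[j][i] -= lr * dh * x[i]
                b1[j] -= lr * dh
            b2 -= lr * e
    return lambda x: forward(x)[1]
```

In the paper's setting, the inputs would be back-analysed FWD deflections and the targets the assessed strains; the trained predictor could then rank network sections by expected strain level for maintenance planning.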