Computational models, such as simulations, are central to a wide range of fields in science and industry. These models take input parameters and produce outputs. To fully exploit their utility, the relations between parameters and outputs must be understood: for example, which parameter setting produces the best result (optimization), or which ranges of parameter settings produce a wide variety of results (sensitivity). Such tasks are often difficult to achieve, for example because of the size of the parameter space, and are therefore supported with visual analytics. In this paper, we survey visual parameter space exploration (VPSE) systems involving spatial and temporal data. We focus on interactive visualizations and user interfaces. Through thematic analysis of the surveyed papers, we identify common workflow steps and approaches to support them. We also identify topics for future work that will help enable VPSE on a greater variety of computational models.
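The two exploration tasks named above, optimization and sensitivity, can be illustrated with a toy parameter sweep; the model, grid values and outputs below are entirely hypothetical stand-ins for an expensive simulation:

```python
import itertools

def model(a, b):
    # Toy stand-in for an expensive simulation: a quadratic response surface.
    return (a - 2.0) ** 2 + (b + 1.0) ** 2

# Exhaustive sweep over a small parameter grid.
grid_a = [0.0, 1.0, 2.0, 3.0]
grid_b = [-2.0, -1.0, 0.0]
results = {(a, b): model(a, b) for a, b in itertools.product(grid_a, grid_b)}

# Optimization task: which setting gives the best (lowest) output?
best = min(results, key=results.get)

# Sensitivity task: how much does the output vary along each parameter?
def spread(param_index):
    groups = {}
    for (a, b), y in results.items():
        groups.setdefault((a, b)[param_index], []).append(y)
    return max(max(v) - min(v) for v in groups.values())

print(best)                    # -> (2.0, -1.0)
print(spread(0), spread(1))    # parameter b dominates on this grid
```

Real VPSE systems replace the exhaustive grid with sampling designs and wrap exactly these two questions in interactive visualizations.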
Urban environments, university campuses, and public and private buildings often present architectural barriers that prevent people with disabilities and special needs from moving freely and independently. This paper presents a systematic mapping study of the scientific literature proposing devices and software applications aimed at fostering accessible wayfinding and navigation in indoor and outdoor environments. We selected 111 out of 806 papers published in the period 2009–2020 and analyzed them in two stages: first, we surveyed which solutions have been proposed to address the considered problem; then, we analyzed the selected papers along five dimensions: context of use, target users, hardware/software technologies, type of data sources, and user role in system design and evaluation. Our findings highlight trends and gaps related to these dimensions. The paper concludes with a reflection on challenges and open issues that must be taken into consideration in the design of future accessible places and of the related technologies and applications aimed at facilitating wayfinding and navigation.
In this paper, a hybrid control scheme is devised to regulate traffic conditions in freeway systems. The considered control actions are ramp metering, i.e., using traffic lights at the on-ramps to regulate incoming traffic, and variable speed limits displayed on on-road variable message signs. The proposed scheme is composed of two levels: the lower level is characterized by different Model Predictive Control (MPC) regulators, whereas at the higher level the different control actions are chosen according to a discrete-event dynamics. The overall scheme is then represented with the formalism of discrete-time discrete-event automata. More specifically, at the lower level, the prediction model used in the MPC schemes is a first-order dynamical model of traffic flow in which the steady-state speed-density characteristic is approximated as a piecewise-constant function. This approximation is motivated by the need for a finite-horizon problem simple enough to be solved online, which in this case becomes a mixed-integer linear programming problem. Depending on the system operating conditions, different regulators are determined by means of suitable MPC schemes. The higher level of the control scheme identifies the present operating conditions and switches to the suitable control action. The reported numerical results show the effectiveness of the proposed hybrid control framework.
The aim of this paper is to draw a few guidelines for evaluating the accessibility and usability of educational software programs from the point of view of low vision students. The presented findings are based on the results of a long-term research project carried out by the Italian National Research Council's Institute for Educational Technology (ITD-CNR) and the David Chiossone Institute for the Blind, both based in Genoa, Italy. The educational project, whose general aims and results are not discussed here, involves a significant number of visually impaired students from primary to upper secondary school; in this context, the researchers have the opportunity to assess whether, and to what extent, the selected educational software products meet the needs of low vision students. From this perspective, the paper takes into account the features that can be considered significant from an educational point of view: general readability, working field extension and position, menu location and coherence, character dimension, colour brightness, etc. Bearing in mind the ultimate goal of providing children with appropriate, effective educational tools, an educational software accessibility checklist is proposed that is meant to be used by teachers with little or no experience of low vision, rather than by professionals; it has already proved to be an effective tool for helping teachers select educational software products "usable" by low vision students.
The mutation score is an important measure for evaluating the quality of test cases. It is obtained by executing a large number of mutant programs generated by a set of operators. A common problem, however, is that some operators generate unnecessary and redundant mutants. For this reason, different strategies have been proposed to find a set of operators that generates a reduced number of mutants without decreasing the mutation score. In practice, however, operator selection may be subject to real constraints and depends on diverse factors besides the number of mutants and the score, such as the number of test data, execution time, number of revealed faults, number of equivalent mutants, etc. This is in fact a multi-objective problem, which does not have a single solution: different sets of operators exist that satisfy multiple objectives, and restrictions can be used to choose among the existing sets. To make this choice possible, we introduce in this paper a multi-objective strategy. We investigate three multi-objective algorithms and introduce a procedure to establish a set of operators that prioritizes mutation score. Better results are obtained in comparison with traditional strategies.
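The multi-objective view of operator selection amounts to computing a Pareto front over competing objectives such as mutant count and mutation score. The operator sets and their figures below are invented purely for illustration:

```python
# Each candidate operator set maps to (mutants generated, mutation score).
# All figures are hypothetical, not taken from any study.
candidates = {
    "all operators":      (1200, 0.98),
    "statement deletion": (150,  0.90),
    "sufficient set":     (400,  0.97),
    "random 10%":         (120,  0.80),
    "arithmetic only":    (500,  0.85),
}

def dominates(a, b):
    # a dominates b if it is no worse on both objectives
    # (fewer mutants, higher score) and differs from b.
    return a[0] <= b[0] and a[1] >= b[1] and a != b

# The Pareto front: candidates not dominated by any other candidate.
pareto = [name for name, obj in candidates.items()
          if not any(dominates(other, obj) for other in candidates.values())]
```

Here "arithmetic only" is dominated by "statement deletion" (fewer mutants and a higher score), so it drops out; every remaining set is a defensible choice depending on which objective the tester prioritizes, which is exactly why a procedure for picking among them is needed.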
International Journal on Document Analysis and Recognition (IJDAR) - Handwritten Text Recognition (HTR) in free-layout pages is a challenging image understanding task that can provide a relevant...
Autoimmune-rheumatological diseases are disorders distributed worldwide; they represent a complex array of illnesses characterized by autoreactivity (reactivity against self-antigens) of T and B lymphocytes and by the synthesis of autoantibodies crucial for diagnosis (biomarkers). The effects of chronic autoimmune inflammation on the infiltrated tissues and organs generally lead to profound tissue and organ damage with loss of function (e.g., lung, kidney, joints, exocrine glands). Although progress has been made in the knowledge of these disorders, much remains to be investigated about their pathogenesis and about the identification of new biomarkers useful in clinical practice. The rationale for using proteomics in autoimmune-rheumatological diseases has been the unmet need to collect, from easily obtainable biological fluids, a summary of the final biochemical events that represent the effects of the interplay between immune cells, mesenchymal cells and endothelial cells. Proteomic analysis of these fluids shows encouraging results; in this review, we address four major autoimmune-rheumatological diseases investigated through proteomic techniques and provide evidence-based data on the highlights obtained in systemic sclerosis, primary and secondary Sjögren's syndrome, systemic lupus erythematosus and rheumatoid arthritis.
Web development is moving towards model-driven processes whose goal is the development of Web applications at a higher level of abstraction, based on models and model transformations. This gives the Web project manager new opportunities to make early estimates of the size of, and the effort required to produce, Web applications based on their conceptual models. In the last few years, several studies on size and effort estimation have been performed; however, there are no studies regarding effort estimation in model-driven Web development. In this paper, we present the validation of a model-based size measure (OO-HFP) for Web effort estimation in the context of a model-driven Web development method. The validation is performed by comparing the prediction accuracy that OO-HFP provides with the accuracy provided by the standard function point analysis (FPA) method. The results of the study (using industrial data gathered from 31 Web projects) show that the effort estimates obtained for projects sized using OO-HFP are more accurate than those obtained using the standard FPA method. This suggests that, by following a model-driven development approach, the size measure obtained from the conceptual model of a Web application can be considered a suitable predictor of effort.
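Prediction accuracy in effort-estimation studies of this kind is commonly reported with measures such as MMRE and Pred(25); a minimal sketch follows, with effort figures that are hypothetical and not the paper's data:

```python
def mmre(actual, predicted):
    # Mean Magnitude of Relative Error: lower is better.
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

def pred25(actual, predicted):
    # Pred(25): fraction of estimates within 25% of the actual effort.
    within = sum(abs(a - p) / a <= 0.25 for a, p in zip(actual, predicted))
    return within / len(actual)

# Hypothetical effort figures (person-hours) for five projects.
actual  = [400, 250, 600, 320, 500]
model_a = [380, 270, 550, 300, 520]   # e.g., estimates from one size measure
model_b = [300, 400, 800, 200, 700]   # e.g., estimates from a baseline method

print(mmre(actual, model_a), mmre(actual, model_b))
print(pred25(actual, model_a), pred25(actual, model_b))
```

Comparing two sizing methods then reduces to comparing these summary statistics over the same project sample, as the study does for OO-HFP versus standard FPA.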
Context: Inheritance is the cornerstone of object-oriented development, supporting conceptual modeling, subtype polymorphism and software reuse. But inheritance can be used in subtle ways that make complex systems hard to understand and extend, due to the presence of implicit dependencies in the inheritance hierarchy. Objective: Although these dependencies often specify well-known schemas (i.e., recurrent design or coding patterns, such as hook and template methods), new unanticipated dependency schemas arise in practice and can consequently be hard to recognize and detect. Thus, a developer making changes or extensions to an object-oriented system needs to understand the implicit contracts defined by the dependencies between a class and its subclasses, or risk that seemingly innocuous changes break them. Method: To tackle this problem, we have developed an approach based on Formal Concept Analysis (FCA). Our FCA-based Reverse Engineering methodology (FoCARE) identifies undocumented hierarchical dependencies in a hierarchy by taking into account the existing structure and behavior of classes and subclasses. Results: We validate our approach by applying it to a large and non-trivial case study, yielding a catalog of hierarchy schemas, each one composed of a set of dependencies over methods and attributes in a class hierarchy. We show how the discovered dependency schemas can be used not only to identify good design practices, but also to expose bad smells in design, thereby helping developers in initial reengineering phases to develop a first mental model of a system. Although some of the identified schemas are already documented in the existing literature, with our FCA-based approach we are also able to identify previously unidentified schemas. Conclusions: FCA is effective because it is an ideal classification mining tool for identifying commonalities between software artifacts, and these commonalities usually reveal known and unknown characteristics of the artifacts. We also show that once a catalog of useful schemas stabilizes after several runs of FoCARE, the added cost of FCA is no longer needed.
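The core FCA machinery, deriving formal concepts from a binary objects-by-attributes context, can be sketched as follows; the toy class hierarchy and member names are invented, not taken from FoCARE's case study:

```python
from itertools import chain, combinations

# Toy context: classes (objects) x the methods they define or use (attributes).
context = {
    "Widget": {"draw", "resize"},
    "Button": {"draw", "resize", "onClick"},
    "Label":  {"draw", "resize", "setText"},
    "Icon":   {"draw"},
}

objects = set(context)
attributes = set().union(*context.values())

def extent(attrs):
    # All objects that share every attribute in attrs.
    return {o for o in objects if attrs <= context[o]}

def intent(objs):
    # All attributes shared by every object in objs.
    return set.intersection(*(context[o] for o in objs)) if objs else attributes

# A formal concept is a pair (objs, attrs) closed under the two maps.
# Brute force: close every attribute subset (fine for tiny contexts).
concepts = set()
for subset in chain.from_iterable(combinations(sorted(attributes), r)
                                  for r in range(len(attributes) + 1)):
    attrs = intent(extent(set(subset)))          # closure of the subset
    concepts.add((frozenset(extent(attrs)), frozenset(attrs)))
```

On this context the lattice contains five concepts; for instance ({Widget, Button, Label}, {draw, resize}) groups exactly the classes sharing the draw/resize protocol, the kind of commonality from which hierarchy schemas are read off. Production tools use dedicated algorithms (e.g., NextClosure) rather than subset enumeration.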