Similar Documents
20 similar documents found (search time: 31 ms)
1.
The question ‘What stimulates teachers to integrate Information and Communication Technologies (ICTs) in their pedagogical practices?’ was addressed in the context of teachers’ usage of digital learning materials (DLMs). We adopted Fishbein’s Integrative Model of Behavior Prediction (IMBP) to investigate the various relationships between distal and proximal variables and intention. Mediation analysis revealed that the proximal variables attitude, subjective norm, and self-efficacy towards DLMs were significant predictors of teachers’ intention to use DLMs. The contribution of subjective norm, however, was modest. Attitude, subjective norm and self-efficacy mediated the effects of the following three distal variables on intention: previous use of DLMs, perceived knowledge and skills to use DLMs, and colleagues’ usage of DLMs. Persuasive communication and skills-based training seem, therefore, appropriate interventions to promote a positive attitude towards DLMs and improve self-efficacy in using DLMs.
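To make the mediation logic concrete, the sketch below runs a toy single-mediator analysis on synthetic data. It is only an illustration of the statistical idea: the paper works with survey data and three mediators, so the variable names, effect sizes, and the simple regression-based decomposition here are our own assumptions.

```python
import numpy as np

# Toy mediation check: does a proximal variable M (e.g. attitude) mediate the
# effect of a distal variable X (e.g. perceived knowledge) on intention Y?
# Synthetic data with a built-in indirect path X -> M -> Y.
rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=n)                          # distal variable
M = 0.6 * X + rng.normal(scale=0.5, size=n)     # proximal mediator
Y = 0.7 * M + 0.05 * X + rng.normal(scale=0.5, size=n)

def slope(y, x):                                # OLS slope with intercept
    A = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(A, y, rcond=None)[0][1]

c = slope(Y, X)                                 # total effect of X on Y
a = slope(M, X)                                 # path X -> M
b, c_prime = np.linalg.lstsq(
    np.column_stack([np.ones(n), M, X]), Y, rcond=None)[0][1:]
print(f"total {c:.2f} = indirect {a * b:.2f} + direct {c_prime:.2f}")
```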

2.
Transparent models search for a balance between interpretability and accuracy. This paper is about the estimation of transparent models of chaotic systems from data, which are accurate and simple enough for their expression to be understandable by a human expert. The models we propose are discrete, built upon common blocks in control engineering (gain, delay, sum, etc.) and optimized both in their complexity and accuracy. The accuracy of a discrete model can be measured by means of the average error between its prediction for the next sampling period and the true output at that time, or ‘one-step error’. A perfect model has zero one-step error, but a small error is not always associated with an approximate model, especially in chaotic systems. In chaos, an arbitrarily small difference between two initial states will produce uncorrelated trajectories, thus a model with a low one-step error may be very different from the desired one. Even though a recursive evaluation (multi-step prediction) improves the fitting, in this work we will show that a learning algorithm may not converge to an appropriate model, unless we include some terms that depend on estimates of certain properties of the model (so-called ‘invariants’ of the chaotic series). We will show this graphically, by means of the reconstructed attractors of the original system and the model. Therefore, we also propose to follow a multi-objective approach to model chaotic processes and to apply a simulated annealing-based optimization to obtain transparent models.
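The pitfall described above is easy to reproduce. The sketch below (our own toy, not the paper's method) fits nothing at all: it simply compares the true logistic map at r = 4.0 with a 'model' whose parameter is off by 0.001. The one-step error stays tiny while the free-running trajectories decorrelate.

```python
# Chaotic logistic map: a model with a tiny parameter bias has a small
# one-step error, yet its multi-step (free-running) prediction diverges.
def logistic(x, r):
    return r * x * (1.0 - x)

r_true, r_model = 4.0, 3.999       # hypothetical slightly-biased model
x, xm = 0.4, 0.4
one_step_errs = []
for _ in range(50):
    one_step_errs.append(abs(logistic(x, r_model) - logistic(x, r_true)))
    x = logistic(x, r_true)        # true trajectory
    xm = logistic(xm, r_model)     # free-running model trajectory

print("mean one-step error:", sum(one_step_errs) / len(one_step_errs))
print("multi-step gap after 50 steps:", abs(x - xm))
```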

3.
We propose two approximate dynamic programming (ADP)-based strategies for the control of nonlinear processes using input-output data. In the first strategy, which we term ‘J-learning,’ one builds an empirical nonlinear model using closed-loop test data and performs dynamic programming with it to derive an improved control policy. In the second strategy, called ‘Q-learning,’ one tries to learn an improved control policy in a model-less manner. Compared to the conventional model predictive control approach, the new approach offers some practical advantages in using nonlinear empirical models for process control. Besides the potential reduction in the on-line computational burden, it offers a convenient way to control the degree of model extrapolation in the calculation of optimal control moves. One major difficulty associated with using an empirical model within the multi-step predictive control setting is that the model can be excessively extrapolated into regions of the state space where identification data were scarce or nonexistent, leading to performance far worse than predicted by the model. Within the proposed ADP-based strategies, this problem is handled by imposing a penalty term designed on the basis of the local data distribution. A CSTR example is provided to illustrate the proposed approaches.
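For readers unfamiliar with Q-learning, the sketch below shows the bare tabular version on a toy one-dimensional setpoint-tracking 'process'. It is only a schematic stand-in: the paper's method works from input-output data with function approximation and a local-data-density penalty, none of which appear here; the states, actions, and reward are our own assumptions.

```python
import random

# Tabular Q-learning: drive a discrete state towards SETPOINT.
N, SETPOINT = 10, 7
actions = [-1, 0, 1]
Q = {(s, a): 0.0 for s in range(N + 1) for a in actions}
alpha, gamma, eps = 0.1, 0.95, 0.2

def step(s, a):
    s2 = min(max(s + a, 0), N)
    return s2, -abs(s2 - SETPOINT)          # reward: stay near the setpoint

for episode in range(500):
    s = random.randint(0, N)
    for _ in range(30):
        a = (random.choice(actions) if random.random() < eps
             else max(actions, key=lambda a: Q[(s, a)]))
        s2, r = step(s, a)
        target = r + gamma * max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += alpha * (target - Q[(s, a)])   # TD update
        s = s2

print([max(actions, key=lambda a: Q[(s, a)]) for s in range(N + 1)])  # greedy policy
```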

4.
We formulate the time-constrained backpacker problem as an extension of the classical knapsack problem (KP), where a ‘backpacker’ travels from an origin to a destination on a directed acyclic graph, and collects items en route within the capacity of his knapsack and within a fixed time limit. We present a dynamic programming (DP) algorithm to solve this problem to optimality, and a ‘shift-and-merge’ DP algorithm to solve larger instances. The latter is an extension of the list-type DP, which has been successful for one-dimensional KPs, to the two-dimensional case. Computational experiments on a series of instances demonstrate the advantage of the shift-and-merge technique over commercial MIP solvers.
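A minimal version of the plain DP (not the shift-and-merge variant) is sketched below, under our own simplifying assumptions: at most one item per node, each with a weight, a profit, and a pick-up time, and travel times on the arcs.

```python
from collections import defaultdict

CAP, TMAX = 5, 8                        # knapsack capacity, time limit
# Toy DAG: node -> [(successor, travel_time)]; 0 is the origin, 3 the destination.
edges = {0: [(1, 1), (2, 2)], 1: [(3, 2)], 2: [(3, 1)], 3: []}
item = {1: (3, 10, 1), 2: (2, 7, 2)}    # node -> (weight, profit, pick_time)
topo = [0, 1, 2, 3]                     # topological order

# dp[node]: (used_capacity, used_time) -> best profit collected so far
dp = defaultdict(dict)
dp[0][(0, 0)] = 0
for u in topo:
    states = dict(dp[u])
    if u in item:                       # optionally pick up the item at u
        w, p, dt = item[u]
        for (c, t), val in list(states.items()):
            if c + w <= CAP and t + dt <= TMAX:
                key = (c + w, t + dt)
                states[key] = max(states.get(key, -1), val + p)
    for v, tt in edges[u]:              # travel along outgoing arcs
        for (c, t), val in states.items():
            if t + tt <= TMAX:
                key = (c, t + tt)
                dp[v][key] = max(dp[v].get(key, -1), val)

print(max(dp[3].values()))              # best profit at the destination -> 10
```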

5.
This paper is concerned with expounding a new representation paradigm for modeling expert systems based on computing Groebner Bases. Previous research on Groebner Bases expert systems has so far been connected to modeling expert systems based on propositional logics. Our approach instead is based on the well-known Artificial Intelligence ‘Concept-Attribute-Value’ paradigm for representing knowledge. More precisely, our research is based on translating an already existent expert system described in terms of the ‘Concept-Attribute-Value’ paradigm into a new algebraic model which represents knowledge by means of polynomials. In this way, issues of consistency and inference within this expert system will be, through this new model, transformed into algebraic problems involving the calculation of Groebner Bases. This new model offers some interesting advantages: on the one hand, knowledge representation may be performed in a more straightforward and intuitive way; on the other, calculating the Groebner Bases associated with our algebraic model is usually faster under this new ‘Concept-Attribute-Value’-based paradigm than it was in previous propositional logic-based expert systems.
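The paper's Concept-Attribute-Value encoding is not reproduced in the abstract, but the underlying mechanism can be illustrated with the standard propositional-to-polynomial translation over GF(2) that the earlier work relies on (NOT a → 1 + a, a AND b → a·b, a OR b → a + b + a·b): assert each rule true, compute a Groebner basis, and test entailment by reduction. The knowledge base below is a hypothetical toy.

```python
from sympy import symbols, groebner

a, b, c = symbols('a b c')

# Truth polynomials over GF(2) for a tiny hypothetical KB: fact a, a -> b, b -> c.
# An implication p -> q translates to the polynomial 1 + p + p*q.
rules = [a, a*b + a + 1, b*c + b + 1]
gens = [p + 1 for p in rules]                 # asserting each rule equal to 1
field = [v**2 + v for v in (a, b, c)]         # booleans: v^2 = v over GF(2)

G = groebner(gens + field, a, b, c, modulus=2, order='lex')
# The KB is inconsistent iff 1 lies in the ideal (the basis is {1});
# the KB entails g iff g + 1 reduces to 0 modulo the basis.
print(G.reduce(c + 1)[1] == 0)                # does the KB entail c?  -> True
```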

6.
Electrical signals were recorded from the spinal dorsal root of experimental rats after different manipulations of acupuncture (MA) were applied at the ‘Zusanli’ point. By combining the analysis of data generated from a neuronal network model with that evoked by acupuncture, it is found that the features of neuronal chaotic rate time series induced by periodic stimuli can be characterized by a complex network approach. The features of signals evoked by MA ‘nb’ and ‘nx’ (twisting) and by MA ‘tb’ and ‘tx’ (lifting and thrusting) are shown to differ according to the topologies of the mapped networks. This study provides a new perspective on the analysis of acupuncture and may be of potential help in clinical treatment.
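The abstract does not spell out which series-to-network mapping is used, so the sketch below illustrates one common choice, the natural visibility graph: two samples are linked when the straight line between them clears every intermediate sample. The topology of the resulting network (degree distribution, clustering, etc.) then encodes properties of the series.

```python
# Natural visibility graph of a time series (illustrative mapping only).
def visibility_graph(y):
    edges = set()
    for i in range(len(y)):
        for j in range(i + 1, len(y)):
            if all(y[k] < y[i] + (y[j] - y[i]) * (k - i) / (j - i)
                   for k in range(i + 1, j)):
                edges.add((i, j))
    return edges

# A chaotic logistic-map 'rate series' as stand-in input.
series = [0.4]
for _ in range(60):
    series.append(4.0 * series[-1] * (1.0 - series[-1]))

E = visibility_graph(series)
deg = [sum(1 for e in E if i in e) for i in range(len(series))]
print("edges:", len(E), "max degree:", max(deg))
```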

7.
A part is primarily characterized by its ‘function(s)’. The function of the part is achieved through its ‘interactions’ with other parts in an assembly under a set of operating conditions. These ‘interactions’ constitute the ‘behavior’ of the part. The ‘part behavior’ is achieved through a set of spatial and design functional relationships between the interacting surfaces of the parts. The set of spatial and design functional relationships for a part constitutes the Part Function Model (PFM) of that part. In this paper, the nature and role of part ‘functions’ and ‘behaviors’ are studied in the context of a product design system. The paper addresses the following issues: (i) the relationship between the function, behavior, and geometry of a part; (ii) the importance of ‘part behavior’ over ‘part function’, and the development of a ‘part behavior’ model; (iii) a methodology for transforming part behaviors into the PFM model of the part; (iv) a product model framework for storing the PFM model information within an Object-Oriented Programming (OOP)-based CAD system (Concentra's Concept Modeller); (v) the importance of the PFM model within the product development process; and (vi) the application of the PFM model for generating various product specifications of the part. A prototype implementation of a Functional Design System for transforming part behaviors into different types of part specifications is also presented.
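As a rough illustration of what a PFM-style data structure could look like, the sketch below models parts, interacting surfaces, and the spatial/functional relationships between them. The class and attribute names are our own; the paper's implementation sits inside Concentra's Concept Modeller and is not shown in the abstract.

```python
# Schematic PFM-like object model: parts, surfaces, and relationships.
class Surface:
    def __init__(self, name):
        self.name = name

class Relationship:
    def __init__(self, kind, a, b, **params):
        self.kind, self.a, self.b, self.params = kind, a, b, params

class Part:
    def __init__(self, name):
        self.name, self.surfaces, self.pfm = name, {}, []
    def add_surface(self, name):
        self.surfaces[name] = Surface(name)
        return self.surfaces[name]
    def relate(self, kind, a, b, **params):     # record a PFM relationship
        self.pfm.append(Relationship(kind, a, b, **params))

shaft, housing = Part('shaft'), Part('housing')
journal = shaft.add_surface('journal')
seat = housing.add_surface('bearing_seat')
shaft.relate('cylindrical_fit', journal, seat, clearance_mm=0.02)
print([(r.kind, r.a.name, r.b.name, r.params) for r in shaft.pfm])
```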

8.
Computation has quickly become of paramount importance in the design of engineered systems, both in supporting their features and in supporting their design. Tool support for high-level modeling formalisms has endowed design specifications with executable semantics. Such specifications typically include not only discrete-time and discrete-event behavior, but also continuous-time behavior that is stiff from a numerical-integration perspective. The resulting stiff hybrid dynamic systems necessitate variable-step solvers to simulate the continuous-time behavior, as well as solver algorithms for the simulation of discrete-time and discrete-event behavior. The combined solvers rely on complex computer code, which makes it difficult to solve design tasks directly with the executable specifications. To further leverage the executable specifications in design, this work aims to formalize the semantics of stiff hybrid dynamic systems at a declarative level by removing implementation detail and retaining only ‘what’ the computer code does, not ‘how’ it does it. A stream-based approach is adopted to formalize variable-step solver semantics and to establish a computational model of time that supports discrete-time and discrete-event behavior. The corresponding declarative formalization is amenable to computational methods, and it is shown how model checking can automatically generate, or synthesize, a feedforward control strategy for a stiff hybrid dynamic system. Specifically, a stamper in a surface-mount device is controlled to maintain a low acceleration of the stamped component for a prescribed minimum duration of time.
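As background for the solver semantics being formalized, the sketch below shows a minimal variable-step integrator: explicit Euler with step-doubling error control, accepting or rejecting steps against a tolerance. The test ODE and tolerances are our own choices; real stiff solvers use implicit methods, but the accept/shrink/grow step logic is the part a declarative formalization must capture.

```python
# Minimal variable-step integrator: step-doubling Euler with error control.
def adaptive_euler(f, t, y, t_end, h=0.1, tol=1e-4):
    steps = 0
    while t < t_end:
        h = min(h, t_end - t)
        full = y + h * f(t, y)                        # one full step
        half = y + 0.5 * h * f(t, y)                  # two half steps
        two_half = half + 0.5 * h * f(t + 0.5 * h, half)
        if abs(two_half - full) < tol:                # accept, grow the step
            t, y = t + h, two_half
            h *= 1.5
            steps += 1
        else:                                         # reject, shrink the step
            h *= 0.5
    return y, steps

y_end, steps = adaptive_euler(lambda t, y: -50.0 * y, 0.0, 1.0, 1.0)
print(y_end, steps)   # fast decay forces small steps early, larger ones later
```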

9.
Orderings and inference relations can be successfully used to model the behavior of a rational agent. This behavior is indeed represented either by a set of ordered pairs that reflect the agent's preferences, or by a rational inference relation that describes the agent's internal logic. In the finite case in which we work, both structures admit a simple representation by means of logical chains. The problem of revising such inference processes arises when it becomes necessary to modify the original model in order to take into account new facts about the agent's behavior. How is it then possible to perform the desired modification? We study here the possibilities offered by the technique of ‘chain revision’, which appears to be the easiest way to treat this kind of problem: the revision is performed through a simple modification of the logical chain attached to the agent's behavior, and the revision problem boils down to adding, retracting or modifying some of the links of the original chain. This perspective permits an effective treatment of the problems of both simple and multiple revision. The technique developed can also be used in some limiting cases, when the agent's inference process is only partially known, encoded by an incomplete set of preferences or a conditional knowledge base.

10.
陈亚瑞 (Chen Yarui), 《计算机科学》 (Computer Science), 2013, 40(2): 253-256, 288
The main task of probabilistic inference in graphical models is to compute the partition function, the marginal distributions of variables, and conditional distributions by summing the joint probability distribution over variables. The computational complexity of probabilistic inference in graphical models, and of approximate probabilistic inference, is an important theoretical question, and it is also the theoretical basis for designing inference and approximate-inference algorithms. This paper studies the computational complexity of probabilistic inference in Ising graphical models, including the intractability and inapproximability of such inference. Specifically, by constructing a polynomial-time counting reduction from the #2-SAT problem to the probabilistic inference problem on Ising graphical models, it is proved that computing the partition function, variable marginal distributions, and conditional distributions on general Ising models is #P-hard, and that approximate probabilistic inference on Ising models is NP-hard; that is, probabilistic inference on general Ising models is both intractable and inapproximable.
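The quantities the proof concerns are easy to state in code: the partition function sums a Boltzmann weight over all 2^n spin configurations, and a marginal is a ratio of two such sums. The brute-force sketch below (toy couplings and fields of our own choosing) makes the exponential cost explicit, which is exactly what the #P-hardness result says cannot be avoided in general.

```python
import math
from itertools import product

n = 4
J = {(0, 1): 1.0, (1, 2): -0.5, (2, 3): 1.0, (0, 3): 0.3}   # couplings (toy)
h = [0.1, -0.2, 0.0, 0.4]                                   # external fields (toy)

def energy(s):
    e = -sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    return e - sum(h[i] * s[i] for i in range(n))

# Partition function: sum over all 2^n configurations -- exponential in n.
Z = sum(math.exp(-energy(s)) for s in product([-1, 1], repeat=n))
# Marginal P(s_0 = +1): the same sum restricted to configurations with s_0 = +1.
Z_plus = sum(math.exp(-energy(s))
             for s in product([-1, 1], repeat=n) if s[0] == 1)
print("Z =", Z, " P(s0 = +1) =", Z_plus / Z)
```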

11.
We consider bicriteria optimization problems and investigate the relationship between two standard approaches to solving them: (i) computing the Pareto curve and (ii) the so-called decision-maker’s approach, in which both criteria are combined into a single (usually nonlinear) objective function. Previous work by Papadimitriou and Yannakakis showed how to efficiently approximate the Pareto curve for problems like Shortest Path, Spanning Tree, and Perfect Matching. We wish to determine for which classes of combined objective functions the approximate Pareto curve also yields an approximate solution to the decision-maker’s problem. We show that an FPTAS for the Pareto curve also gives an FPTAS for the decision-maker’s problem if the growth of the combined objective function is bounded like a quasi-polynomial function. If the objective function, however, shows exponential growth, then the decision-maker’s problem is NP-hard to approximate within any polynomial factor. In order to bypass these limitations of approximate decision making, we turn our attention to Pareto curves in the probabilistic framework of smoothed analysis. We show that in a smoothed model, we can efficiently generate the (complete and exact) Pareto curve with a small failure probability if there exists an algorithm for generating the Pareto curve whose worst-case running time is pseudopolynomial. This way, we can solve the decision-maker’s problem w.r.t. any non-decreasing objective function for randomly perturbed instances of, e.g., Shortest Path, Spanning Tree, and Perfect Matching.
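The decision-maker's approach on top of a Pareto curve can be shown in a few lines: filter a set of bicriteria solutions down to the non-dominated front, then minimize the combined objective over that front. The toy data and the product objective below are our own; the paper's contribution is about when an approximate front suffices, which this exact sketch does not address.

```python
# Pareto front of bicriteria (cost1, cost2) solutions, then combined objective.
solutions = [(3, 9), (4, 7), (5, 8), (6, 4), (8, 3), (9, 5)]

def pareto_front(points):
    front, best2 = [], float('inf')
    for c1, c2 in sorted(points):
        if c2 < best2:               # not dominated by any earlier point
            front.append((c1, c2))
            best2 = c2
    return front

front = pareto_front(solutions)
combined = lambda c1, c2: c1 * c2    # a non-decreasing combined objective
print(front, min(front, key=lambda p: combined(*p)))
```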

12.
The purpose of this communication is to show that the program ‘ARGON.f90’ can be simply extended to model ionization from the excited states of atoms where the active electron has a principal quantum number less than or equal to 3. This fact is illustrated by considering a relatively simple collision involving a proton and a neutral hydrogen atom with principal quantum number n=2.

13.
In reverse engineering CAD modeling, a facet model is usually constructed from large point cloud data obtained from a surface scanning process. The number of points in the point cloud may range from hundreds of thousands to several million, depending on the user-defined precision. As a result, the facet model becomes very ‘large’ in terms of the number of facets or vertices. The computational effort required to manipulate such a large set of data becomes enormous. This effort is significant even for some simple operations, e.g. rotation, scaling and translation. In this paper, an algorithm is proposed to determine the extreme points of a large 3D point set along multiple directions. This algorithm uses a cylindrical grid approximation technique to give both an approximate solution and an exact solution. It can be used to accelerate the computation of some geometric problems on a large model, e.g., the minimum bounding box of a facet model [Comput Aid Des 20 (1988) 506; Comput Struct 79 (2001) 1433; Int J Comput Inform Sci 14 (1985) 183] and the ‘fitness’ problem of a model into a bounded volume [Comput Aid Des 20 (1988) 506].
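The exact query the grid structure accelerates is simple to state: the extreme point of a set along direction d maximizes the dot product with d. The brute-force baseline below (toy point cloud, arbitrary directions) is what the cylindrical-grid approximation is designed to beat on multi-million-point models.

```python
import math

def extreme_point(points, d):
    # Brute force: O(n) per direction; the paper's grid trades this for speed.
    return max(points, key=lambda p: p[0]*d[0] + p[1]*d[1] + p[2]*d[2])

# Toy 'scanned' point cloud: a helix.
points = [(math.cos(0.1 * i), math.sin(0.1 * i), 0.01 * i) for i in range(200)]
for d in [(1, 0, 0), (0, 1, 0), (0.7, 0.7, 0.1)]:
    print(d, extreme_point(points, d))
```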

14.
A micro genetic algorithm with variable population size (VPμGA) is proposed for the spatial analysis of the functional regions of suburban-rural areas, in which the fitness function is derived from a functional-region affecting index (Θ) with a ‘law-of-gravity’ interpretation. The VPμGA evaluates the dynamical behaviours represented by Θ over ‘short’ to ‘long’ term periods; the study also revisits the urbanisation of Beijing and examines the sensitivity of Θ to the functional distance of 13 suburban-rural districts. Numerical results based on the given statistics have been obtained using a specially devised simulation toolkit. It is shown that the VPμGA can serve as a valuable tool for providing a functional-distance-based estimation of the inter-relationships between the number of enterprises, regional profit, local population, regional employment, etc., and for using this understanding to identify suburban-rural districts that are more resilient and adaptable.
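The Θ index and the underlying statistics are not given in the abstract, so the sketch below only illustrates the micro-GA mechanics with a stand-in fitness: a tiny population, elite-preserving restarts on convergence, and a population size that varies between restarts.

```python
import random

def fitness(x):                        # hypothetical stand-in for the Theta index
    return -(x - 0.37) ** 2

def micro_ga(generations=60, pmin=4, pmax=12):
    pop_size = pmax
    pop = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[0]
        if max(pop) - min(pop) < 1e-3:              # converged: restart
            pop_size = max(pmin, pop_size - 2)      # variable population size
            pop = [elite] + [random.random() for _ in range(pop_size - 1)]
            continue
        nxt = [elite]                               # elitism
        while len(nxt) < pop_size:                  # blend crossover + mutation
            a, b = random.sample(pop[:max(2, pop_size // 2)], 2)
            child = 0.5 * (a + b) + random.gauss(0, 0.02)
            nxt.append(min(max(child, 0.0), 1.0))
        pop = nxt
    return max(pop, key=fitness)

print(micro_ga())                      # should approach 0.37
```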

15.
Both cellular automata (CA) and lattice-gas automata (LG) provide finite algorithmic presentations for certain classes of infinite dynamical systems studied by symbolic dynamics; it is customary to use the terms ‘cellular automaton’ and ‘lattice gas’ both for a dynamical system itself and for its presentation. The two kinds of presentation share many traits but also display profound differences on issues ranging from decidability to modeling convenience and physical implementability.
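The 'finite algorithmic presentation' point is easy to see for CA: an elementary cellular automaton is fully described by an 8-entry local rule, yet it presents an infinite symbolic dynamical system. The sketch below runs rule 110 on a small periodic lattice.

```python
# Elementary cellular automaton: the rule number encodes the whole dynamics.
def step(cells, rule=110):
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                      + cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 31
cells[15] = 1                          # single seed cell
for _ in range(15):
    print(''.join('#' if c else '.' for c in cells))
    cells = step(cells)
```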

16.
This paper investigates a ‘grand’ educational technology innovation through theoretical lenses inspired by the work of [10] and [11]. Through taking this approach it is possible to show how ideas about the form of the innovation, and perceptions of its ultimate ‘success’ or ‘failure’, varied between stakeholder groups. The project was pedagogically effective and popular with students, but was difficult to ‘sell’ to academics, had no senior management sponsor, and was unable to bring about cultural change in the institution despite the capital funding designed to do just that. Although many pedagogical lessons were learned, and have since been applied in other learning spaces around the host campus and elsewhere, these successful disseminations of changed practice were not in accordance with the objectives of key stakeholders. Therefore, they went unconsidered when decisions were taken about the project’s sustainability: hence the notion of ‘invisible success’. The project’s ‘failure’ is only apparent when viewed from certain perspectives; nevertheless, these perspectives are those of the powerful (or, in the terms of [8] and [9], those possessing capital and academic power) and are the consequence of deeply-rooted structural features in HE, which include funding models, risk-averseness, and fragmented responsibilities.

17.
Sales prediction is an essential part of stock planning for the wholesale and retail business. It is a complex task because of the large number of factors affecting demand. Designing an intelligent predictor that would beat a simple moving-average baseline across a number of products appears to be a non-trivial task. We present an intelligent two-level sales prediction approach that switches predictors depending on the properties of the historical sales. First, we learn how to categorize sales time series into ‘predictable’ and ‘random’ based on structural, shape and relational features related to the products and the environment, using a meta-learning approach. We introduce a set of novel meta features to capture the behavior, shape and relational properties of the sales time series. Next, for the products identified as ‘predictable’ we apply an intelligent base predictor, while for ‘random’ products we use a moving average. Using real data from a food wholesale company we show how the prediction accuracy can be improved using this strategy, compared to the baseline predictor as well as to an ensemble of predictors. In our study we also show that by applying an intelligent predictor to the most ‘predictable’ products we can control the risk of performing worse than the baseline.
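The switching scheme is easy to prototype. In the sketch below a single stand-in meta feature (lag-1 autocorrelation) plays the role of the paper's structural/shape/relational features, and a naive trend extrapolation stands in for the intelligent base predictor; the threshold is our own assumption, whereas the paper learns the 'predictable'/'random' split with meta learning.

```python
def moving_average(series, w=4):
    return sum(series[-w:]) / min(w, len(series))

def lag1_autocorr(series):                  # stand-in meta feature
    n, m = len(series), sum(series) / len(series)
    num = sum((series[i] - m) * (series[i + 1] - m) for i in range(n - 1))
    den = sum((x - m) ** 2 for x in series) or 1.0
    return num / den

def base_predictor(series):                 # stand-in 'intelligent' predictor
    return series[-1] + (series[-1] - series[-2])

def two_level_predict(series, threshold=0.3):
    if lag1_autocorr(series) > threshold:   # classified as 'predictable'
        return base_predictor(series)
    return moving_average(series)           # 'random': fall back to baseline

print(two_level_predict([10, 12, 14, 16, 18, 20]))  # trending -> 22
print(two_level_predict([15, 9, 14, 8, 16, 10]))    # noisy    -> 12.0
```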

18.
In this study, teachers' acceptance and use of an educational portal is assessed based on data from two sources: usage data (number of logins, downloads, uploads, reactions and pages viewed) and an online acceptance questionnaire. The usage data are extracted from the portal's database on two occasions: at survey completion (T1) and twenty-two months later (T2). The framework for this study is C-TAM-TPB (Combined Technology Acceptance Model and Theory of Planned Behavior). In total, 919 usable responses from teachers are obtained. Based on the observed use data at T1, four types of portal users are distinguished: ‘new’ (N = 37), ‘light’ (N = 641), ‘medium’ (N = 201), and ‘heavy’ (N = 40). Path analyses show that all predictor variables in C-TAM-TPB influence teachers' portal acceptance, but their significance level varies depending on the user type. The strongest predictors of behavioral intention to use the portal are attitude (‘new’) and perceived usefulness (‘light’, ‘medium’ and ‘heavy’), with explained variance ranging from .39 (‘medium’) to .71 (‘heavy’). The observed use data show that the portal is primarily used to search for and download material, rather than to share material or information. The use data at T2 show that teachers become more efficient in their search behavior and that the majority of teachers use the portal more frequently. Guidelines are proposed for policymakers and school boards aiming to introduce a similar technology to teachers.

19.
In advancing discrete-based computational cancer models towards clinical applications, one faces the dilemma of how to deal with an ever-growing amount of biomedical data that ought to be incorporated eventually in one form or another. Model scalability becomes of paramount interest. In an effort to start addressing this critical issue, we present here a novel multi-scale and multi-resolution agent-based in silico glioma model. While ‘multi-scale’ refers to employing an epidermal growth factor receptor (EGFR)-driven molecular network to process cellular phenotypic decisions within the micro-macroscopic environment, ‘multi-resolution’ is achieved through algorithms that classify cells into either active or inactive spatial clusters, which determine the resolution at which they are simulated. The aim is to assign computational resources where and when they matter most for maintaining or improving the predictive power of the algorithm: onto specific tumor areas and at particular times. Using a previously described 2D brain tumor model, we have developed four different computational methods for achieving the multi-resolution scheme, three of which are designed to train dynamically on the high-resolution simulation that serves as the control. To quantify the algorithms’ performance, we rank them by weighing the distinct computational time savings of the simulation runs against the methods’ ability to accurately reproduce the high-resolution results of the control. Finally, to demonstrate the flexibility of the underlying concept, we show the added value of combining the two highest-ranked methods. The main finding of this work is that by pursuing a multi-resolution approach, one can reduce the computation time of a discrete-based model substantially while still maintaining comparably high predictive power. This hints at even greater computational savings in the more realistic 3D setting over time, and thus appears to outline a possible path towards scalability for the all-important clinical translation.
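Stripped of all biology, the multi-resolution bookkeeping looks like the sketch below: cells flagged as active are updated every step, inactive ones only every few steps, so computation concentrates where the dynamics are busy. The activity criterion and update rule are placeholders, not the glioma model's.

```python
COARSE_EVERY = 5                       # inactive cells refresh this often

def update(cell):
    cell['value'] += cell['rate']      # placeholder for the real cell dynamics

cells = [{'value': 0.0, 'rate': r, 'active': r > 0.5}
         for r in (0.9, 0.1, 0.7, 0.05)]

work = 0
for t in range(20):
    for c in cells:
        if c['active'] or t % COARSE_EVERY == 0:
            update(c)
            work += 1
print('updates:', work, 'vs full resolution:', 20 * len(cells))  # 48 vs 80
```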

20.
The present study investigates the effects of the relative positions of stereoscopic objects (PSO) (‘Far’ (from viewers) vs. ‘Near’ (to viewers) vs. ‘Both’) and of seat location on viewers’ psychological responses. People who watched a movie under the ‘Both’ condition reported more arousal and satisfaction than people who watched a movie under either the ‘Far’ or the ‘Near’ condition. More importantly, interaction effects were found, such as more dizziness (a) under the ‘Near’ condition when sitting on the left or right side of the cinema and (b) under the ‘Far’ condition when sitting in the middle of the cinema. Additionally, people who wore glasses felt more eye fatigue than those who did not, and people felt less presence, sensation of depth, and arousal if they had prior experience viewing stereoscopic movies. The results indicate that viewing experience with different PSOs and/or seat locations can influence psychological responses.
