Similar Literature
20 similar documents found
1.
The purpose of this study was to integrate asynchronous learning technology with teaching strategies on observation and writing into a teacher education method course. The research questions were to explore the effects of the innovative teaching method and to compare it with the traditional teaching method. There were 134 preservice teachers involved in this study. This study used a mixed-method design, incorporating both quantitative and qualitative techniques. The main data included questionnaires, observation reports, and on-line information. According to the findings, there were significant differences in “teaching and learning interaction” and “application of technology and theories” (F = 9.728, P < 0.01, and F = 16.88, P < 0.001, respectively), but there were no significant differences in the other aspects. The results also showed that the experimental teaching method combined the effects of both traditional classroom and online teaching, and reinforced the integration of teaching theories and practices. The preservice teachers reflected that they had learned how to integrate technologies with teaching through the learning environment of the asynchronous learning network and teaching observation. The interactive teaching and learning of this study could supplement any deficiencies in traditional teaching. Therefore, the experimental teaching method is not only a way to construct knowledge, theories and experiences of teaching, but also a good strategy to promote the utilization of instructional technology within teaching for preservice teachers. The limitations of the asynchronous learning technology and the difficulties associated with the preservice teachers’ learning processes were also discussed.

2.
Most methods for mining association rules from tabular data mine simple rules which use only the equality operator “=” in their items. For quantitative attributes, approaches tend to discretize domain values by partitioning them into intervals. Limiting the operator to “=” alone means that many interesting frequent patterns may go unidentified. It is obvious that where there is an order between objects, operators such as greater than or less than a given value are as important as the equality operator. This motivates us to extend association rules from the simple equality operator to a more general set of operators. We address the problem of mining general association rules in tabular data where rules can have all operators {≤, >, ≠, =} in their antecedent part. The proposed algorithm, mining general rules (MGR), is applicable to datasets with discrete-ordered attributes and to discretized quantitative attributes. The proposed algorithm stores candidate general itemsets in a tree structure in such a way that supports of complex itemsets can be computed recursively from supports of simpler itemsets. The algorithm is shown to have benefits in terms of time complexity and memory management, and it has good potential for parallelization.
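As a minimal sketch of what such a general itemset looks like (hypothetical data and names; a plain scan over the records rather than the paper's MGR tree structure), support for items of the form (attribute, operator, value) can be counted as follows:

```python
# Minimal sketch: a "general item" pairs an attribute with one of the four
# operators. Support is counted with a plain scan here; the MGR algorithm of
# the paper instead stores candidates in a tree and derives supports of
# complex itemsets recursively from simpler ones.
from operator import le, gt, ne, eq

OPS = {"<=": le, ">": gt, "!=": ne, "=": eq}

def satisfies(record, item):
    attr, op, value = item
    return OPS[op](record[attr], value)

def support(records, itemset):
    """Fraction of records satisfying every (attribute, operator, value) item."""
    hits = sum(all(satisfies(r, it) for it in itemset) for r in records)
    return hits / len(records)

# Toy dataset with discretized quantitative attributes (hypothetical).
records = [{"age": 3, "grade": 2}, {"age": 1, "grade": 4},
           {"age": 2, "grade": 4}, {"age": 3, "grade": 1}]
print(support(records, [("age", "<=", 2), ("grade", ">", 3)]))  # 0.5
```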

3.
Previous studies have sought insights into how websites can effectively draw sustained attention from internet users. Do different types of information presentation on webpages have different influences on users’ perceptions of the information? More precisely, can combining an ever greater number of advertising elements on individual websites increase consumers’ purchase intentions? The aim of this study is to explore how web advertising’s verbal and visual elements stimulate surfers’ cognitive processes, and to provide valuable information for successfully matching advertising elements to one another. We examine optimal website design according to personality-trait theory and resource-matching theory. Study 1 addresses the effects that combinations of various types of online advertising can have on web design factors; to this end, we use a 2 (visual complexity: 3D advertising with an avatar vs. 2D advertising) × 2 (verbal complexity: with or without self-referencing, an advertising practice that expresses product claims in words) factorial design. Study 2 treats personality traits (i.e., need for cognition and sensation seeking) as moderating variables to build the optimal portfolio regarding the “online-advertising effects” hypothesis. Our results suggest that subjects prefer medium-complexity advertising comprising “3D advertising elements with an avatar” or “2D advertising elements with self-referencing”: high-sensation seekers and low-need-for-cognition viewers prefer the former, whereas low-sensation seekers and high-need-for-cognition viewers prefer the latter.

4.
In Kingston and Svalbe [1], a generalized finite Radon transform (FRT) applying to square arrays of arbitrary size N × N was defined, and the Fourier slice theorem was established for the FRT. Kingston and Svalbe asserted that “the original definition by Matúš and Flusser was restricted to apply only to square arrays of prime size,” that “Hsung, Lun and Siu developed an FRT that also applied to dyadic square arrays,” and that “Kingston further extended this to define an FRT that applies to prime-adic arrays.” It should be said that the presented generalized FRT, together with the above FRT definitions, repeated the known concept of tensor representation, or the tensor transform, of images of size N × N, which was published earlier by Artyom Grigoryan in 1984-1991 in the USSR. The above-mentioned “Fourier slice theorem” repeated the known tensor transform-based algorithm for the 2-D DFT [5-11], which was developed for any order N1 × N2 of the transformation, including the cases of N × N with N = 2^r (r > 1) and N = L^r (r ≥ 1), where L is an odd prime. The problem of “over-representation” of the two-dimensional discrete Fourier transform in tensor representation was also solved by means of the paired representation in Grigoryan [6-9].

5.
Much work has been devoted, during the past 20 years, to using complexity to protect elections from manipulation and control. Many “complexity shield” results have been obtained—results showing that the attacker’s task can be made NP-hard. Recently there has been much focus on whether such worst-case hardness protections can be bypassed by frequently correct heuristics or by approximations. This paper takes a very different approach: We argue that when electorates follow the canonical political science model of societal preferences, the complexity shield never existed in the first place. In particular, we show that for electorates having single-peaked preferences, many existing NP-hardness results on manipulation and control evaporate.

6.
This study empirically investigated the structure and function of maladaptive cognitions related to Pathological Internet Use (PIU) among Chinese adolescents. To explore the structure of maladaptive cognitions, this study validated the Chinese Adolescents’ Maladaptive Cognitions Scale (CAMCS) with two samples of adolescents (n1 = 293 and n2 = 609). The results of exploratory factor analysis and confirmatory factor analysis revealed that the CAMCS included three distinct factors, namely “social comfort,” “distraction,” and “self-realization.” To examine the function of maladaptive cognitions, this study tested an updated cognitive-behavioral model in a third sample of 1059 adolescents. The results of structural equation modeling verified both the direct effect of maladaptive cognitions on PIU and their mediating role in the relationships between distal factors (social anxiety and stressful life events) and PIU among Chinese adolescents. Theoretical and practical implications of these findings were discussed.

7.
In this paper, we describe a granular algorithm for translating information between two granular worlds, represented as fuzzy rulebases. These granular worlds are defined on the same universe of discourse, but employ different granulations of this universe. In order to translate information from one granular world to the other, we must regranulate the information so that it matches the information granularity of the target world. This is accomplished through the use of a first-order interpolation algorithm, implemented using linguistic arithmetic, a set of elementary granular computing operations. We first demonstrate this algorithm by studying the common “fuzzy-PD” rulebase at several different granularities, and conclude that the “3 × 3” granulation may be too coarse for this objective. We then examine the question of what the “natural” granularity of a system might be; this is studied through a 10-fold cross-validation experiment involving three different granulations of the same underlying mapping. For the problem under consideration, we find that a 7 × 7 granulation appears to be the minimum necessary precision.

8.
The Coarse Woody Debris (CWD) quantity, defined as biomass per unit area (t/ha), and quality, defined as the proportion of standing dead logs in the total CWD quantity, contribute greatly to many ecological processes such as forest nutrient cycling, tree regeneration, wildlife habitat, fire dynamics, and carbon dynamics. However, a cost-effective and time-saving method to determine CWD is not available, and very limited literature applies remote sensing techniques to CWD inventory. In this paper, we fused wall-to-wall multi-frequency, multi-polarization Airborne Synthetic Aperture Radar (AirSAR) data and optical Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data to estimate the quantity and quality of CWD in the Yellowstone post-fire forest ecosystem, where the severe 1988 fire resulted in high spatial heterogeneity of dead logs. To relate backscatter values to CWD metrics, we first reduced the terrain effect to remove the interference of topography on AirSAR backscatter. Secondly, we removed the influence of regenerating saplings by quadratic polynomial fitting between the AVIRIS Enhanced Vegetation Index (EVI) and the backscatter of the different channels. The quantity of CWD was derived from Phh and Phv, and the quality of CWD was derived from Phh aided by the ratio of Lhv to Phh. Two maps of Yellowstone post-fire CWD quantity and quality were produced, and the calculated values were validated by extensive field surveys. Regarding CWD quantity, the correlation coefficient between measured and predicted CWD is only 0.54, with a mean absolute error of up to 29.1 t/ha. However, if the CWD quantity is classified into the three categories “≤ 60”, “60-120”, and “≥ 120” t/ha, the overall accuracy is 65.6%; if classified into the two categories “≤ 90” and “≥ 90”, the overall accuracy is 73.1%; if classified into the two categories “≤ 60” and “≥ 60”, the overall accuracy is 84.9%. This indicates that our attempt to map CWD quantity spatially and continuously achieved only partial success, but the general, discrete categories are reasonable. Regarding CWD quality, the overall accuracy for 5 types (Type 1—standing CWD ratio ≥ 40%; Type 2—15% ≤ standing CWD ratio < 40%; Type 3—7% ≤ standing CWD ratio < 15%; Type 4—3% ≤ standing CWD ratio < 7%; Type 5—standing CWD ratio < 3%) is only 40.32%. However, when types 1, 2, and 3 are combined into one category and types 4 and 5 into another, the overall accuracy is 67.74%. This again indicates only partial success in mapping CWD quality into detailed categories, but the result is acceptable if only very coarse CWD quality classes are considered. Bias can be attributed to the complex influence of many factors, such as field survey error, sapling compensation, terrain effect reduction, surface properties, and the understanding of backscatter mechanisms.
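The sapling-removal step amounts to a quadratic regression of channel backscatter on EVI, with the residual carried forward as the CWD signal. A minimal sketch under assumed array names (evi and backscatter_db are synthetic stand-ins, not the study's data):

```python
import numpy as np

# Synthetic stand-ins for scene samples: AVIRIS EVI and one AirSAR channel's
# backscatter in dB. The quadratic fit models the regenerating-sapling
# contribution; the residual is the portion attributed to CWD.
rng = np.random.default_rng(0)
evi = rng.uniform(0.05, 0.6, 500)
backscatter_db = -14 + 6 * evi - 3 * evi**2 + rng.normal(0, 0.4, 500)

coeffs = np.polyfit(evi, backscatter_db, deg=2)   # quadratic polynomial fit
sapling_component = np.polyval(coeffs, evi)       # sapling-driven backscatter
cwd_signal = backscatter_db - sapling_component   # residual used for CWD metrics
```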

9.
We present an automated method for classifying “liking” and “desire to view again” of online video ads based on 3268 facial responses to media collected over the Internet. The results demonstrate the possibility of an ecologically valid, unobtrusive evaluation of commercial “liking” and “desire to view again”, strong predictors of marketing success, based only on facial responses. The area under the curve for the best “liking” classifier was 0.82 when using a challenging leave-one-commercial-out testing regime (accuracy = 81%). We build on preliminary findings and show that improved smile detection can lead to a reduction in misclassifications. Comparison of the two smile detection algorithms showed that improved smile detection helps correctly classify responses recorded in challenging lighting conditions and those in which the expressions were subtle. Temporal discriminative approaches to classification performed most strongly, showing that temporal information about an individual's response is important; it is not just how much a viewer smiles but when they smile. The technique could be employed in personalizing video content presented to people as they view videos over the Internet, or in copy testing of ads to unobtrusively quantify ad effectiveness.
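The leave-one-commercial-out regime corresponds to what scikit-learn calls leave-one-group-out cross-validation; a minimal sketch with synthetic stand-ins for the temporal facial features (the feature construction here is illustrative, not the paper's):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Synthetic stand-ins: temporal smile features per response, binary "liking"
# labels, and the commercial each response was recorded for. Holding out one
# commercial at a time tests generalization to unseen ads.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))         # e.g. smile intensity over time windows
y = rng.integers(0, 2, 300)            # "liking" label
commercial = rng.integers(0, 15, 300)  # group id: which ad was viewed

auc = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                      groups=commercial, cv=LeaveOneGroupOut(),
                      scoring="roc_auc")
print(auc.mean())  # near 0.5 on this random data; 0.82 reported in the study
```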

10.
Visual prostheses based on micro-electronic technologies and biomedical engineering have been demonstrated to restore vision to blind individuals. It is necessary to determine the minimum requirements for achieving useful artificial vision for image recognition. To find the primary factors in the recognition of common object and scene images, and to optimize recognition accuracy on low-resolution images using image processing strategies, we investigated the effects of two kinds of image processing methods, two common pixel shapes (square and circular), and six resolutions (8 × 8, 16 × 16, 24 × 24, 32 × 32, 48 × 48 and 64 × 64). The results showed that mean recognition accuracy increased with the number of pixels. The recognition threshold for objects was within the interval of 16 × 16 to 24 × 24 pixels; for simple scenes, it was between 32 × 32 and 48 × 48 pixels. Near the threshold of recognition, different image modes had a great impact on recognition accuracy. The images with “threshold pixel number and binarization-circular points” produced the best recognition results.
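Simulating prosthetic vision at a given resolution is essentially block-averaging followed by two-level quantization; a minimal sketch of the square-pixel, binarized case (the circular-point rendering and the specific thresholding used in the study are omitted):

```python
import numpy as np

def pixelize(image, n):
    """Downsample a grayscale image to an n-by-n grid by block averaging."""
    h, w = image.shape
    ys = np.arange(n + 1) * h // n   # row block boundaries
    xs = np.arange(n + 1) * w // n   # column block boundaries
    out = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            out[i, j] = image[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
    return out

def binarize(blocks, threshold=None):
    """Two-level quantization; the default threshold is the mean gray level."""
    t = blocks.mean() if threshold is None else threshold
    return (blocks >= t).astype(np.uint8)

image = np.random.default_rng(2).uniform(0, 255, (480, 480))  # stand-in photo
for n in (8, 16, 24, 32, 48, 64):                             # the six resolutions
    simulated = binarize(pixelize(image, n))
```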

11.
The purpose of this study was to integrate technology and team-teaching techniques into science teacher education method courses in order to explore the effects of such integration on preservice teachers. The participants included one instructor and a total of 42 preservice teachers. A technology team-teaching (TTT) model was designed in this study to restructure science method courses with technology. This study used a mixed-method design, incorporating both quantitative and qualitative techniques. The results revealed that there were significant differences in “designing an appropriate science topic to be taught with technology” and “integrating computer activities with appropriate pedagogy in classroom instruction” (F = 5.260, p < 0.05, and F = 10.260, p < 0.01, respectively). The results also showed that the TTT model could enhance the integration of science teaching theories and practice. The team-teaching technique facilitated the integration of technology in science lesson design and teaching practice, and enhanced friendship through interaction. The TTT model could improve the science learning experience of preservice teachers and serve as a useful reference for other teacher education institutes.

12.
The Youth Risk Behavior Surveillance System (YRBSS) monitors priority health-risk behaviors among US high school students. To better understand the ramifications of changing the YRBSS from paper-and-pencil to Web administration, in 2008 the Centers for Disease Control and Prevention conducted a study comparing these two modes of administration. Eighty-five schools in 15 states agreed to participate in the study. Within each participating school, four classrooms of students in grades 9 or 10 were randomly assigned to complete the Youth Risk Behavior Survey questionnaire in one of four conditions (in-class paper-and-pencil, in-class Web without programmed skip patterns, in-class Web with programmed skip patterns, and “on your own” Web without programmed skip patterns). Findings included less missing data for the paper-and-pencil condition (1.5% vs. 5.3%, 4.4%, 6.4%; p < .001), less perceived privacy and anonymity among respondents for the in-class Web conditions, and a lower response rate for the “on your own” Web condition than for in-class administration by either mode (28.0% vs. 91.2%, 90.1%, 91.4%; p < .001). Although Web administration might be useful for some surveys, these findings do not favor the use of a Web survey for the YRBSS.

13.
Studies have shown that multitasking with technology, specifically using Social Networking Sites (SNSs), decreases both efficiency and productivity in an academic setting. This study investigates multitasking's impact on the relationship between SNS use and Grade Point Average (GPA) among United States (US; n = 451) and European (n = 406) university students using quantitative and qualitative data analysis. Moderated multiple regression analysis showed that the negative relationship between SNS use and GPA was moderated by multitasking only in the US sample; this may be due to European students being less prone to “disruptive” multitasking. The results provide valuable cautionary information about the impact of multitasking and SNS use in a learning environment on university students' GPAs.
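Moderated multiple regression tests whether the SNS-GPA slope depends on multitasking through an interaction term; a minimal sketch with hypothetical variable names and synthetic data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-student variables; column names are illustrative only.
rng = np.random.default_rng(3)
df = pd.DataFrame({"sns_use": rng.uniform(0, 5, 451),
                   "multitask": rng.uniform(0, 5, 451)})
df["gpa"] = 3.2 - 0.1 * df.sns_use * df.multitask + rng.normal(0, 0.3, 451)

# "sns_use * multitask" expands to both main effects plus their interaction;
# moderation is read off the significance of the interaction coefficient.
model = smf.ols("gpa ~ sns_use * multitask", data=df).fit()
print(model.params["sns_use:multitask"], model.pvalues["sns_use:multitask"])
```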

14.
Elections are a central model in a variety of areas. This paper studies the parameterized computational complexity of five control problems in Maximin elections. We obtain the following results: constructive control by adding candidates is W[2]-hard with respect to the parameter “number of added candidates”; both constructive and destructive control by adding/deleting voters are W[1]-hard with respect to the parameter “number of added/deleted voters”.

15.
Improved land surface emissivities over agricultural areas using ASTER NDVI
Land surface emissivity retrieval over agricultural regions is important for energy balance estimation, land cover assessment and other related environmental studies. The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) produces images of sufficient spatial resolution (from 15 m to 90 m) to be of use in agricultural studies, in which fields of crops are too small to be well resolved by low-resolution sensors. The ASTER project generates land surface emissivity images as a Standard Product (AST05) using the Temperature/Emissivity Separation (TES) algorithm. However, the TES algorithm is prone to scaling errors in estimating emissivities for surfaces with low spectral contrast if the atmospheric correction is inaccurate. This paper compares the land surface emissivity estimated with the TES algorithm against that estimated with a simple approach based on the Normalized Difference Vegetation Index (NDVI) for five ASTER images (28 June 2000, 15 August 2000, 31 August 2000, 28 April 2001 and 2 August 2001) of the agricultural area of Barrax (Albacete, Spain). The results indicate that differences are < 1% for ASTER band 13 (10.7 μm) and < 1.5% for band 14 (11.3 μm), but > 2% for bands 10 (8.3 μm), 11 (8.6 μm) and 12 (9.1 μm). The emissivities for the five ASTER bands were tested against in situ measurements carried out with the CIMEL CE 312-2 field radiometer, the NDVI method giving root mean square errors (RMSE) < 0.005 over vegetated areas and RMSE < 0.015 over bare soil, and the TES algorithm giving RMSE ∼ 0.01 for vegetated areas but RMSE > 0.03 over bare soil. The errors and inconsistencies for ASTER bands 13 and 14 are within those anticipated for TES, but the greater errors for bands 10-12 suggest problems related to atmospheric compensation and model assumptions about soil spectra. The NDVI method uses visible/near-infrared data co-acquired with the thermal images to estimate vegetation cover and hence provides an independent constraint on emissivity. The success of this approach suggests that it may be useful for daytime images of agricultural or other heavily vegetated areas, for which the TES algorithm has occasional failures.
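A common form of the NDVI method (not necessarily the exact variant evaluated here) blends soil and vegetation emissivities by a fractional vegetation cover derived from NDVI; a minimal sketch, with endmember values chosen for illustration:

```python
import numpy as np

def ndvi_emissivity(ndvi, ndvi_soil=0.2, ndvi_veg=0.5, e_soil=0.96, e_veg=0.99):
    """NDVI-threshold emissivity: blend soil and vegetation emissivities by
    fractional vegetation cover. All endmember values here are illustrative
    assumptions; in practice they are band- and site-dependent."""
    pv = np.clip((ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil), 0.0, 1.0) ** 2
    return e_veg * pv + e_soil * (1.0 - pv)

print(ndvi_emissivity(np.array([0.15, 0.35, 0.65])))  # soil -> mixed -> vegetation
```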

16.
17.
North American forest disturbance mapped from a decadal Landsat record
Forest disturbance and recovery are critical ecosystem processes, but the spatial pattern of disturbance has never been mapped across North America. The LEDAPS (Landsat Ecosystem Disturbance Adaptive Processing System) project has assembled a wall-to-wall record of stand-clearing disturbance (clearcut harvest, fire) for the United States and Canada for the period 1990-2000 using the Landsat satellite archive. Landsat TM and ETM+ data were first converted to surface reflectance using the MODIS/6S atmospheric correction approach. Disturbance and early recovery were mapped using the temporal change in a Tasseled-Cap “Disturbance Index” calculated from the early (~ 1990) and later (~ 2000) images. Validation of the continental mapping was carried out using a sample of biennial Landsat time series from 23 locations across the United States. Although a significant amount of disturbance (30-60%) cannot be mapped due to the long interval between image acquisition dates, the biennial analyses allow a first-order correction of the decadal mapping. Our results indicate that disturbance rates of up to 2-3% per year are common across the US and Canada, due primarily to harvest and forest fire. Rates are highest in the southeastern US, the Pacific Northwest, Maine, and Quebec. The mean disturbance rate for the conterminous United States (the “lower 48” states and the District of Columbia) is calculated as 0.9 ± 0.2% per year, corresponding to a turnover period of 110 years.
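The Tasseled-Cap Disturbance Index combines brightness, greenness and wetness after rescaling each component against the statistics of undisturbed forest; a minimal sketch under that reading (the exact normalization used by LEDAPS may differ):

```python
import numpy as np

def disturbance_index(brightness, greenness, wetness, forest_mask):
    """DI = B' - (G' + W'), where each Tasseled-Cap component is rescaled to
    z-scores using the mean/std of undisturbed forest pixels (forest_mask).
    Disturbed (bright, non-green, dry) pixels score high; the decadal mapping
    then thresholds the change in DI between the ~1990 and ~2000 dates."""
    def z(band):
        mu, sigma = band[forest_mask].mean(), band[forest_mask].std()
        return (band - mu) / sigma
    return z(brightness) - (z(greenness) + z(wetness))

# Usage (hypothetical arrays per date):
# di_change = disturbance_index(b_2000, g_2000, w_2000, mask) \
#           - disturbance_index(b_1990, g_1990, w_1990, mask)
```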

18.
Characterizing forest structure is an important part of any comprehensive biodiversity assessment. However, current methods for measuring structural complexity require a laborious process involving many logistically expensive point-based measurements; an automated or semi-automated method would be ideal. In this study, the utility of airborne laser scanning (LiDAR; Light Detection and Ranging) for characterizing the ecological structure of a forest landscape is examined. The innovation of this paper is to use the different laser pulse return properties of a full-waveform LiDAR to characterize forest ecological structure. First, the LiDAR dataset is stratified into four vertical layers: ground, low vegetation (0-1 m from the ground), medium vegetation (1-5 m from the ground) and high vegetation (> 5 m). Subsequently, the “Type” of LiDAR return is analysed: Type 1 (singular returns); Type 2 (first of many returns); Type 3 (intermediate returns); and Type 4 (last of many returns). A forest characterization scheme derived from LiDAR point clouds is proposed, and a validation of the scheme is presented using a network of field sites at which commonly used metrics of biodiversity were recorded. The proposed forest characterization categories allow for quantification of gaps (above bare ground, low vegetation and medium vegetation), canopy cover and its vertical density, as well as the presence of various canopy strata (low, medium and high). Regression analysis showed that LiDAR-derived variables were good predictors of field-recorded variables (R² = 0.82, P < 0.05 between the LiDAR-derived presence of low vegetation and field-derived LAI for low vegetation). The proposed scheme clearly shows the potential of full-waveform LiDAR to provide information on the complexity of habitat structure.
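The stratification itself reduces to two lookups per point, one on height above ground and one on return position; a minimal sketch with hypothetical field names (the 0.15 m cutoff separating ground hits from low vegetation is an assumption):

```python
import numpy as np

LAYERS = np.array(["ground", "low", "medium", "high"])

def height_layer(height_above_ground):
    """Vertical strata: ground, low (0-1 m), medium (1-5 m), high (> 5 m)."""
    idx = np.digitize(height_above_ground, [0.15, 1.0, 5.0])
    return LAYERS[idx]

def return_type(return_number, number_of_returns):
    """Type 1: singular; 2: first of many; 3: intermediate; 4: last of many."""
    single = number_of_returns == 1
    first = (return_number == 1) & ~single
    last = (return_number == number_of_returns) & ~single
    return np.select([single, first, last], [1, 2, 4], default=3)

# Hypothetical point records: height above ground, return number, total returns.
hag = np.array([0.05, 0.6, 3.2, 12.5])
print(height_layer(hag))                                             # per-point stratum
print(return_type(np.array([1, 1, 2, 3]), np.array([1, 3, 3, 3])))   # [1 2 3 4]
```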

19.
This essay examines how students of African descent at a predominantly black college on the East Coast digitally perform their ethnic identities and rhetorics in a freshman composition course. The essay begins by showing how multiple uses of signifying frame students’ Blackboard discussions where they use a type of trickster motif to enact their agreements, disagreements, challenges, and questions, very much akin to Flava Flav's initial cultural role as part of the Rap/activist group, Public Enemy. Students’ online writing groups are then examined by focusing on one particular group, the “Black Long Distance Writers,” whose title signifies and signals the work of the African American writer and activist, John Oliver Killens, most notably his seminal 1973 essay, “Wanted: Some Black Long Distance Runners.” The understandings of these “Black Long Distance Writers” bear the most powerful definition of literacy and computer-based writing instruction because their framework is not contingent upon making digitally divided minorities more technologically advanced and better at one type of English, its culture of power, or its academic discourses. Instead, these students experience rhetoric and writing as a way to alter the ways that knowledge is constructed for them and about them, “revocabularizing” the academy and its technologies. Such freshman writers are re-envisioned in this kind of cyberspace as constructors of and co-participants in black intellectual and rhetorical traditions … now AfroDigitized.

20.
Improvements to a MODIS global terrestrial evapotranspiration algorithm
The MODIS global evapotranspiration (ET) products of Mu et al. [Mu, Q., Heinsch, F. A., Zhao, M., & Running, S. W. (2007). Development of a global evapotranspiration algorithm based on MODIS and global meteorology data. Remote Sensing of Environment, 111, 519-536. doi:10.1016/j.rse.2007.04.015] are the first regular 1-km² land surface ET dataset for the 109.03 million km² of global vegetated land area at an 8-day interval. In this study, we have further improved the ET algorithm of Mu et al. (2007a; hereafter the old algorithm) by 1) simplifying the calculation of vegetation cover fraction; 2) calculating ET as the sum of daytime and nighttime components; 3) adding a soil heat flux calculation; 4) improving estimates of stomatal conductance, aerodynamic resistance and boundary layer resistance; 5) separating the dry canopy surface from the wet; and 6) dividing the soil surface into saturated wet surface and moist surface. We compared the improved algorithm with the old one both globally and locally at 46 eddy flux towers. The global annual total ET over the vegetated land surface is 62.8 × 10³ km³, which agrees very well with other reported estimates of 65.5 × 10³ km³ over the terrestrial land surface and is much higher than the 45.8 × 10³ km³ estimated with the old algorithm. For ET evaluation at eddy flux towers, the improved algorithm reduces the mean absolute error (MAE) of daily ET from 0.39 mm day⁻¹ to 0.33 mm day⁻¹ when driven by tower meteorological data, and from 0.40 mm day⁻¹ to 0.31 mm day⁻¹ when driven by GMAO data, a global meteorological reanalysis dataset. MAE values for the improved ET algorithm are 24.6% and 24.1% of the ET measured at towers, within the range (10-30%) of the reported uncertainties in ET measurements, implying an enhanced accuracy of the improved algorithm. Compared to the old algorithm, the improved algorithm increases the skill score of tower-driven ET estimates from 0.50 to 0.55, and of GMAO-driven ET from 0.46 to 0.53. Based on these results, the improved ET algorithm performs better in generating global ET data products, providing critical information on global terrestrial water and energy cycles and environmental changes.
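Improvement 1), the simplified vegetation cover fraction, can be read as a linear rescaling of EVI clamped to [0, 1]; a minimal sketch (the 0.05 and 0.95 endpoints are assumed here, not taken from the abstract above):

```python
import numpy as np

def vegetation_cover_fraction(evi, evi_min=0.05, evi_max=0.95):
    """Simplified Fc as a linear rescaling of EVI clamped to [0, 1].
    The endpoint constants are assumptions for illustration."""
    return np.clip((evi - evi_min) / (evi_max - evi_min), 0.0, 1.0)

print(vegetation_cover_fraction(np.array([0.02, 0.35, 0.97])))  # [0. 0.333... 1.]
```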
