Similar Documents
20 similar documents found.
1.
Ergonomics, 2012, 55(9): 1255-1260
Abstract

The purpose of this study was to investigate how altering surfboard volume (BV) affects energy expenditure during paddling. Twenty surfers paddled in a swim flume twice on each of five surfboards, in random order. The surfboards varied only in thickness, ranging in BV from 28.4 to 37.4 L. Heart rate (HR), oxygen consumption (VO2), pitch angle, roll angle and paddling cadence were measured. VO2 and HR decreased significantly on thicker boards [VO2: r = −0.984, p = 0.003; HR: r = −0.972, p = 0.006], as did pitch and roll angles [pitch: r = −0.995, p < 0.001; roll: r = −0.911, p = 0.031]. These results suggest that increasing BV reduces the metabolic cost of paddling by lowering pitch and roll angles, providing mechanical evidence for greater paddling efficiency on higher-volume surfboards.

Practitioner Summary: This study investigated the impact of surfboard volume on energy expenditure during paddling. The results suggest that increasing surfboard volume reduces the metabolic cost of paddling by lowering pitch and roll angles, providing mechanical evidence for greater paddling efficiency on higher-volume surfboards.

2.
Medication omissions and dosing failures are frequent during transitions in patient care. Medication reconciliation (MR) requires bridging discrepancies in a patient's medical history as the setting of care changes. MR has been identified as vulnerable to failure, yet clinicians' cognition during MR remains poorly described in the literature. We sought to explore cognition in MR tasks, specifically how clinicians make sense of conditions and medications. We observed 24 anesthesia providers performing a card-sorting task, sorting the conditions and medications of a fictional patient, and analyzed the spatial properties of the data using statistical methods. Most of the participants (58%) arranged the medications along a straight line (p < 0.001) and sorted them by organ system (Friedman's χ²(54) = 325.7, p < 0.001). These arrangements reflected the clinical correspondence between each pair of medications (Wilcoxon W = 192.0, p < 0.001). A cluster analysis showed that subjects matched conditions and medications related to the same organ system (Wilcoxon W = 1917.0, p < 0.001). We conclude that clinicians commonly arranged the information into two groups (conditions and medications) and assigned an internal order within these groups according to organ systems; they also matched conditions to medications by similar criteria. These findings were further supported by verbal protocol analysis, and they strengthen the argument that organ-based information is pivotal to clinicians' cognition during MR. Understanding the strategies and heuristics clinicians employ throughout the MR process may help develop practices that promote patient safety.

3.
We propose and analyze threading algorithms for hybrid MPI/OpenMP parallelization of a molecular-dynamics simulation that are scalable on large multicore clusters. Two data-privatization thread scheduling algorithms via nucleation-growth allocation are introduced: (1) compact-volume allocation scheduling (CVAS) and (2) breadth-first allocation scheduling (BFAS). The algorithms combine fine-grain dynamic load balancing with minimal-memory-footprint data-privatization threading. We show that the computational costs of CVAS and BFAS are bounded by Θ(n^(5/3) p^(−2/3)) and Θ(n), respectively, for p threads working on n particles on a multicore compute node. Memory consumption per node of both algorithms scales as O(n + n^(2/3) p^(1/3)), but CVAS has smaller prefactors due to a geometric effect. Based on these analyses, we derive the selection criterion between the two algorithms in terms of the granularity n/p. We observe that memory consumption is reduced by 75% for p = 16 and n = 8,192 compared to naïve data privatization, while thread imbalance is kept below 5%. We obtain a strong-scaling speedup of 14.4 with 16-way threading on a four quad-core AMD Opteron node. In addition, our MPI/OpenMP code achieves 2.58× and 2.16× speedups over the MPI-only implementation on 32,768 cores of BlueGene/P for 0.84 and 1.68 million particle systems, respectively.

4.
It is useful to have a disaggregated population database at uniform grid units in disaster situations. This study presents a method for estimating settlement location probability and population density at 90 m resolution for northern Iraq using the Shuttle Radar Topographic Mission (SRTM) digital terrain model and Landsat Enhanced Thematic Mapper satellite imagery. Spatial models for calculating the probability of settlement location and for estimating population density are described. A randomly selected subset of the field data (50%) is first analysed for statistical links between settlement location probability, population density, and various biophysical features extracted from the Landsat or SRTM data, and the model is calibrated on this subset. Settlement location probability is attributed to distance from roads, distance from water bodies, and land cover. Population density can be estimated from land cover and topographic features. The Landsat data are processed using segmentation followed by feature-based classification, making the method robust to seasonal variations in imagery and therefore applicable to a time series of images regardless of acquisition date. The second half of the field data is used to validate the model. Results show a reasonable estimate of population numbers (r = 0.205, p < 0.001) for both rural and urban settlements. Although there is a strong overall correlation between the results of this model and the LandScan model (r = 0.464, p < 0.001), this method performs better than the 1 km resolution LandScan grid for settlements with fewer than 1000 people, but is less accurate for estimating population numbers in urban areas (LandScan rural r = 0.181, p < 0.001; LandScan urban r = 0.303, p < 0.001). The correlation with true urban population numbers is superior to that of LandScan, however, when the 90 m grid values are summed using a filter corresponding to the LandScan spatial resolution (r = 0.318, p < 0.001).
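
To make the two-stage structure concrete, here is a toy Python sketch of the same idea: a settlement-probability surface driven by distances to roads/water plus land cover, and a land-cover-based density lookup. Every coefficient and value below is invented for illustration; the study calibrates these relationships from field data.

    import numpy as np

    # Toy version of the two-stage model. All coefficients and lookup
    # values are invented for illustration only.
    rng = np.random.default_rng(1)
    dist_road = rng.uniform(0, 5000, size=(50, 50))    # metres, per 90 m cell
    dist_water = rng.uniform(0, 5000, size=(50, 50))
    landcover = rng.integers(0, 3, size=(50, 50))      # 0 bare, 1 grass, 2 urban

    # Stage 1: settlement location probability (logistic model).
    logit = 1.5 - 0.001 * dist_road - 0.0005 * dist_water + 0.8 * (landcover == 2)
    p_settlement = 1 / (1 + np.exp(-logit))

    # Stage 2: population density looked up by land cover class.
    density = np.array([0.5, 2.0, 60.0])               # persons per cell by class
    population = (p_settlement * density[landcover]).sum()
    print(f"estimated population: {population:.0f}")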

5.
S. Kung, T. Kailath, Automatica, 1980, 16(4): 399-403
The so-called minimal design problem (MDP) of linear system theory is to find a proper, minimal-degree rational matrix solution of the equation H(z)D(z) = N(z), where {N(z), D(z)} are given p×r and m×r polynomial matrices with D(z) of full rank r ≤ m. We describe some solution algorithms that appear to be more efficient (in number of computations and in potential numerical stability) than those presently known. The algorithms are based on the structure of a polynomial echelon form of the left minimal basis of the so-called generalized Sylvester resultant matrix of {N(z), D(z)}. Orthogonal projection algorithms that exploit the Toeplitz structure of this resultant matrix are used to reduce the number of computations needed for the solution.

6.
Structural and functional analyses of ecosystems benefit when high-accuracy vegetation coverages can be derived over large areas. In this study, we utilize IKONOS, Landsat 7 ETM+, and airborne scanning light detection and ranging (lidar) data to quantify coniferous forest and understory grass coverages in a ponderosa pine (Pinus ponderosa) dominated ecosystem in the Black Hills of South Dakota. Linear spectral mixture analyses of IKONOS and ETM+ data were used to isolate spectral endmembers (bare soil, understory grass, and tree/shade) and calculate their subpixel fractional coverages. We then compared these endmember cover estimates to similar cover estimates derived from lidar data and field measures. The IKONOS-derived tree/shade fraction was significantly correlated with the field-measured canopy effective leaf area index (LAIe) (r² = 0.55, p < 0.001) and with the lidar-derived estimate of tree occurrence (r² = 0.79, p < 0.001). The enhanced vegetation index (EVI) calculated from IKONOS imagery was negatively correlated with the field-measured canopy effective LAI and the lidar tree cover response (r² = 0.30, r = −0.55 and r² = 0.41, r = −0.64, respectively; p < 0.001), and further analyses indicate a strong linear relationship between EVI and the IKONOS-derived grass fraction (r² = 0.99, p < 0.001). We also found that EVI agreed better with the subpixel vegetation fractions in this ecosystem than the normalized difference vegetation index (NDVI). Coarsening the IKONOS data to 30 m resolution revealed a stronger relationship with the lidar tree measures (r² = 0.77, p < 0.001) than at 4 m resolution (r² = 0.58, p < 0.001). Unmixed tree/shade fractions derived from 30 m resolution ETM+ imagery also showed a significant correlation with the lidar data (r² = 0.66, p < 0.001). These results demonstrate the power of high-resolution lidar data for validating spectral unmixing results of satellite imagery, and indicate that both IKONOS and Landsat 7 ETM+ data can make the important distinction between tree/shade coverage and exposed understory grass coverage during peak summertime greenness in a ponderosa pine forest ecosystem.
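
As an illustration of the linear spectral mixture analysis described above, the following Python sketch unmixes a single pixel into three endmember fractions with a least-squares solve. The band count, endmember spectra and pixel values are made-up numbers, not data from the study.

    import numpy as np

    # Columns are hypothetical endmember spectra (bare soil, grass,
    # tree/shade) sampled at four bands; values are illustrative only.
    E = np.array([[0.30, 0.05, 0.02],
                  [0.40, 0.10, 0.04],
                  [0.50, 0.45, 0.20],
                  [0.55, 0.30, 0.10]])
    pixel = np.array([0.26, 0.19, 0.36, 0.29])   # observed mixed spectrum

    # Append a heavily weighted row of ones to softly enforce that the
    # fractions sum to one.
    w = 1e3
    A = np.vstack([E, w * np.ones((1, 3))])
    b = np.append(pixel, w)
    fractions, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(dict(zip(["soil", "grass", "tree_shade"], fractions.round(3))))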

7.
Andrej Dujella, Computing, 2009, 85(1-2): 77-83
Wiener's attack is a well-known polynomial-time attack on an RSA cryptosystem with small secret decryption exponent d; it works if d < n^0.25, where n = pq is the modulus of the cryptosystem. Namely, in that case d is the denominator of some convergent p_m/q_m of the continued fraction expansion of e/n, and therefore d can be computed efficiently from the public key (n, e). There are several extensions of Wiener's attack that allow the RSA cryptosystem to be broken when d is a few bits longer than n^0.25. They all have run-time complexity (at least) O(D²), where d = D·n^0.25. Here we propose a new variant of Wiener's attack, which uses results on Diophantine approximations of the form |α − p/q| < c/q², together with a "meet-in-the-middle" variant for testing the candidates (of the form r·q_{m+1} + s·q_m) for the secret exponent. This decreases the run-time complexity of the attack to O(D log D) (with space complexity O(D)).
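
For intuition, here is a minimal Python sketch of the classic Wiener attack that the abstract builds on: it walks the convergents p_m/q_m of e/n and tests each denominator as a candidate d. It does not implement Dujella's meet-in-the-middle refinement, and the toy key below is the standard textbook example.

    from math import isqrt

    def convergents(a, b):
        # Yield the convergents (numerator, denominator) of the continued
        # fraction expansion of a/b.
        p0, p1, q0, q1 = 0, 1, 1, 0
        while b:
            c, r = divmod(a, b)
            p0, p1 = p1, c * p1 + p0
            q0, q1 = q1, c * q1 + q0
            yield p1, q1
            a, b = b, r

    def wiener(e, n):
        for k, d in convergents(e, n):
            if k == 0:
                continue
            phi, rem = divmod(e * d - 1, k)   # if d is right, e*d = 1 + k*phi(n)
            if rem:
                continue
            s = n - phi + 1                   # then p + q = n - phi(n) + 1
            disc = s * s - 4 * n
            if disc >= 0 and isqrt(disc) ** 2 == disc:
                return d                      # x^2 - s*x + n factors over Z
        return None

    print(wiener(e=17993, n=90581))           # toy key with d = 5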

8.
Let S be one of two multiplicative semigroups: the M × M Boolean matrices, or the semigroup of M × M matrices over the field GF(2). For any matrix A ∈ S there exist two unique smallest numbers, the index k and the period d, such that A^k = A^(k+d). This fact allows us to form a new statistical test for randomness, which we call the Semigroup Matrix Test. In this paper, we present details and results of our experiments for this test. We use Boolean matrices of size M = 2, ..., 5 and matrices over GF(2) of size M = 2, ..., 6. We also compare the results with those obtained by the well-known Binary Matrix Rank Test.
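
A brute-force Python sketch of the quantity the test is built on, assuming the Boolean matrix product (OR of ANDs): it iterates powers of A until one repeats and reads off the index k and period d. For the tiny M used in the paper this is entirely feasible.

    import numpy as np

    def index_and_period(A):
        # Return the smallest (k, d) with A^k == A^(k+d) in the Boolean
        # semigroup; for the GF(2) semigroup, replace the product below
        # with (P.astype(int) @ A.astype(int)) % 2 == 1.
        seen = {}
        P, step = A.copy(), 1
        while True:
            key = P.tobytes()
            if key in seen:
                k = seen[key]
                return k, step - k
            seen[key] = step
            P = (P.astype(int) @ A.astype(int) > 0)   # Boolean matrix product
            step += 1

    rng = np.random.default_rng(7)
    A = rng.integers(0, 2, size=(4, 4)).astype(bool)
    print(index_and_period(A))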

9.
We describe two new parallel algorithms, one conservative and one optimistic, for discrete-event simulation on an exclusive-read exclusive-write parallel random-access machine (EREW PRAM). The target physical systems are bounded-degree networks represented by logic circuits. Employing p processors, our conservative algorithm can simulate up to O(p) independent messages of a system with n logical processes in O(log n) time. The number of processors p can be optimally varied in the range 1 ≤ p ≤ n. To identify independent messages, this algorithm also introduces a novel scheme based on a variable-size time window. Our optimistic algorithm is designed to reduce the rollback frequency and the memory required to save past states and messages. The optimistic algorithm likewise simulates the O(p) earliest messages on a p-processor computer in O(log n) time. To our knowledge, such theoretical efficiency in parallel simulation algorithms, conservative or optimistic, is achieved here for the first time.

10.
An empirical analysis was performed to compare the effectiveness of different approaches to training a set of procedural skills in a sample of novice trainees. Sixty-five participants were randomly assigned to one of three training groups: (1) learning-by-doing in a 3D desktop virtual environment, (2) learning-by-observing a video (show-and-tell) explanation of the procedures, and (3) trial-and-error. In each group, participants were trained on two car service procedures and were recalled to perform a procedure either 2 or 4 weeks after the training. The results showed that: (1) participants trained through the virtual approach of learning-by-doing performed both procedures significantly better (p < .05 in terms of errors and time) than people in the non-virtual groups; (2) after a period of non-use, the virtual training group recovered their skills more effectively than the non-virtual groups (p < .05); (3) after a (simulated) long period from the training, i.e. up to 12 weeks, people who experienced 3D environments consistently performed better than people who received other kinds of training. The results also suggested that, independently of training group, trainees' visuospatial abilities predicted performance, at least for the complex service procedure (adjusted R² = .460), and that post-training performance of people trained through virtual learning-by-doing is not affected by learning styles. Finally, a strong relationship (p < .001, R² = .441) was identified between usability and trust in the use of the virtual training tool: the more the system was perceived as usable, the more it was perceived as trustworthy for acquiring the competences.

11.

Context

Test-driven development is an approach to software development in which automated tests are written before production code in highly iterative cycles. Test-driven development attracts attention as well as followers in professional environments; however, empirical evidence of its superiority over test-last development with regard to productivity, code, and tests is still fairly limited. Moreover, it is not clear whether the supposed benefits come from writing tests before code or rather from the high iterativity/short development cycles.

Objective

This paper describes a family of controlled experiments comparing test-driven development to micro-iterative test-last development, with emphasis on productivity, code properties (external quality and complexity) and tests (code coverage and fault-finding capabilities).

Method

Subjects were randomly assigned to test-driven and test-last groups. Controlled experiments were conducted over two years in an academic environment and in different developer contexts (pair programming and individual programming). The number of successfully implemented stories, percentage of successful acceptance tests, McCabe's code complexity, code coverage and mutation score indicator were measured.

Results

Experimental results and their selective meta-analysis show no statistically significant differences between test-driven development and iterative test-last development regarding productivity (χ²(6) = 4.799, p = 1.0, r = .107, 95% CI (confidence interval): −.149 to .349), code complexity (χ²(6) = 8.094, p = .46, r = .048, 95% CI: −.254 to .341), branch coverage (χ²(6) = 13.996, p = .059, r = .182, 95% CI: −.081 to .421), percentage of acceptance tests passed (one experiment, Mann-Whitney U = 125.0, p = .98, r = .066) or mutation score indicator (χ²(4) = 3.807, p = .87, r = .128, 95% CI: −.162 to .398).

Conclusion

According to our findings, the benefits of test-driven development over iterative test-last development are small and thus in practice relatively unimportant, although the effects are positive. There is an indication that test-driven development yields better branch coverage, but the effect size is considered small.

12.
When using hammer drills, the user is exposed to vibrations that can cause damage to the body. Those vibrations can be affected by external factors such as feed forces, which can increase the degree of damage to the user. However, it is currently unknown whether the lateral forces applied by the user also influence the technical system and whether these influences depend on the system. For this reason, a study with 1152 test runs was carried out on a test rig to investigate the relationship between the feed force and the lateral force, as a function of the hammer drill setup, on the vibrations at the hammer drill housing and main handle. The experiment showed that the feed (p < .001, up to r = 0.57) and lateral (p < .001, up to r = 0.77) forces influenced the vibrations of the hammer drill. However, these effects depended strongly on the technical system and hence cannot be generalized. Furthermore, it was shown that the impact frequency of the hammer drill was reduced by increasing both the feed force (p < .001, r = 0.55) and the lateral force (p < .001, r = 0.23). The findings can be used by engineers and scientists not only to further develop vibration standards, but also to design more ergonomic hammer drills. In particular, the vibration decoupling of hammer drills should be redesigned so that lateral forces do not lead to an increase in vibrations that are harmful to the user.

13.
We present a randomized parallel list ranking algorithm for distributed memory multiprocessors, using a BSP-type model. We first describe a simple version which requires, with high probability, log(3p) + log ln(n) = Õ(log p + log log n) communication rounds (h-relations with h = Õ(n/p)) and Õ(n/p) local computation. We then outline an improved version that requires, with high probability, only r ≤ (4k + 6) log((2/3)p) + 8 = Õ(k log p) communication rounds, where k = min{i ≥ 0 : ln^(i+1) n ≤ ((2/3)p)^(2^(i+1))}. Note that k ≤ ln*(n) is an extremely small number; for n and p ≥ 4, the value of k is at most 2. Hence, for a given number of processors p, the number of communication rounds required is, for all practical purposes, independent of n. For n ≤ 1,500,000 and 4 ≤ p ≤ 2048, the number of communication rounds in our algorithm is bounded, with high probability, by 78, but the actual number of communication rounds observed so far is 25 in the worst case. For n ≤ 100^(10^100) and 4 ≤ p ≤ 2048, the number of communication rounds in our algorithm is bounded, with high probability, by 118, and we conjecture that the actual number of communication rounds required will not exceed 50. Our algorithm has a considerably smaller number of communication rounds than the list ranking algorithm used in Reid-Miller's empirical study of parallel list ranking on the Cray C-90.(1) To our knowledge, Reid-Miller's algorithm(1) was the fastest list ranking implementation so far. Therefore, we expect that our result will have considerable practical relevance.
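
For context, here is a sequential Python emulation of textbook pointer-jumping list ranking (Wyllie's algorithm), where each while-iteration corresponds to one parallel round. It illustrates the O(log n) round structure of the problem, not the BSP algorithm of the paper.

    def list_rank(succ):
        # succ[i] is the successor of node i; the tail points to itself.
        # Returns, for every node, its distance to the tail.
        n = len(succ)
        rank = [0 if succ[i] == i else 1 for i in range(n)]
        nxt = succ[:]
        # Each iteration doubles the span of nxt: one parallel round.
        while any(nxt[i] != nxt[nxt[i]] for i in range(n)):
            rank = [rank[i] + rank[nxt[i]] for i in range(n)]
            nxt = [nxt[nxt[i]] for i in range(n)]
        return rank

    print(list_rank([1, 2, 3, 4, 4]))   # 0->1->2->3->4 gives [4, 3, 2, 1, 0]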

14.
The paper proves a theorem stating that for input signal sources with symbol probabilities p(s) = 2^(−l), one can find an optimal encoding procedure.
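
A short sketch of why such sources admit an optimal code: when every probability is a power of two, the lengths l_s = −log2 p(s) satisfy Kraft's inequality with equality, and a canonical prefix code built from them achieves average length exactly equal to the entropy. The probabilities below are made up for illustration.

    from math import log2

    probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}   # dyadic: p(s) = 2^(-l)
    lengths = {s: int(-log2(p)) for s, p in probs.items()}
    assert sum(2.0 ** -l for l in lengths.values()) == 1.0  # Kraft equality

    # Build a canonical prefix code from the codeword lengths.
    code, next_code, prev_len = {}, 0, 0
    for s in sorted(lengths, key=lengths.get):
        next_code <<= lengths[s] - prev_len
        code[s] = format(next_code, f"0{lengths[s]}b")
        next_code += 1
        prev_len = lengths[s]

    avg = sum(p * lengths[s] for s, p in probs.items())
    entropy = -sum(p * log2(p) for p in probs.values())
    print(code, avg, entropy)   # average length 1.75 bits = entropy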

15.
We study the edge-coloring problem in the message-passing model of distributed computing, one of the most fundamental problems in this area. Currently, the best-known deterministic algorithm for (2Δ − 1)-edge-coloring requires O(Δ) + log* n time (Panconesi and Rizzi in Distrib Comput 14(2):97–100, 2001), where Δ is the maximum degree of the input graph. Also, recent results of Barenboim and Elkin (2010) for vertex-coloring imply that one can get an O(Δ)-edge-coloring in O(Δ^ε · log n) time, and an O(Δ^(1+ε))-edge-coloring in O(log Δ log n) time, for an arbitrarily small constant ε > 0. In this paper we devise a significantly faster deterministic edge-coloring algorithm. Specifically, our algorithm computes an O(Δ)-edge-coloring in O(Δ^ε) + log* n time, and an O(Δ^(1+ε))-edge-coloring in O(log Δ) + log* n time. This result improves the state-of-the-art running time for deterministic edge-coloring with this number of colors in almost the entire range of maximum degree Δ. Moreover, it improves it exponentially in a wide range of Δ, specifically for 2^(Ω(log* n)) ≤ Δ ≤ polylog(n). In addition, for small values of Δ (up to log^(1−δ) n, for some fixed δ > 0) our deterministic algorithm outperforms all existing randomized algorithms for this problem. Our algorithm is also the first O(Δ)-edge-coloring algorithm with running time o(Δ) + log* n for the entire range of Δ; all previous (deterministic and randomized) O(Δ)-edge-coloring algorithms require Ω(min{Δ, √(log n)}) time. On our way to these results we study the vertex-coloring problem on graphs with bounded neighborhood independence. This is a large family of graphs, which strictly includes line graphs of r-hypergraphs (i.e., hypergraphs in which each hyperedge contains r or fewer vertices) for r = O(1), and graphs of bounded growth. We devise a very fast deterministic algorithm for vertex-coloring graphs with bounded neighborhood independence. This algorithm directly gives rise to our edge-coloring algorithms, which apply to general graphs. Our main technical contribution is a subroutine that computes an O(Δ/p)-defective p-vertex coloring of graphs with bounded neighborhood independence in O(p²) + log* n time, for a parameter 1 ≤ p ≤ Δ. In all previous efficient distributed routines for m-defective p-coloring the product m·p is super-linear in Δ; in our routine this product is linear in Δ, which enables us to speed up the algorithm drastically.
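
To see where the (2Δ − 1) bound comes from: an edge shares an endpoint with at most 2Δ − 2 other edges, so a greedy pass always finds a free color among 2Δ − 1. The sequential Python sketch below shows only this baseline; the paper's contribution is achieving comparable colorings quickly in the distributed message-passing model.

    from collections import defaultdict

    def greedy_edge_coloring(edges):
        # Assign each edge the smallest color unused at both endpoints.
        used = defaultdict(set)          # vertex -> colors on incident edges
        color = {}
        for u, v in edges:
            c = 0
            while c in used[u] or c in used[v]:
                c += 1
            color[(u, v)] = c
            used[u].add(c)
            used[v].add(c)
        return color

    # 4-cycle plus a chord: max degree 3, so at most 2*3 - 1 = 5 colors.
    print(greedy_edge_coloring([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))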

16.
Assessments of tree/grass fractional cover in savannahs using remote sensing are challenging due to the heterogeneous mixture of the two plant functional types. Time-series decomposition models can be used to characterize vegetation phenology from satellite data, but have rarely been used for attributing phenological signal components to different plant functional types. Here, tree/grass dynamics are assessed in savannah ecosystems using time-series decomposition of 14 years of Moderate Resolution Imaging Spectroradiometer (MODIS) normalized difference vegetation index data acquired from 2002 to 2015. The decomposition method uses harmonic analysis and tests the individual harmonic terms for statistical significance. Field data on fractional cover of trees and grasses were collected for 28 plots in Kruger National Park, South Africa, and the matching MODIS pixels were analysed for their tree/grass phenological signals. Tree/grass annual and interannual variability were then assessed based on the harmonic models. In most harmonic cycles, grass-dominated sites had higher amplitudes than tree-dominated sites, while tree green-up started earlier than grass green-up, before the start of the wet season. While changes in tree phenology are gradual, grasses present higher variability over time. Tree cover showed a significant correlation with the amplitude (correlation coefficient r = −0.59, p = 0.001) and phase of the first harmonic term (r = −0.73, p = 0.0001) and the number of cycles of the second harmonic term (r = 0.56, p = 0.002). Grass cover was also significantly correlated with the amplitude (r = 0.51, p = 0.005) and phase of the first harmonic term (r = 0.55, p = 0.002) and the number of cycles of the second harmonic term (r = −0.52, p = 0.005). The positive correlation of grass cover with phase indicates a late greening period, and the negative correlation with the number of cycles indicates higher variability. Tree cover estimated from the phase of the strongest harmonic term showed a positive correlation with field-measured tree cover (coefficient of determination R² = 0.55, p < 0.01, slope = 0.93, root mean square error = 13.26%). The estimated tree cover also correlated strongly with the woody cover map (r = 0.78, p < 0.01) produced by Bucini. The results show that MODIS time-series data can be used to estimate fractional tree cover in heterogeneous savannahs from the phase of each plant functional type's phenological behaviour. This study shows that harmonic analysis can discriminate between fractional cover by trees and by grasses in savannahs. The quantitative analysis of tree/grass phenology from satellite time-series data enables a better understanding of the dynamics of tree/grass competition and coexistence.
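
The harmonic decomposition used here reduces to ordinary least squares on sine/cosine regressors. The Python sketch below fits a two-term harmonic model to a synthetic annual NDVI series and recovers the amplitude and phase of the first term, the quantities correlated with tree and grass cover above. All numbers are synthetic, not MODIS data.

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(0, 1, 1 / 23)              # one year of ~16-day composites
    ndvi = (0.4 + 0.2 * np.cos(2 * np.pi * (t - 0.35))
            + 0.02 * rng.standard_normal(t.size))

    X = np.column_stack([np.ones_like(t),
                         np.cos(2 * np.pi * t), np.sin(2 * np.pi * t),   # 1st harmonic
                         np.cos(4 * np.pi * t), np.sin(4 * np.pi * t)])  # 2nd harmonic
    coef, *_ = np.linalg.lstsq(X, ndvi, rcond=None)

    amp1 = np.hypot(coef[1], coef[2])        # amplitude of the annual term
    phase1 = np.arctan2(coef[2], coef[1])    # phase: timing of peak greenness
    print(f"mean={coef[0]:.2f}  amplitude={amp1:.2f}  phase={phase1:.2f} rad")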

17.
A k-query locally decodable code (LDC) C : Σ^n → Γ^N encodes each message x into a codeword C(x) such that each symbol of x can be probabilistically recovered by querying only k coordinates of C(x), even after a constant fraction of the coordinates has been corrupted. Yekhanin (J ACM 55:1–16, 2008) constructed a 3-query LDC of subexponential length, N = exp(exp(O(log n / log log n))), under the assumption that there are infinitely many Mersenne primes. Efremenko (Proceedings of the 41st Annual ACM Symposium on Theory of Computing, ACM, New York, 2009) constructed a 3-query LDC of length N_2 = exp(exp(O(√(log n · log log n)))) with no assumption, and a 2^r-query LDC of length N_r = exp(exp(O((log n · (log log n)^(r−1))^(1/r)))), for every integer r ≥ 2. Itoh and Suzuki (IEICE Trans Inform Syst E93-D(2):263–270, 2010) gave a composition method in Efremenko's framework and constructed a 3·2^(r−2)-query LDC of length N_r, for every integer r ≥ 4, improving the query complexity of Efremenko's LDC of the same length by a factor of 3/4. The main ingredient of Efremenko's construction is the Grolmusz construction of super-polynomial-size set systems with restricted intersections over Z_m, where m possesses a certain "good" algebraic property (related to the "algebraic niceness" property of Yekhanin). Efremenko constructed a 3-query LDC based on m = 511 and left as an open problem finding other numbers that offer the same property for LDC constructions. In this paper, we develop the algebraic theory behind the constructions of Yekhanin and Efremenko, in an attempt to understand the "algebraic niceness" phenomenon in Z_m. We show that every integer m = pq = 2^t − 1, where p, q, and t are prime, possesses the same good algebraic property as m = 511 that allows savings in query complexity. We identify 50 numbers of this form by computer search, which, together with 511, are then applied to improve query complexity via Itoh and Suzuki's composition method. More precisely, we construct a 3^⌈r/2⌉-query LDC for every positive integer r < 104 and a ⌊(3/4)^51 · 2^r⌋-query LDC for every integer r ≥ 104, both of length N_r, improving on the 2^r queries used by Efremenko and the 3·2^(r−2) queries used by Itoh and Suzuki. We also obtain new efficient private information retrieval (PIR) schemes from the new query-efficient LDCs.
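
A sketch of the kind of computer search the abstract mentions, assuming sympy for factoring: it scans prime exponents t and keeps those for which 2^t − 1 is a product of two distinct primes. The small bound keeps the factoring fast; the paper's search of course goes much further.

    from sympy import isprime, factorint

    for t in range(2, 64):
        if not isprime(t):
            continue
        m = 2 ** t - 1
        factors = factorint(m)               # {prime: exponent}
        if len(factors) == 2 and all(e == 1 for e in factors.values()):
            p, q = sorted(factors)
            print(f"t = {t}: 2^{t} - 1 = {p} * {q}")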

18.
For any angle α < 2π, we show that any connected communication graph induced by a set P of n transceivers using omni-directional antennas of radius 1 can be replaced by a strongly connected communication graph in which each transceiver in P is equipped with a directional antenna of angle α and radius r_dir, for some constant r_dir = r_dir(α). Moreover, the new communication graph is a c-spanner of the original graph with respect to number of hops, for some constant c = c(α).

19.
Virtual reality games for rehabilitation are attracting growing interest. In particular, there is a demand for games that allow therapists to identify an individual's difficulties and to customize variables such as speed, size, distance, and visual and auditory feedback. This study presents a virtual reality software package (Bridge Games) designed to promote the rehabilitation of individuals living with disabilities, and highlights preliminary research on its use for motor learning and rehabilitation. First, the study presents seven games in the software package that can be chosen by the rehabilitation team according to the patient's needs; each game's characteristics are described, including its name, presentation, objective and measurements of value for rehabilitation. Second, preliminary results illustrate applications of two of the games, based on 343 people with various disabilities and health conditions. In the Coincident Timing game, there was a main effect of movement sensor type on the average time achieved by the sample analyzed, F(2, 225) = 4.42, p < 0.05; in this instance the most functional device was the keyboard, compared with Kinect and touch screen. Similarly, in the Challenge! game, a main effect was found for movement sensor type; in this case, however, the touch screen provided better performance than Kinect and Leap Motion, F(2, 709) = 5.90, p < 0.01. Thus, Bridge Games is a viable software package for quantifying motor learning, and the findings suggest that motor skills may be practiced differently depending on the environmental interface through which a game is used.

20.
Ergonomics, 2012, 55(11): 1540-1550
Abstract

Portable ladder incidents remain a major cause of falls from height. This study reports field observations of the environments, work conditions and safety behaviours involved in portable ladder use, and their correlations with self-reported safety performance. Seventy-five professional installers employed by a company in the cable and other pay-TV industry were observed during 320 ladder uses at their worksites. The participants also completed a questionnaire measuring self-reported safety performance. Proper setup on slippery surfaces, the correct method for setting ladder inclination, and securing the ladder at the bottom had the lowest compliance with best practices and training guidelines. The observed compliance score correlated significantly with the straight-ladder inclination angle (Pearson's r = 0.23, p < 0.0002) and with employees' self-reported safety participation (r = 0.29, p < 0.01). The results provide a broad perspective on employees' safety compliance and identify areas for improving safety behaviours.

Practitioner Summary: A checklist was used while observing professional installers of a cable company during portable ladder use at their worksites. Items with the lowest compliance with best practices and training guidelines were identified. The results provide a broad perspective on employees' safety compliance and identify areas for improving safety behaviours.
