51.
Technology is a mobile and integral part of many workplaces, and computers and other information and communication technology have made many users' work lives easier, but technology can also contribute to problems in the cognitive work environment and, over time, create technostress. Much previous research on technostress has focused on the use of digital technology and its effects, measured by questionnaires, but to examine further how technostress arises in the modern workplace, a wider perspective on interactions between people and technology is needed. This paper applies a distributed cognition perspective to human–technology interaction, investigated through an observational field study. Distributed cognition focuses on the organisation of cognitive systems, and technostress in this perspective becomes an emergent phenomenon within a complex and dynamic socio-technical system. A well-established questionnaire was also used (for a limited sample) to provide a frame of reference for the results of the qualitative part of the study. The implication is that common questionnaire-based approaches can and should be complemented with a broader perspective when studying the causes of technostress. Based on the present study, a redefinition of technostress is also proposed.
52.
Collecting metadata on a family of programs is useful not only for generating statistical data on the programs but also for future re-engineering and reuse purposes. In this paper we discuss an industrial case in which a project library is used to store visual programs and a database to store the metadata on these programs. The visual language in question is a domain-specific language, Function Block Language (FBL), used in Metso Automation for writing automation control programs. For reuse, program analysis, and re-engineering activities, various data and program analysis methods are applied to study the FBL programs. Metadata stored in a database is used to provide advanced program analysis support; given the large number of programs, the metadata allows focusing the analysis on certain kinds of programs. In this paper, we discuss the role and usage of the metadata in program analysis techniques applied to FBL programs.
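To make the idea concrete, here is a minimal sketch of metadata-driven focusing of program analysis: cheap metadata queries narrow the set of programs before any expensive per-program analysis runs. The schema (a `programs` table with `block_count`, `io_count` and `category` columns) and the sample rows are hypothetical illustrations; the abstract does not describe Metso Automation's actual database.

```python
import sqlite3

# Hypothetical schema: one row of metadata per FBL program.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE programs (
        name        TEXT PRIMARY KEY,
        block_count INTEGER,   -- number of function blocks
        io_count    INTEGER,   -- number of I/O connections
        category    TEXT       -- e.g. 'motor_control', 'measurement'
    )
""")
conn.executemany(
    "INSERT INTO programs VALUES (?, ?, ?, ?)",
    [("P100", 42, 15, "motor_control"),
     ("P101", 310, 88, "measurement"),
     ("P102", 55, 20, "motor_control")],
)

# Focus an expensive analysis on one kind of program only:
# select small motor-control programs instead of scanning everything.
rows = conn.execute(
    "SELECT name FROM programs WHERE category = ? AND block_count < ?",
    ("motor_control", 100),
).fetchall()
for (name,) in rows:
    print("analyse", name)   # run the detailed FBL analysis here
```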
53.
Berven FS, Kroksveen AC, Berle M, Rajalahti T, Flikka K, Arneberg R, Myhr KM, Vedeler C, Kvalheim OM, Ulvik RJ 《Proteomics. Clinical Applications》 2007, 1(7): 699-711
Cerebrospinal fluid (CSF) is a prime source in the search for new biomarkers to improve the early diagnosis of neurological diseases. Standardization of pre-analytical sample handling is, however, important for obtaining acceptable analytical quality. In the present study, MALDI-TOF MS was used to examine the influence of pre-analytical sample procedures on the low molecular weight (MW) CSF proteome. Different storage conditions, such as temperature and duration, or the addition of as little as 0.2 µL blood/mL neat CSF caused significant changes in the mass spectra. Different types of MW cut-off spin cartridges from different suppliers, used to enrich the low MW CSF proteome, showed great variance in cut-off accuracy, stability and reproducibility. The described analytical method achieved a polypeptide discriminating limit of approximately 800 pM, two to three orders of magnitude lower than that reported for plasma. Based on this study, we recommend that CSF be centrifuged immediately after sampling and stored at −80 °C without addition of protease inhibitors. Guanidinium hydrochloride is preferred for breaking protein-protein interactions. A spin cartridge with a cut-off limit above the intended analytical mass range is recommended. Our study contributes to the important task of developing standardized pre-analytical protocols for the proteomic study of CSF.
54.
Shimba is a reverse engineering environment to support the understanding of Java software systems. Shimba integrates the Rigi and SCED tools to analyze and visualize the static and dynamic aspects of a subject system. The static software artifacts and their dependencies are extracted from Java bytecode and viewed as directed graphs using the Rigi reverse engineering environment. The run-time information is generated by running the target software under a customized SDK debugger. The generated information is viewed as sequence diagrams using the SCED tool. In SCED, statechart diagrams can be synthesized automatically from sequence diagrams, allowing the user to investigate the overall run-time behavior of objects in the target system. Shimba provides facilities to manage the different diagrams and to trace artifacts and relations across views. In Shimba, SCED sequence diagrams are used to slice the static dependency graphs produced by Rigi. In turn, Rigi graphs are used to guide the generation of SCED sequence diagrams and to raise their level of abstraction. We show how the information exchange among the views enables goal-driven reverse engineering tasks and aids the overall understanding of the target software system. The FUJABA software system serves as a case study to illustrate and validate the Shimba reverse engineering environment.
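As a toy illustration of the cross-view slicing described above (a sketch in the spirit of Shimba, not its actual code), the snippet below restricts a static dependency graph to the classes observed in one recorded run; all class names and dependencies are invented:

```python
# Static dependencies extracted from bytecode: caller -> callees.
static_deps = {
    "Parser":  {"Lexer", "ErrorLog"},
    "Lexer":   {"ErrorLog"},
    "Printer": {"ErrorLog"},
}

# Classes observed in one SCED sequence diagram (one debugged run).
trace = {"Parser", "Lexer", "ErrorLog"}

# The slice: restrict nodes and edges to what the run actually touched.
slice_deps = {
    caller: callees & trace
    for caller, callees in static_deps.items()
    if caller in trace
}
print(slice_deps)   # Parser -> {Lexer, ErrorLog}, Lexer -> {ErrorLog}
```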
55.
56.
Chemical composition, mass size distribution and source analysis of long-range transported wildfire smokes in Helsinki
Sillanpää M, Saarikoski S, Hillamo R, Pennanen A, Makkonen U, Spolnik Z, Van Grieken R, Koskentalo T, Salonen RO 《The Science of the Total Environment》 2005, 350(1-3): 119-135
Special episodes of long-range transported particulate matter (PM) air pollution were investigated in a one-month field campaign at an urban background site in Helsinki, Finland. A total of nine size-segregated PM samplings of 3- or 4-day duration were made between August 23 and September 23, 2002. During this warm and unusually dry period there were two sampling periods (labelled P2 and P5) in which the PM2.5 mass concentration increased remarkably. According to the hourly measured PM data and backward air mass trajectories, P2 (Aug 23-26) represented a single 64-h episode of long-range transported aerosol, whereas P5 (Sept 5-9) was a mixture of two episodes of 16 and 14 h and the usual seasonal air quality. The large chemical data set, based on analyses by ion chromatography, inductively coupled plasma mass spectrometry, X-ray fluorescence analysis and smoke stain reflectometry, demonstrated that the PM2.5 mass concentrations of biomass signatures (i.e. levoglucosan, oxalate and potassium) and of some other compounds associated with biomass combustion (succinate and malonate) increased remarkably in P2. Crustal elements (Fe, Al, Ca and Si) and unidentified matter, presumably consisting largely of organic material, were also increased in P2. The PM2.5 composition in P5 differed from that in P2, as the inorganic secondary aerosols (NO₃⁻, SO₄²⁻, NH₄⁺) and many metals reached their highest concentrations in this period. The water-soluble fractions of potassium, lead and manganese increased in both P2 and P5. Mass size distributions (0.035–10 µm) showed that a large accumulation mode mainly caused the episodically increased PM2.5 concentrations. An interesting observation was that the episodes had no obvious impact on the Aitken mode. Finally, the strongly increased concentrations of biomass signatures in the accumulation mode confirmed that the episode in P2 was due to long-range transported biomass combustion aerosol.
57.
Laila Stordrange, Tarja Rajalahti, Fred Olav Libnau 《Chemometrics and Intelligent Laboratory Systems》 2004, 70(2): 137-145
Multiway methods are tested for their ability to explore and model near-infrared (NIR) spectra from a pharmaceutical batch process. The study reveals that blocking data with nonlinear behaviour into a higher-order array can improve predictive ability. The variation at each control point is modelled independently, and the N-way techniques overcome the nonlinearity problem. Important issues such as variable selection and how to fill in missing values are discussed. Variable selection was shown to be essential for multiway modelling. For spectra not yet monitored, using mean spectra from the calibration set gave results close to the best obtained. Decomposing the spectra by N-way techniques gave additional information about the chemical system. Simulated data sets were used to support the results.
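A minimal sketch of the blocking and missing-value strategy described above, using NumPy on synthetic data (the dimensions and random spectra are illustrative assumptions, not the paper's data): batch spectra are blocked into a three-way array, and control points not yet monitored in a new batch are filled with mean spectra from the calibration set.

```python
import numpy as np

# Hypothetical dimensions: 10 calibration batches, 5 control points
# per batch, 200 NIR wavelengths after variable selection.
n_batch, n_point, n_wave = 10, 5, 200
rng = np.random.default_rng(0)

# Calibration spectra blocked into a three-way array
# (batch x control point x wavelength).
X = rng.normal(size=(n_batch, n_point, n_wave))

# A new batch monitored only up to control point 3: later points missing.
new_batch = np.full((n_point, n_wave), np.nan)
new_batch[:3] = rng.normal(size=(3, n_wave))

# Fill the not-yet-monitored control points with calibration means,
# the strategy the abstract reports as giving close to the best results.
cal_mean = X.mean(axis=0)                  # mean spectrum per control point
missing = np.isnan(new_batch).any(axis=1)  # which control points are empty
new_batch[missing] = cal_mean[missing]
```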
58.
59.
Tarja Rajalahti, Reidar Arneberg, Frode S. Berven, Kjell-Morten Myhr, Rune J. Ulvik, Olav M. Kvalheim 《Chemometrics and Intelligent Laboratory Systems》 2009, 95(1): 35-48
This work presents a new method for variable selection in complex spectral profiles. The method is validated by comparing samples of cerebrospinal fluid (CSF) with the same samples spiked with peptide and protein standards at different concentration levels. Partial least squares discriminant analysis (PLS-DA) attempts to separate two groups of samples by regressing on a y-vector of zeros and ones in the PLS decomposition. In most cases, several PLS components are needed to optimize the discrimination between groups, which complicates the interpretation of the model. By using the y-vector as a target, it is possible to transform the PLS components into a single predictive target-projected component, analogous to the predictive component in orthogonal partial least squares discriminant analysis (OPLS-DA). By calculating the ratio between the explained and residual variance of each spectral variable on the target-projected component, a selectivity ratio plot is obtained that can be used for variable selection. Used on whole mass spectral profiles of pure and spiked CSF, we detect peptides in the low molecular mass range (740–9000 Da) at least down to the 400 pM level without severe problems with false biomarker candidates. Similarly, we detect added proteins at least down to the 2 nM level in the medium mass range (6000–17,500 Da). Target projection represents the optimal way to fit a latent variable decomposition to a known target, but the selectivity ratio plot can be used with OPLS as well as other methods that produce a single predictive component. Comparison with some commonly used tools for variable selection shows that the selectivity ratio plot has the best performance. This is attributed to the fact that target projection utilizes both the predictive ability (regression coefficients) and the explanatory ability (spectral variance/covariance matrix) in the calculation of the selectivity ratio.
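The following sketch illustrates the core computation on synthetic data, with scikit-learn's PLSRegression standing in for the paper's PLS decomposition (the data, dimensions and use of scikit-learn are assumptions for illustration): the PLS regression vector defines a single target-projected component, and the selectivity ratio is the per-variable ratio of explained to residual variance on that component.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Synthetic two-group data: 40 spectra, 300 m/z variables; one variable
# carries the group difference, mimicking a spiked peptide.
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 300))
y = np.repeat([0.0, 1.0], 20)           # class labels: pure vs. spiked
X[y == 1, 50] += 2.0                    # variable 50 is the true marker

Xc = X - X.mean(axis=0)                 # column-centre, as PLS assumes
pls = PLSRegression(n_components=3).fit(Xc, y - y.mean())
b = np.ravel(pls.coef_)                 # PLS regression vector

# Target projection: collapse the PLS model into one predictive component.
w_tp = b / np.linalg.norm(b)
t_tp = Xc @ w_tp                        # target-projected scores
p_tp = Xc.T @ t_tp / (t_tp @ t_tp)      # target-projected loadings
X_tp = np.outer(t_tp, p_tp)             # explained part of X
E = Xc - X_tp                           # residual part of X

# Selectivity ratio: explained vs. residual variance per variable.
sr = (X_tp ** 2).sum(axis=0) / (E ** 2).sum(axis=0)
print(int(np.argmax(sr)))               # expected: 50, the spiked variable
```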
60.
Removal of soft deposits from the distribution system improves the drinking water quality
Deterioration in drinking water quality within distribution networks is a recognised problem in drinking water distribution. It can manifest as an increase in microbial numbers, an elevated iron concentration or increased turbidity, all of which affect the taste, odor and color of the drinking water. We studied whether pipe cleaning would improve the drinking water quality in pipelines. Cleaning was carried out by flushing the pipes with compressed air and water. Bacterial numbers, iron concentration and turbidity in drinking water were highest at 9 p.m., when water consumption peaked. Soft deposits inside the pipeline were occasionally released into the bulk water, increasing the concentrations of iron, bacteria, microbially available organic carbon and phosphorus in drinking water. Cleaning of the pipeline decreased the diurnal variation in drinking water quality. With respect to iron, only short-term positive effects were obtained. However, removal of the nutrient-rich soft deposits did decrease microbial growth in the distribution system during summer, when warm temperatures favored microbial growth. No Norwalk-like viruses or coliform bacteria were detected in the soft deposits, in contrast to the high numbers of heterotrophic bacteria.