Similar Literature
20 similar documents found.
1.
Information systems (IS) evaluation is a thorny problem. In this paper we re-examine the area in light of recent developments in the field. Our examination begins with an example of a common contemporary IS assessment problem, viz. evaluating outsourcing. The example highlights the organizational and political issues that make evaluation fraught with difficulties. The paper argues that IS evaluation is a ‘necessary evil’, but that the context in which IS are developed and used has become much more demanding and complex. A conceptual framework, first proposed in an earlier paper as a way to classify the literature, is presented and brought up to date. The framework is then used to re-examine the outsourcing example, demonstrating its usefulness.

2.
Technological innovation increasingly requires operators in various applied settings to maintain vigilance for extended periods. However, standard psychometric tests typically predict less than 10% of performance variance. The present study (N = 462) aimed to apply the resource theory of sustained attention to construct a multivariate test battery for predicting battlefield vigilance. The battery included cognitive ability tests, a high-workload short vigilance task and subjective measures of stress response. Four versions of a 60-min simulated military battlefield monitoring task were constructed to represent different operational requirements. The test battery predicted 24–44% of criterion variance, depending on task version, suggesting that it may identify vigilant operators in military and other applied contexts. A multiple-groups path analysis showed that relationships between ability and vigilance were moderated by working memory demands. Findings are consistent with a diffuse theoretical concept of ‘resources’ in which performance energisation depends on multiple, loosely coupled processes.

3.
Serum, and the plasma from which serum is derived, represent a substantial challenge for proteomics owing to their complexity. A landmark plasma proteome study was initiated a decade ago by the Human Proteome Organization (HUPO) with the objective of examining the capabilities of existing technologies. Given the advances in proteomics and the continued interest in the plasma proteome, it is timely to reassess the depth and breadth of analysis of plasma that can be achieved with current methodology and instrumentation. A collaborative project to define the plasma proteome and its variation, with a plan to build a plasma proteome database, would likewise be timely.

4.
Manufacturing industries often rely on quality control staff to ensure mistakes are detected before products are shipped to customers. Undetected errors can result in large financial and environmental costs to packaging companies and supermarkets, but the contributors to such error are underexplored. The research reported in this paper investigated human error in the quality control checking of information displayed on the labels which accompany packaged fresh produce. Initial work sought to understand the demands of label-checking in the packhouse environment, through interviews with key quality control staff, in situ observations, and the study of historical error data held by a fresh produce packaging company. This study highlighted the dynamic and cognitively challenging environment in which label-checking occurred, while the historical error data indicated both the scale of the packhouse's work and the infrequency of error. In a separate strand of laboratory-based research, experienced and novice label-checkers were presented with a simulated label-checking task and a battery of computerized and pen-and-paper tests, administered to determine whether cognitive abilities could predict label-checking accuracy in a controlled laboratory environment. Stronger abilities in two cognitive processes (information processing speed and inhibition) predicted greater overall accuracy and higher detection of labeling errors. By identifying potential contributors to human error in the quality control checking of product labels, both in situ and in the laboratory, the results are relevant to manufacturing wherever information is printed on labels, especially when labeling processes depend upon human data entry and human quality control checking.

5.
Ergonomics, 2012, 55(7): 1051–1069
In this article, we offer a new, macroergonomics perspective on the long-debated issue of function allocation. We believe thinking in this domain needs to be realigned, moving away from the traditional microergonomics conceptualisation, concerned predominantly with task-based decisions, and towards a macroergonomics approach, viewing function allocation choices as central to effective systems design. We frame our arguments within a systems perspective, advocating that function allocation issues need to be on the agenda of all individuals with a wider interest in the human and organisational aspects of complex work systems, including people who commission, sponsor, design, implement and use such systems. We also argue that allocation decisions should form a transparent, explicit stage early in the systems design and development process, involve multiple stakeholders (including end-users), be evidence-based, framed within the language of risk and utilise iterative methods (e.g. scenario planning techniques).

Practitioner Summary: This article presents a macroergonomics approach to function allocation, advocating its importance in effective systems design. Adopting a systems mindset, we argue function allocation should form an explicit stage early in the design process, involve multiple stakeholders, be evidence-based, framed within the language of risk and utilise iterative methods.

6.
Any attempt to explain the mind by building machines with minds must confront the other-minds problem: How can we tell whether any body other than our own has a mind when the only way to know is by being the other body? In practice we all use some form of Turing Test: If it can do everything a body with a mind can do such that we can't tell them apart, we have no basis for doubting it has a mind. But what is “everything” a body with a mind can do? Turing's original “pen-pal” version of the Turing Test (the TT) only tested linguistic capacity, but Searle has shown that a mindless symbol-manipulator could pass the TT undetected. The Total Turing Test (TTT) calls instead for all of our linguistic and robotic capacities; immune to Searle's argument, it suggests how to ground a symbol manipulating system in the capacity to pick out the objects its symbols refer to. No Turing Test, however, can guarantee that a body has a mind. Worse, nothing in the explanation of its successful performance requires a model to have a mind at all. Minds are hence very different from the unobservables of physics (e.g., superstrings); and Turing Testing, though essential for machine-modeling the mind, can really only yield an explanation of the body.

7.
The paper is devoted to the optimization and post-buckling behavior of columns elastically supported at both ends. The unimodal solutions are analyzed, and it is shown that for nonzero support stiffnesses they are not optimal. The bimodal formulation of the problem is set up. By using analytical expressions for bimodal columns obtained earlier, the bimodal optimal solutions are integrated for different values of the support stiffnesses. With the assumption of geometrical nonlinearity, the post-buckling behavior of the bimodal optimal columns is studied. It is shown that the initial post-buckling behavior is governed by four supercritical solutions emanating from the trivial equilibrium state at the critical load. The stability of the new equilibrium states is investigated by using the second variation of the total potential energy. It is shown that only two post-buckling equilibrium states are stable while the other two are unstable, this conclusion being valid for all considered values of the support stiffnesses. An important limit case of a clamped–simply supported column that has caused debate in many publications is analyzed.
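For orientation, the stability test mentioned above can be sketched as follows. Taking the elastic supports as translational end springs of stiffness c1, c2 (the paper's exact support model and normalisation are assumptions of this sketch), the total potential energy of a column of bending stiffness EI(x) under axial load P, and the resulting stability criterion, take the familiar form:

```latex
\Pi[w] = \frac{1}{2}\int_0^L EI(x)\,(w'')^2\,dx
       - \frac{P}{2}\int_0^L (w')^2\,dx
       + \frac{c_1}{2}\,w(0)^2 + \frac{c_2}{2}\,w(L)^2 ,
\qquad
\text{stable} \;\Longleftrightarrow\; \delta^2\Pi[v] > 0 \ \ \forall\, v \neq 0 .
```

About the trivial state the second variation has the same quadratic form as Π itself; about a post-buckled state, additional terms generated by the geometric nonlinearity enter δ²Π, which is how the two stable and two unstable branches are distinguished.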

8.
Four years after the introduction of the first instrument for the measurement of sparkle, the foundations have been reconsidered, and the pool of practical experience has been analyzed to provide a more detailed and complete picture of the subject. The following aspects are introduced and discussed: observation conditions and the resulting requirements for imaging (sampling) and filtering; analysis of spatial periods and frequencies as a basis for filtering; spatial filtering concepts; sparkle in the frequency domain; sparkle evaluation based on the analysis of single images and difference images; origins of unwanted sparkle components; scaling and offset in sparkle evaluation; and verification of the method.
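As a rough illustration of the difference-image and spatial-filtering ideas listed above, the sketch below band-passes the difference of two registered images and reports the relative RMS of what survives. The band edges, normalisation and function name are illustrative assumptions, not the standardised procedure the paper discusses.

```python
import numpy as np

def sparkle_contrast(img_a, img_b, low=0.02, high=0.20):
    """Toy sparkle metric: band-pass the difference of two registered images
    of the same display captured under slightly different observation
    conditions, then report the relative RMS of the surviving texture.
    Band edges (cycles/pixel) and normalisation are illustrative only."""
    diff = img_a.astype(float) - img_b.astype(float)    # cancels static texture
    spec = np.fft.fftshift(np.fft.fft2(diff))
    fy = np.fft.fftshift(np.fft.fftfreq(diff.shape[0]))
    fx = np.fft.fftshift(np.fft.fftfreq(diff.shape[1]))
    r = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))   # radial frequency map
    spec[(r < low) | (r > high)] = 0.0                  # keep the sparkle band only
    band = np.fft.ifft2(np.fft.ifftshift(spec)).real
    return band.std() / max(float(img_b.mean()), 1e-9)  # relative RMS contrast
```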

9.
A wide range of user groups from policy makers to media commentators demand ever more spatially detailed information, yet the desired data are often not available at fine spatial scales. Increasingly, small area estimation (SAE) techniques are called upon to fill in these informational gaps by downscaling survey outcome variables of interest based on the relationships seen with key covariate data. In the process SAE techniques both rely extensively on small area Census data to enable their estimation and offer potential future substitute data sources in the event of Census data becoming unavailable. Whilst statistical approaches to SAE routinely incorporate intervals of uncertainty around central point estimates in order to indicate their likely accuracy, the continued absence of such intervals from spatial microsimulation SAE approaches severely limits their utility and arguably represents their key methodological weakness. This article presents an innovative approach to resolving this key methodological gap based on the estimation of the variance of the between-area error term from a multilevel regression specification of the constraint selection for iterative proportional fitting (IPF). The performance of the estimated credible intervals is validated against known Census data at the target small area and shows an extremely high level of performance. As well as offering an innovative solution to this long-standing methodological problem, it is hoped more broadly that the research will stimulate the spatial microsimulation community to adopt and build on these foundations, so that we can collectively move to a position where intervals of uncertainty are routinely delivered around spatial microsimulation small area point estimates.
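For readers unfamiliar with the IPF step that underpins this approach, a minimal two-dimensional sketch is shown below. The multilevel variance estimation that produces the credible intervals is the article's contribution and is not reproduced; the function name and example data are illustrative.

```python
import numpy as np

def ipf(seed, row_targets, col_targets, tol=1e-8, max_iter=1000):
    """Two-dimensional iterative proportional fitting: rescale a seed
    table until its margins match the small-area constraint totals.
    Seed cells must be positive wherever the targets are positive."""
    w = seed.astype(float).copy()
    for _ in range(max_iter):
        w *= (row_targets / w.sum(axis=1))[:, None]   # fit row margins
        w *= (col_targets / w.sum(axis=0))[None, :]   # fit column margins
        if np.allclose(w.sum(axis=1), row_targets, rtol=tol):
            return w
    return w

# Example: downscale a 2x3 survey cross-tabulation to small-area totals.
seed = np.array([[10., 20., 30.], [40., 50., 60.]])
table = ipf(seed, row_targets=np.array([80., 120.]),
            col_targets=np.array([60., 70., 70.]))
```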

10.
The main objective of feature selection is to improve learning performance by selecting concise and informative feature subsets, which presents a challenging task for machine learning or pattern recognition applications due to the large and complex search space involved. This paper provides an in-depth examination of nature-inspired metaheuristic methods for the feature selection problem, with a focus on representation and search algorithms, as they have drawn significant interest from the feature selection community due to their potential for global search and simplicity. An analysis of various advanced approach types, along with their advantages and disadvantages, is presented in this study, with the goal of highlighting important issues and unanswered questions in the literature. The article provides advice for conducting future research more effectively to benefit this field of study, including guidance on identifying appropriate approaches to use in different scenarios.
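A minimal sketch of the representation-plus-search pattern such surveys analyse: features are encoded as a binary mask and a simple (1+1) evolutionary search mutates it. The fitness callable is a placeholder for any learner-based score (e.g. cross-validated accuracy); real metaheuristics in this literature are population-based and considerably more elaborate.

```python
import numpy as np

def evolve_mask(X, y, fitness, n_iter=200, p_flip=None, rng=None):
    """(1+1) evolutionary search over binary feature masks -- the simplest
    instance of the representation+search scheme used by nature-inspired
    feature selection.  `fitness(X[:, mask], y)` is a placeholder score."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    p_flip = p_flip or 1.0 / d
    mask = rng.random(d) < 0.5
    best = fitness(X[:, mask], y)
    for _ in range(n_iter):
        child = mask ^ (rng.random(d) < p_flip)   # bit-flip mutation
        if not child.any():
            continue                              # keep at least one feature
        f = fitness(X[:, child], y)
        if f >= best:                             # greedy (1+1) selection
            mask, best = child, f
    return mask, best
```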

11.
To solve the flexible flow shop scheduling problem (FFSP), a self-adaptive elite bat algorithm (SEBA) based on an elite individual set is proposed. To address the standard bat algorithm's limitations in solving discrete problems, its tendency to become trapped in local optima, and its low optimization accuracy, the algorithm adopts ranked-order-value (ROV) encoding so that it can be applied to the discrete FFSP; it builds an elite individual set based on Hamming distance, in which several elite individuals with high fitness but low mutual similarity take turns guiding the evolution of the population, strengthening evolutionary vitality and preventing the search from falling into local optima; and it introduces a self-adaptive position-update mechanism to improve optimization accuracy. Finally, the improved algorithm was tested on standard benchmark instances of various sizes and compared with existing algorithms; the experimental results verify its effectiveness in solving the FFSP.
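Two of the ingredients described above can be sketched briefly; SEBA's velocity and position updates are not reproduced, and the function names are illustrative. ROV encoding turns a continuous bat position into a job permutation by ranking its components, and Hamming distance measures the dissimilarity used to keep the elite set diverse.

```python
import numpy as np

def rov_decode(position):
    """Ranked-order-value (ROV) encoding: the smallest component gets rank 0,
    the next rank 1, ..., turning a continuous position vector into a job
    permutation for the discrete FFSP."""
    return np.argsort(np.argsort(position))

def hamming(p, q):
    """Positionwise dissimilarity between two permutations, used to keep
    the elite set diverse (high fitness, low mutual similarity)."""
    return int(np.sum(p != q))

pos = np.array([0.7, -1.2, 0.1, 2.3])
print(rov_decode(pos))   # -> [2 0 1 3]
```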

12.
Recent developments in computing prompt us to take a new look at human intelligence. The prevailing occidental view of human intelligence is one-sided and logocentric, so the search for a more complete view is becoming more urgent. In this light, the specific strengths of so-called human information processing become evident in a new way. To substantiate this view, some elements of a phenomenological model of a dialectical coherence of human expressions of life are briefly outlined. The starting point is the everyday experience of constantly being confronted with contradictory situations. A model of polar, or dualistic, dialectic is proposed, which attempts to systematically establish the contradictions and contingencies of human life in theoretical structures. It is assumed that formal logic, strictly speaking, fails when applied to real situations of human interaction; instead, definite negations are involved. Four polarities, pairs of concepts which mutually depend on and definitely negate each other, are presented: process and structure; the individual and the general (societal); acting and imagining; and the subject-subject relation and subject-object relation (love and work). The latter gives rise to the dialectic of emotion and cognition, which plays a decisive role in any comparison of human and computer “intelligence”. To match these specifically human strengths, self-learning computers would have to be able to act and to love like human beings and to grow up in a society as people do. Furthermore, the learning process would have to implement the process-structure dialectic with its aspects of finality. In spite of the recent breathtaking achievements of computers, one cannot imagine how computers would ever achieve this. Computers are, admittedly, thought of as qualitatively different from machines by virtue of their complexity. But human abilities represent yet another level of complexity: the level of dialectic.

13.
This paper reports on an investigation of career anchors of women in the information technology (IT) workforce that was directed at enhancing within-gender theorising about career motivations of women in the IT profession. Our theoretical lens, the individual differences theory of gender and IT, enabled us to look more critically at how the effects of interventions are embedded in the range of women's career anchors that takes within-gender variation into account. The analysis demonstrates that organisational interventions must be flexible enough to account for the diversity and variation among women. Further, the analysis shows that it is necessary to move away from ‘one size fits all’ organisational interventions that often reflect stereotypes about women in the IT workforce.

14.
Soft lifting refers to the process whereby a legally licensed software program is installed or copied in violation of its licensing agreement. Previous research on this pervasive kind of unethical computer use has mainly focused on the determinants of this unethical act, which are rooted in personal, economic, technological, cultural, socio-political, or legal domains. However, little is known about the symbolic power that soft lifting has on the sense of self. Based on recent advances in behavioral priming, we hypothesized that soft lifting can influence the signals one sends to oneself; more specifically, soft lifting may prime individuals to experience an inauthentic sense of self, which, in turn, prompts further unethical behavior. In Study 1, we showed that participants, primed with the memory of a recent soft lifting experience, cheated more than participants recalling a recent experience of purchasing authentic software or than control participants. Moreover, feelings of inauthenticity mediated the priming effect of soft lifting on dishonest behavior. In Study 2, participants primed with soft lifting showed a greater willingness to purchase a wide range of counterfeit products over authentic products. Besides those antecedents or correlates of soft lifting already identified in the literature, educators should pay more attention to the negative impact of soft lifting on the self-images of users, which may go beyond computer-related behaviors. Priming may provide a new direction for HCI researchers to examine the impact of computer-use-related factors on users' perceptions, motivations, and behaviors.

15.
16.
Nurse scheduling is a critical issue in the management of emergency departments. Given the intense work environment, it is imperative to produce quality nurse schedules in the most cost- and time-effective way possible. To this end, a spreadsheet-based two-stage heuristic approach is proposed for the nurse scheduling problem (NSP) in a local emergency department. First, an initial schedule satisfying all hard constraints is generated by a simple shift assignment heuristic. Second, a sequential local search algorithm is employed to improve the initial schedule by taking soft constraints (nurse preferences) into account. The proposed approach is benchmarked against the existing approach and 0–1 programming. The contribution of this paper is twofold. First, it is one of the few studies in the nurse scheduling literature to use a heuristic approach to generate nurse schedules in an Excel spreadsheet, so users with little knowledge of linear programming or computer science can operate and change the scheduling algorithms easily. Second, while most studies on nurse scheduling are situated in hospitals, this paper attempts to bridge the research gap by investigating the NSP in the emergency department, where the scheduling rules are much more restrictive due to the intense and dynamic work environment. Overall, our approach generates satisfactory schedules with a higher level of user-friendliness, efficiency, and flexibility of rescheduling than both the existing approach and 0–1 programming.
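The two-stage structure can be sketched as follows. The hard rules, preference penalties and names here are placeholders, not the paper's spreadsheet implementation: `hard_ok` stands for the hard-constraint check and `soft_penalty` for the nurse-preference score.

```python
import random

def two_stage_schedule(nurses, days, shifts, hard_ok, soft_penalty, n_iter=5000):
    """Stage 1: greedily give each (day, shift) slot a nurse that keeps the
    schedule hard-feasible (slots with no feasible nurse stay unassigned).
    Stage 2: local search -- swap assignments, keeping moves that lower the
    soft-constraint (preference) penalty without breaking hard constraints."""
    schedule = {}
    for day in days:                                  # stage 1: initial feasible schedule
        for shift in shifts:
            # try least-loaded nurses first to balance assignments
            for nurse in sorted(nurses, key=lambda n: sum(v == n for v in schedule.values())):
                schedule[(day, shift)] = nurse
                if hard_ok(schedule):
                    break
                del schedule[(day, shift)]
    best = soft_penalty(schedule)
    for _ in range(n_iter):                           # stage 2: sequential local search
        a, b = random.sample(list(schedule), 2)
        schedule[a], schedule[b] = schedule[b], schedule[a]
        if hard_ok(schedule) and soft_penalty(schedule) < best:
            best = soft_penalty(schedule)
        else:
            schedule[a], schedule[b] = schedule[b], schedule[a]   # undo the swap
    return schedule, best
```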

17.
Options are designed to hedge against risks to their underlying assets such as stocks. One method of forming option-hedging portfolios is using stochastic programming models, which depend heavily on scenario generation, a challenging task. Another method is neutralizing the Greek risks derived from the Black–Scholes formula for pricing options. The formula expresses the option price as a function of the stock price, strike price, volatility, risk-free interest rate, and time to maturity; the Greek risks are the derivatives of the option price with respect to these variables. Hedging Greek risks requires no human intervention to generate scenarios. Linear programming models have been proposed for constructing option portfolios with neutralized risks and maximized investment profit. However, these models have problems. First, feasible solutions that perfectly neutralize the Greek risks might not exist. Second, models that involve multiple assets and their derivatives were incorrectly formulated. Finally, these models lack practicability because they consider no minimum transaction lots, and considering minimum transaction lots can exacerbate the infeasibility problem. These problems must be resolved before option hedging models can be applied further. This study presents a revised linear programming model for option portfolios with multiple underlying assets, and extends the model by incorporating a fuzzy goal programming method that considers minimum transaction lots. Numerical examples show that current models fail to obtain feasible solutions when minimum transaction lots are considered, whereas the proposed model solves such problems efficiently.
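For reference, the Greeks referred to above are the partial derivatives of the Black–Scholes price; a sketch for a European call is shown below. These are standard textbook formulas with zero dividend yield assumed; the paper's LP and fuzzy goal programming layers are not reproduced.

```python
from math import log, sqrt, exp, erf, pi

def bs_call_greeks(S, K, T, r, sigma):
    """Black-Scholes price and Greeks for a European call -- the
    sensitivities an option-hedging LP drives toward zero."""
    N   = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))    # standard normal CDF
    phi = lambda x: exp(-0.5 * x * x) / sqrt(2.0 * pi)  # standard normal PDF
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return {
        "price": S * N(d1) - K * exp(-r * T) * N(d2),
        "delta": N(d1),
        "gamma": phi(d1) / (S * sigma * sqrt(T)),
        "vega":  S * phi(d1) * sqrt(T),
        "theta": -S * phi(d1) * sigma / (2 * sqrt(T)) - r * K * exp(-r * T) * N(d2),
        "rho":   K * T * exp(-r * T) * N(d2),
    }

print(bs_call_greeks(S=100, K=100, T=0.5, r=0.02, sigma=0.25))
```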

18.
The assessment and selection of high-technology projects is a difficult decision-making process at the National Aeronautics and Space Administration (NASA). This difficulty is due to the multiple and often conflicting objectives, in addition to the inherent technical complexities and valuation uncertainties involved in the assessment process. As such, a systematic and transparent decision-making process is needed to guide the assessment, shape the decision outcomes and enable confident choices to be made. Various methods have been proposed to assess and select high-technology projects. However, applying these methods has become increasingly difficult in the space industry because of many emerging risks, meaning that decisions are subject to significant uncertainty. The source of uncertainty can be vagueness or ambiguity: while vague data are uncertain because they lack detail or precision, ambiguous data are uncertain because they are subject to multiple interpretations. We propose a data envelopment analysis (DEA) model that accommodates both ambiguity and vagueness. The vagueness of the objective functions is modeled by means of multi-objective fuzzy linear programming; the ambiguity of the input and output data is modeled with fuzzy sets and a new α-cut based method. The proposed models are linear, independent of α-cut variables, and capable of maximizing the satisfaction level of the fuzzy objectives and the efficiency scores simultaneously. Moreover, these models are capable of generating a common set of multipliers for all projects in a single run. A case study involving high-technology project selection at NASA is used to demonstrate the applicability of the proposed models and the efficacy of the procedures and algorithms.
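As background, below is a sketch of the crisp input-oriented CCR multiplier model that fuzzy DEA formulations of this kind extend. The data and function name are hypothetical, and the paper's fuzzy multi-objective and α-cut machinery is not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR DEA multiplier model for unit j0:
    maximise u.y_j0 subject to v.x_j0 = 1 and u.Y_j - v.X_j <= 0 for all j,
    with nonnegative multipliers v (inputs) and u (outputs)."""
    n, m = X.shape                 # n units, m inputs
    _, s = Y.shape                 # s outputs
    c = np.concatenate([np.zeros(m), -Y[j0]])             # minimise -u.y_j0
    A_ub = np.hstack([-X, Y])                             # u.Y_j - v.X_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([X[j0], np.zeros(s)])[None, :]  # v.x_j0 = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (m + s), method="highs")
    return -res.fun

# Three hypothetical projects, two inputs (cost, risk), one output (value).
X = np.array([[2., 1.], [3., 2.], [4., 1.]])
Y = np.array([[3.], [4.], [5.]])
print([round(ccr_efficiency(X, Y, j), 3) for j in range(3)])
```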

19.
A variational way of deriving the relevant parameters of a cellular neural network (CNN) is introduced. The approach exploits the CNN's spontaneous internal-energy decrease and is applicable whenever a given problem can be expressed as an optimisation task. The presented approach is fully mathematical, in contrast with the typical heuristic search for correct parameters in the CNN literature. The method is employed in practice to recover information on the three-dimensional structure of the environment through the stereo vision problem: a CNN able to find the conjugate points in a stereogram is fully derived in the proposed framework. Results of computer simulations on several test cases are provided. Received: 1 August 1997 / Accepted: 29 September 1999
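The "spontaneous internal-energy decrease" referred to is the Lyapunov function of the Chua–Yang CNN, which (in standard notation, with y the cell outputs, u the inputs, A and B the feedback and control templates, and I the bias; the exact form is assumed here, not taken from the paper) is non-increasing along trajectories:

```latex
E(t) = -\frac{1}{2}\sum_{(i,j)}\sum_{(k,l)} A(i,j;k,l)\,y_{ij}y_{kl}
       + \frac{1}{2R_x}\sum_{(i,j)} y_{ij}^{2}
       - \sum_{(i,j)}\sum_{(k,l)} B(i,j;k,l)\,y_{ij}u_{kl}
       - \sum_{(i,j)} I\,y_{ij} .
```

Matching this energy, term by term, to the cost function of an optimisation task (here, a matching cost for conjugate points in the stereogram) yields the templates A, B and bias I, which is the kind of variational derivation the paper formalises.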

20.
This paper presents the formulation of a combinatorial optimization problem with the following characteristics: (i) the search space is the power set of a finite set, structured as a Boolean lattice; (ii) the cost function forms a U-shaped curve when applied to any lattice chain. This formulation applies to feature selection in the context of pattern recognition. The known approaches to this problem are branch-and-bound algorithms and heuristics that explore the search space only partially. Branch-and-bound algorithms are equivalent to the full search, while heuristics are not. This paper presents a branch-and-bound algorithm that differs from the others known by exploiting the lattice structure and the U-shaped chain curves of the search space. The main contribution of this paper is the architecture of this algorithm, which is based on the representation and exploration of the search space by new lattice properties proven here. Several experiments with well-known public data indicate the superiority of the proposed method to sequential floating forward selection (SFFS), a popular heuristic that gives good results in very short computational time. In all experiments, the proposed method achieved better or equal results in similar or even shorter computational time.
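A minimal sketch of how the U-shaped chain property can prune a lattice search: walking down a chain (removing one feature at a time), once the cost starts rising the rest of that chain cannot improve, so the branch is cut. The cost function and names are placeholders; this illustrates the pruning idea, not the paper's full architecture.

```python
def u_curve_bb(features, cost):
    """Pruned depth-first search over the Boolean lattice of feature
    subsets.  Under the U-shaped chain assumption, a child (one feature
    removed) whose cost exceeds its parent's lies on the rising arm of
    the U-curve, so the chain below it is not explored."""
    full = frozenset(features)
    best_set, best_cost = full, cost(full)
    stack, seen = [full], {full}
    while stack:
        node = stack.pop()
        c = cost(node)
        if c < best_cost:
            best_set, best_cost = node, c
        for f in node:                    # children: one feature removed
            child = node - {f}
            if child in seen:
                continue
            seen.add(child)
            if cost(child) <= c:          # still descending: keep branching
                stack.append(child)
    return best_set, best_cost

# Toy cost: U-shaped in subset size, preferring the pair {0, 2}.
toy = lambda s: (len(s) - 2) ** 2 + (0 if {0, 2} <= s or len(s) < 2 else 0.5)
print(u_curve_bb(range(4), toy))          # -> (frozenset({0, 2}), 0)
```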
