Similar Documents
20 similar documents found (search time: 31 ms)
1.
The current research presents a theoretically sound model of the effects of the characteristics of information systems (IS) on the perception of end-users regarding computer self-efficacy and outcome expectations. The relationships among factors of small- and medium-sized enterprises in Taiwan are examined based on the IS success model and social cognitive theory. A mail survey was conducted, generating 284 usable responses with a total response rate of 51.64%. Structural equation modeling was employed to assess the relationships among related constructs. Data analysis shows that (1) no direct links exist between computer self-efficacy and either information quality or service quality, although certain effects are observable on system quality; (2) the relationships between outcome expectations and both system quality and service quality are significant; however, the relationship with information quality is insignificant; and (3) outcome expectations mediate the effects of computer self-efficacy on end-user satisfaction. The implications of the results are provided, and directions for future research are discussed in the study.
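To make the analysis concrete, here is a minimal sketch of the kind of structural equation model the paper estimates, written with the open-source semopy package. The construct names (InfoQual, SysQual, ServQual, CSE, OE, SAT) and the data file are hypothetical stand-ins, not the authors' actual instrument.

```python
# A minimal SEM sketch using the open-source semopy package
# (pip install semopy). Construct and column names are hypothetical.
import pandas as pd
import semopy

MODEL_DESC = """
CSE ~ InfoQual + SysQual + ServQual
OE  ~ InfoQual + SysQual + ServQual + CSE
SAT ~ CSE + OE
"""

df = pd.read_csv("survey_responses.csv")  # hypothetical 284-response file
model = semopy.Model(MODEL_DESC)
model.fit(df)                  # maximum-likelihood estimation by default
print(model.inspect())         # path coefficients, std. errors, p-values
```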

2.
In traditional debugging, fault localization usually serves as a step preceding program repair. Recently, a new debugging framework, unified debugging, has been proposed. Unlike the one-way connection from fault localization to program repair in traditional debugging, unified debugging establishes for the first time a bidirectional connection between localization and repair, improving the effectiveness of both areas simultaneously. As the first unified debugging technique, ProFL leverages the large volume of patch-execution information generated as a by-product of program repair to improve, in reverse, the existing…
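ProFL's own ranking algorithm is not reproduced in the abstract; as background, the following is a minimal sketch of the spectrum-based suspiciousness scoring (the Ochiai formula) that unified-debugging techniques refine with patch-execution feedback. The coverage data is hypothetical.

```python
# Spectrum-based fault localization with the Ochiai formula.
# coverage[test] = set of statement ids executed by that test.
import math

coverage = {"t1": {1, 2, 3}, "t2": {1, 3, 4}, "t3": {2, 3, 5}}
failed = {"t2"}  # names of failing tests

def ochiai(stmt):
    ef = sum(1 for t in failed if stmt in coverage[t])  # failing tests covering stmt
    ep = sum(1 for t in coverage
             if t not in failed and stmt in coverage[t])  # passing tests covering stmt
    denom = math.sqrt(len(failed) * (ef + ep))
    return ef / denom if denom else 0.0

for s in sorted(set().union(*coverage.values()), key=ochiai, reverse=True):
    print(s, round(ochiai(s), 3))  # higher score = more suspicious
```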

3.
This paper provides a taxonomy of secure software systems engineering (SSE) by surveying and organizing relevant SSE research and presents current trends in SSE, on-going challenges, and models for reasoning about threats and vulnerabilities. Several challenging questions related to risk assessment/mitigation (e.g., “what is the likelihood of attack”) as well as practical questions (e.g., “where do vulnerabilities originate” and “how can vulnerabilities be prevented”) are addressed.

4.
Selecting the appropriate method for determining the information requirements for an executive information system (EIS) depends on the characteristics of the users of the system (i.e., the executives), the potential applications of the system, and the environment in which the system will be used. IS managers charged with the task of developing an EIS can obtain these critical requirements by direct methods (e.g., participating in executive meetings) or by indirect means (e.g., using a software program to track executive use of the EIS), all of which are discussed in this article.

5.
We model the reliability allocation and prediction process across a hierarchical software system composed of modules, subsystems, and the system. We experiment in modeling complex reliability software systems using several software reliability models to test the feasibility of the process and to evaluate the accuracy of the models for this application. This is a subject deserving research and experimentation because this type of system is implemented in safety-critical projects, such as the National Aeronautics and Space Administration (NASA) flight software modules, which we use in our experiments. Given the reliability requirement of a software system in the software planning or design stage, we predict each module’s reliability and their relationships (e.g., reliability interactions among modules, subsystems, and system). Our critical interfaces and components are failure-mode sequences and the modules that comprise these sequences, respectively. In addition, we evaluate how sensitive the achievement of reliability goals is to predicted component reliabilities that do not meet expectations.
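As a minimal illustration of rolling reliability up a module/subsystem/system hierarchy, the sketch below assumes a simple serial architecture (every component must work); the numbers are hypothetical, and the paper's actual models are more sophisticated.

```python
# Serial roll-up of module reliabilities to subsystem and system level,
# plus a crude sensitivity check. All values are hypothetical.
from math import prod

subsystems = {
    "guidance":  [0.999, 0.997, 0.998],  # module reliabilities
    "telemetry": [0.995, 0.999],
}

sub_rel = {name: prod(mods) for name, mods in subsystems.items()}
system_rel = prod(sub_rel.values())
print(sub_rel, round(system_rel, 6))

# Sensitivity: in a serial model, a module falling short of its predicted
# reliability degrades the system by the same multiplicative factor.
degraded = system_rel * (0.995 / 0.997)   # one guidance module underperforms
print("system-level shortfall:", round(system_rel - degraded, 6))
```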

6.
DeLone & McLean (2003) propose an updated information systems (IS) success model and suggest that it can be extended to investigating e-commerce systems success. However, the updated IS success model has not been empirically validated in the context of e-commerce. Further, the existing IS/e-commerce success models have been subject to considerable debate on the ‘IS Use’ and ‘Perceived Usefulness’ constructs, and the nomological structure of the updated DeLone and McLean model is somewhat inconsistent with the IS acceptance and marketing literature. Based on the IS and marketing literature, this paper respecifies and validates a multidimensional model for assessing e-commerce systems success. The validated model consists of six dimensions: Information Quality, System Quality, Service Quality, Perceived Value, User Satisfaction and Intention to Reuse. Structural equation modelling techniques were applied to data collected by questionnaire from 240 users of e-commerce systems in Taiwan. The empirical evidence suggests that Intention to Reuse is affected by Perceived Value and User Satisfaction, which, in turn, are influenced by Information Quality, System Quality and Service Quality. The nomological structure of the respecified e-commerce systems success model concurs with that of the technology acceptance model (TAM) in the IS field and the consumer behaviour models in the traditional business-to-business and retail contexts. The findings of this study provide several important implications for research and practice. This paper concludes by discussing the contributions of this study and the limitations that could be addressed in future studies.

7.
We outline a Punctuated Socio-Technical Information System Change model. The model recognizes both incremental and punctuated socio-technical change in the context of information systems at multiple levels – the work system level, the building system level, and the organizational environment. It uses socio-technical event sequences and their properties to explain how a change outcome emerged. The critical events in these sequences correspond to gaps in socio-technical systems. By conceiving information system (IS) change as a multi-level and punctuated sequence of socio-technical events, IS researchers can construct plausible and accurate process explanations of IS change outcomes, including IS failures. Such explanations are located in the middle range and thus avoid the highly abstract and stylized closed-box factor models of change, but go beyond the idiographic open-box histories of singular change processes.

8.
Previous work in natural language generation has exploited discourse focus to guide the selection of propositional content and the generation of referring expressions (e.g., pronominalization, definite noun phrase generation). However, there are many other sources of contextual information which can be used to constrain linguistic realization. The realization of certain classes of temporal and spatial referents, for example, can be guided by more detailed models of time and space. Therefore, this article first identifies a number of contextual coordinates or points of reference (known as indexicals in linguistics). Next, we formalize three of these contextual coordinates – topic, time, and space – in a computational model. In doing so, the article describes the use of a Reichenbachian temporal model which is exploited to guide the realization of verb tense and aspect as well as the realization of temporal referents (e.g., temporal connectives and adverbials such as "meanwhile" and "ten minutes later"). The article then describes a spatial model which is used to guide the linguistic realization of spatial referents (e.g., "here," "there") and spatial adverbials (e.g., "two miles away"). We discuss the relation between these temporal and spatial models and illustrate their use in generating text from two different application systems. We find that the combined use of topical, temporal, and spatial contextual coordinates enhances the fluency, connectivity, and conciseness of the resulting text.
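The following is a minimal sketch of Reichenbach's three-point model (speech time S, reference time R, event time E) driving tense selection, in the spirit of the temporal model described above; the mapping covers only the basic English tenses, and the function name is illustrative.

```python
# Tense selection from Reichenbach's speech (S), reference (R), and
# event (E) times, given as comparable numbers on one timeline.
def choose_tense(E, R, S):
    if R < S and E < R:
        return "past perfect"     # E < R < S: "had left"
    if R < S and E == R:
        return "simple past"      # E = R < S: "left"
    if R == S and E < R:
        return "present perfect"  # E < R = S: "has left"
    if R == S and E == R:
        return "simple present"   # E = R = S: "leaves"
    if R > S:
        return "future"           # S < R: "will leave"
    return "unknown"

print(choose_tense(E=1, R=2, S=3))  # -> past perfect
print(choose_tense(E=2, R=2, S=3))  # -> simple past
```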

9.
If citizens’ behavior threatens to harm others or seems not to be in their own interest (e.g., risking severe head injuries by riding a motorcycle without a helmet), it is not uncommon for governments to attempt to change that behavior. Governmental policy makers can apply established tools from the governmental toolbox to this end (e.g., laws, regulations, incentives, and disincentives). Alternatively, they can employ new tools that capitalize on the wealth of knowledge about human behavior and behavior change that has been accumulated in the behavioral sciences (e.g., psychology and economics). Two contrasting approaches to behavior change are nudge policies and boost policies. These policies rest on fundamentally different research programs on bounded rationality, namely, the heuristics and biases program and the simple heuristics program, respectively. This article examines the policy–theory coherence of each approach. To this end, it identifies the necessary assumptions underlying each policy and analyzes to what extent these assumptions are implied by the theoretical commitments of the respective research program. Two key results of this analysis are that the two policy approaches rest on diverging assumptions and that both suffer from disconnects with the respective theoretical program, but to different degrees: Nudging appears to be more adversely affected than boosting does. The article concludes with a discussion of the limits of the chosen evaluative dimension, policy–theory coherence, and reviews some other benchmarks on which policy programs can be assessed.

10.
11.
In general, logic programs are undirected, i.e., there is no concept of “input” and “output” arguments to a procedure. An argument may be used either as an input or as an output argument, and programs may be executed either in a “forward” direction or in a “backward” direction. However, it is often the case that in a given program, a predicate is used with some of its arguments used consistently as input arguments and others as output arguments. Such mode information can be used by a compiler to effect various optimizations. This paper considers the problem of automatically inferring the modes of the predicates in a program. The dataflow analysis we use is more powerful than approaches relying on syntactic characteristics of programs. Our work differs from that of Mellish in that (1) we give a sound and efficient treatment of variable aliasing in mode inference; (2) by propagating instantiation information using state transformations rather than through dependencies between variables, we achieve greater precision in the treatment of unification, e.g. through =/2; and (3) we describe an efficient implementation based on the dynamic generation of customized mode interpreters. Several optimizations to improve the performance of the mode inference algorithm are described, as are various program optimizations based on mode information.
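As a toy illustration of mode inference, the sketch below propagates a two-value instantiation abstraction (ground/free) through a clause body; it deliberately omits the aliasing treatment that the paper identifies as essential.

```python
# Toy groundness propagation: each body call is abstracted as
# (output_var, input_vars); an output becomes ground iff all inputs are.
def propagate(bindings, body):
    for out_var, in_vars in body:
        if all(bindings.get(v) == "ground" for v in in_vars):
            bindings[out_var] = "ground"
    return bindings

# append(Xs, Ys, Zs) called with Xs and Ys ground: Zs is inferred ground,
# so append/3 has mode (input, input, output) at this call site.
env = {"Xs": "ground", "Ys": "ground", "Zs": "free"}
print(propagate(env, [("Zs", ["Xs", "Ys"])]))
```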

12.
Discovering colored Petri nets from event logs
Process-aware information systems typically log events (e.g., in transaction logs or audit trails) related to the actual execution of business processes. Analysis of these execution logs may reveal important knowledge that can help organizations to improve the quality of their services. Starting from a process model, which can be discovered by conventional process mining algorithms, we analyze how data attributes influence the choices made in the process based on past process executions using decision mining, also referred to as decision point analysis. In this paper we describe how the resulting model (including the discovered data dependencies) can be represented as a Colored Petri Net (CPN), and how further perspectives, such as the performance and organizational perspective, can be incorporated. We also present a CPN Tools Export plug-in implemented within the ProM framework. Using this plug-in, simulation models in ProM obtained via a combination of various process mining techniques can be exported to CPN Tools. We believe that the combination of automatic discovery of process models using ProM and the simulation capabilities of CPN Tools offers an innovative way to improve business processes. The discovered process model describes reality better than most hand-crafted simulation models. Moreover, the simulation models are constructed in such a way that it is easy to explore various redesigns. A. Rozinat’s research was supported by the IOP program of the Dutch Ministry of Economic Affairs. M. Song’s research was supported by the Technology Foundation STW.
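A minimal sketch of the decision-mining step, assuming a hypothetical claims process: a decision tree learned over case attributes recovers the condition governing an XOR split, which can then annotate the corresponding CPN transition as a guard.

```python
# Learn the branching condition at an XOR split from past instances.
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[100], [250], [900], [1200], [80], [3000]]  # claim_amount per instance
y = ["auto_approve", "auto_approve", "manual_review",
     "manual_review", "auto_approve", "manual_review"]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["claim_amount"]))
# The learned threshold becomes a guard on the matching CPN transition.
```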

13.
The present study sheds light on what employers believe to be important hard skills and soft skills for entry-level information systems (IS) positions. Results from a survey of N = 73 U.S.-based and international employers suggest that soft skills are significantly more important than hard skills for entry-level IS positions. The most important hard skills are Microsoft Office, database/data warehouse/SQL, and knowledge of security, while the most important soft skills are willingness to learn, critical thinking, and attitude. Employer characteristics (e.g., industry, size, location) do not affect the importance of hard skills and soft skills. Findings of this study are discussed in light of previous research. Results of this study can help to guide universities in IS curriculum development.

14.
Rules are an accepted means of representing knowledge for virtually every domain. Traditional machine learning methods derive rules by exploring sets of examples using statistical or information theoretic techniques. Alternatively, rules can be discovered through methods of Evolutionary Computation such as genetic algorithms and learning classifier systems. In recent years, new models of learning classifier systems have been developed which have resulted in successful applications in a wide variety of domains (e.g., autonomous robotics, classification, knowledge discovery, modeling). These models have led to a resurgence of this area, which for a certain period appeared almost at a dead end. This paper overviews the recent developments in learning classifier systems research, the new models, and the most interesting applications, suggesting some of the most relevant future research directions.

15.
Information systems (IS) research on user involvement has primarily theorized relationships between developers, managers and users in systems development. However, so far, marginal attention has been paid to differences in user involvement practices between information systems. This paper explores user involvement in developing mobile and temporarily interconnected systems (MTIS). We refer to MTIS as heterogeneous systems that rely on network technologies for increasing the ubiquity of information services for users on the move. Such systems are becoming increasingly important in leveraging, e.g., car infotainment, supply chain management and wireless e-commerce. With particular emphasis on the nature of MTIS and its implications for user involvement, the paper analyses the systems development process of an action research project. The findings suggest that user involvement practices need to be adapted to accommodate features of this class of systems. Being an early attempt to trace the implications of technology features such as use context switches and temporary system relationships, the paper contributes to the development of an updated theory of the user role in an era of increased system complexity and stakeholder ambiguity.

16.
Many scientists believe that all pulse-coupled neural networks are toy models that are far away from the biological reality. We show, however, that a huge class of biophysically detailed and biologically plausible neural-network models can be transformed into a canonical pulse-coupled form by a piece-wise continuous, possibly noninvertible, change of variables. Such transformations exist when a network satisfies a number of conditions; e.g., it is weakly connected; the neurons are Class 1 excitable (i.e., they can generate action potentials with an arbitrarily small frequency); and the synapses between neurons are conventional (i.e., axo-dendritic and axo-somatic). Thus, the difference between studying the pulse-coupled model and Hodgkin-Huxley-type neural networks is just a matter of a coordinate change. Therefore, any piece of information about the pulse-coupled model is valuable since it tells something about all weakly connected networks of Class 1 neurons. For example, we show that the pulse-coupled network of identical neurons does not synchronize in-phase. This confirms Ermentrout's (1996) result that weakly connected Class 1 neurons are difficult to synchronize, regardless of the equations that describe dynamics of each cell.
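For readers unfamiliar with the canonical form, the sketch below simulates the Ermentrout-Kopell theta neuron, the standard canonical phase model for Class 1 excitability; parameters are illustrative.

```python
# Euler simulation of the theta neuron: dθ/dt = (1 - cos θ) + (1 + cos θ)·I.
import math

def simulate_theta(I=0.01, dt=1e-3, steps=100_000):
    theta, spikes = -math.pi / 2, []
    for k in range(steps):
        theta += dt * ((1 - math.cos(theta)) + (1 + math.cos(theta)) * I)
        if theta >= math.pi:       # phase crosses pi: a spike
            spikes.append(k * dt)
            theta -= 2 * math.pi   # wrap around
    return spikes

spikes = simulate_theta()
if len(spikes) > 1:
    # For small I > 0 the firing rate scales as sqrt(I), i.e., spikes can be
    # arbitrarily infrequent -- the hallmark of Class 1 excitability.
    print("interspike interval:", round(spikes[1] - spikes[0], 2))
```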

17.
In a large network of computers or wireless sensors, each of the components (henceforth, peers) has some data about the global state of the system. Much of the system's functionality, such as message routing, information retrieval and load sharing, relies on modeling the global state. We refer to the outcome of the function (e.g., the load experienced by each peer) as the model of the system. Since the state of the system is constantly changing, it is necessary to keep the models up-to-date. Computing global data mining models (e.g., decision trees, k-means clustering) in large distributed systems may be very costly due to the scale of the system and due to communication cost, which may be high. The cost further increases in a dynamic scenario when the data changes rapidly. In this paper we describe a two-step approach for dealing with these costs. First, we describe a highly efficient local algorithm which can be used to monitor a wide class of data mining models. Then, we use this algorithm as a feedback loop for the monitoring of complex functions of the data such as its k-means clustering. The theoretical claims are corroborated with a thorough experimental analysis.
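A minimal sketch of the two-step idea under strong simplifications: a cheap local check at each peer triggers global k-means recomputation only when the current model no longer fits the local data. The violation condition, threshold, and data are hypothetical; the paper's local conditions are more refined.

```python
# Cheap local check per peer; recompute global k-means only on violation.
import numpy as np
from sklearn.cluster import KMeans

def local_violation(peer_data, centroids, eps=0.5):
    """True if this peer's local mean drifted too far from every centroid."""
    mean = peer_data.mean(axis=0)
    return np.linalg.norm(centroids - mean, axis=1).min() > eps

rng = np.random.default_rng(0)
peers = [rng.normal(loc=c, size=(50, 2)) for c in (0.0, 5.0)]  # two peers
model = KMeans(n_clusters=2, n_init=10).fit(np.vstack(peers))

# Feedback loop: an expensive global update runs only if some peer objects.
if any(local_violation(p, model.cluster_centers_) for p in peers):
    model = KMeans(n_clusters=2, n_init=10).fit(np.vstack(peers))
```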

18.
Virtual Symposium on Virtual Mind
When certain formal symbol systems (e.g., computer programs) are implemented as dynamic physical symbol systems (e.g., when they are run on a computer) their activity can be interpreted at higher levels (e.g., binary code can be interpreted as LISP, LISP code can be interpreted as English, and English can be interpreted as a meaningful conversation). These higher levels of interpretability are called ‘virtual’ systems. If such a virtual system is interpretable as if it had a mind, is such a ‘virtual mind’ real? This is the question addressed in this ‘virtual’ symposium, originally conducted electronically among four cognitive scientists. Donald Perlis, a computer scientist, argues that according to the computationalist thesis, virtual minds are real and hence Searle's Chinese Room Argument fails, because if Searle memorized and executed a program that could pass the Turing Test in Chinese he would have a second, virtual, Chinese-understanding mind of which he was unaware (as in multiple personality). Stevan Harnad, a psychologist, argues that Searle's Argument is valid, virtual minds are just hermeneutic overinterpretations, and symbols must be grounded in the real world of objects, not just the virtual world of interpretations. Computer scientist Patrick Hayes argues that Searle's Argument fails, but because Searle does not really implement the program: a real implementation must not be homuncular but mindless and mechanical, like a computer. Only then can it give rise to a mind at the virtual level. Philosopher Ned Block suggests that there is no reason a mindful implementation would not be a real one.

19.
In this paper, we introduce computer-aided microfluidics (CAMF), a process that allows the creation of complex microfluidic structures from concept to actual chip within a day. During design and testing of new microfluidic systems, rapid and frequent design modifications have to be carried out. For this purpose, a device using maskless projection lithography based on a digital mirror device (DMD) has been developed. Digital mask layouts may be created using any graphics program (Microsoft Paint, Adobe Photoshop) and can be used as such by the custom-written control software of the system. However, we suggest another approach: direct importing of three-dimensional digital computer-aided design (CAD) models from which mask information can be directly parsed. This process is advantageous as commercial 3D-CAD systems allow the rapid generation of static or parameterized models which can be used for computerized analyses such as flow simulation. After model validation, the mask information is extracted from these models and used directly by the lithography device. A chip or replication master is then created by means of lithography using curable monomers or resists such as Accura 60 or SU-8. With CAMF, the whole process from digital 3D model creation to actually running the experiment can be done within a day.
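As a minimal sketch of the mask-extraction step, the code below rasterizes a channel polygon (as might be parsed from a CAD model's 2D section) into a 1-bit bitmap sized for a DMD; resolution, geometry, and file names are hypothetical.

```python
# Rasterize a microchannel polygon into a 1-bit DMD mask with Pillow.
from PIL import Image, ImageDraw

DMD_W, DMD_H = 1024, 768                   # mirror-array resolution
mask = Image.new("1", (DMD_W, DMD_H), 0)   # all mirrors "off" (dark)
draw = ImageDraw.Draw(mask)

# A simple straight channel with a widened inlet, in pixel coordinates.
draw.polygon([(50, 350), (200, 350), (200, 370), (950, 370),
              (950, 430), (200, 430), (200, 450), (50, 450)], fill=1)

mask.save("mask_layer1.png")  # handed to the lithography control software
```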

20.
Business processes leave trails in a variety of data sources (e.g., audit trails, databases, and transaction logs). Hence, every process instance can be described by a trace, i.e., a sequence of events. Process mining techniques are able to extract knowledge from such traces and provide a welcome extension to the repertoire of business process analysis techniques. Recently, process mining techniques have been adopted in various commercial BPM systems (e.g., BPM|one, Futura Reflect, ARIS PPM, Fujitsu Interstage, Businesscape, Iontas PDF, and QPR PA). Unfortunately, traditional process discovery algorithms have problems dealing with less structured processes. The resulting models are difficult to comprehend or even misleading. Therefore, we propose a new approach based on trace alignment. The goal is to align traces in such a way that event logs can be explored easily. Trace alignment can be used to explore the process in the early stages of analysis and to answer specific questions in later stages of analysis. Hence, it complements existing process mining techniques focusing on discovery and conformance checking. The proposed techniques have been implemented as plugins in the ProM framework. We report the results of trace alignment on one synthetic and two real-life event logs, and show that trace alignment has significant promise in process diagnostic efforts.
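A minimal sketch of pairwise trace alignment via Needleman-Wunsch-style dynamic programming; the paper aligns many traces simultaneously, and the scoring here (match +1, mismatch/gap -1) is illustrative.

```python
# Global alignment score of two event traces (letters = activity names).
def align(a, b, gap=-1, match=1, mismatch=-1):
    m, n = len(a), len(b)
    score = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        score[i][0] = i * gap
    for j in range(1, n + 1):
        score[0][j] = j * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            diag = score[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            score[i][j] = max(diag, score[i-1][j] + gap, score[i][j-1] + gap)
    return score[m][n]

print(align("ABCDE", "ABDE"))  # -> 3 (four matches, one gap)
```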
