Similar Articles
 20 similar articles found; search took 14 ms
1.
The business value of investments in Information Systems (IS) has been, and is predicted to remain, one of the major research topics for IS researchers. While the vast majority of research papers on IS business value find empirical evidence in favour of both the operational and strategic relevance of IS, the fundamental question of the causal relationship between IS investments and business value remains partly unexplained. Three research tasks are essential prerequisites on the path towards addressing this epistemological question: synthesising existing knowledge, identifying gaps in that knowledge, and proposing paths for closing those gaps. This paper addresses each of these tasks. Research findings include that correlations between IS investments and productivity vary widely among companies, and that the mismeasurement of IS investment impact may be rooted in delayed effects. Key limitations of current research stem from the ambiguity and fuzziness of IS business value, the neglected disaggregation of IS investments, and the unexplained process of creating internal and competitive value. To address these limitations, we suggest research paths such as identifying synergy opportunities among IS assets and explaining the relationship between IS innovation and change in IS capabilities.

2.
RFID and privacy: what consumers really want and fear   (Total citations: 1; self-citations: 0; citations by others: 1)
This article investigates the tension between the user benefits of item-level radio frequency identification (RFID) tagging and the desire for privacy. It distinguishes three feasible approaches to addressing consumer privacy concerns. The first is to kill RFID tags at store exits. The second is to lock tags and have users unlock them if they want to initiate reader communication (user model). The third is to let the network access users' RFID tags while adhering to a privacy protocol (network model). The article compares future users' perceptions of, and reactions to, these three privacy-enhancing technologies (PETs) and attempts to understand the reasoning behind their preferences. The main conclusion is that users do not trust complex PETs as they are envisioned today. Instead, they prefer to kill RFID chips at store exits, even if they appreciate after-sales services. Enhancing trust through security and privacy 'visibility', as well as PET simplicity, may be the road to take for PET engineers in UbiComp.

3.
We humans usually think in words; to represent our opinion about, e.g., the size of an object, it is sufficient to pick one of the few (say, five) words used to describe size ("tiny," "small," "medium," etc.). Indicating which of 5 words we have chosen takes 3 bits. However, modern computer representations of uncertainty use real numbers to represent this "fuzziness." A real number takes roughly 10 times more memory to store, and therefore processing it takes roughly 10 times longer than it should. Therefore, for computers to reach the ability of the human brain, Zadeh proposed to represent and process uncertainty in the computer by storing and processing the very words that humans use, without translating them into real numbers (he called this idea granularity). If we try to define operations with words, we run into the following problem: e.g., if we define "tiny" + "tiny" as "tiny," then we must accept the counter-intuitive conclusion that the sum of any number of tiny objects is also tiny; if we define "tiny" + "tiny" as "small," we may be overestimating the size. To overcome this problem, we suggest using nondeterministic (probabilistic) operations with words. For example, in the above case, "tiny" + "tiny" is, with some probability, equal to "tiny," and with some other probability, equal to "small." We also analyze the advantages and disadvantages of this approach. The main advantage is that we now have granularity and can thus speed up the processing of uncertainty. The main disadvantage is that in some cases, when defining symmetric associative operations for the set of words, we must give up either symmetry or associativity. Luckily, this sacrifice is not always required: in some cases, we can define symmetric associative operations. © 1997 John Wiley & Sons, Inc.
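The probabilistic word operation described in this abstract can be sketched in a few lines. This is only an illustrative toy, not the authors' formal construction: the five-word scale, the "round up with probability p" rule, and the probability value are all assumptions made here for concreteness.

```python
import random

# Hypothetical ordered granule scale, smallest to largest.
WORDS = ["tiny", "small", "medium", "large", "huge"]

def add_words(a, b, p=0.5, rng=random):
    """Nondeterministic 'sum' of two size words.

    Deterministically take the larger of the two words, then, with
    probability p, promote the result to the next word on the scale
    (clamped at the top). This avoids both 'tiny + tiny is always
    tiny' and systematic overestimation.
    """
    i, j = WORDS.index(a), WORDS.index(b)
    base = max(i, j)                          # deterministic part
    if rng.random() < p:                      # probabilistic promotion
        base = min(base + 1, len(WORDS) - 1)  # clamp at "huge"
    return WORDS[base]
```

With p=0 the operation degenerates to "tiny + tiny = tiny"; with p=1 it always rounds up; intermediate p gives the mixed behavior the abstract describes.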

4.
Online social networks have become an essential part of social and work life. They enable users to share, discuss, and create content together with various others. Obviously, not all content is meant to be seen by all. It is extremely important to ensure that content is shown only to those approved by the content's owner, so that the owner's privacy is preserved. Generally, online social networks promise to preserve privacy through privacy agreements, yet new privacy leakages still occur every day. Ideally, online social networks should be able to manage and maintain their agreements through well-founded methods. However, the dynamic nature of online social networks makes it difficult to keep private information contained. We have developed PROTOSS, a run-time tool for detecting and predicting PRivacy viOlaTions in Online Social networkS. PROTOSS captures relations among users and their privacy agreements with an online social network operator, as well as domain-based semantic information and rules. It uses model checking to detect whether relations among users will result in the violation of privacy agreements. It can further use the semantic information to infer possible violations that have not been specified explicitly by the user. In addition to detection, PROTOSS can predict possible future violations by feeding in a hypothetical future world state. Through a running example, we show that PROTOSS can detect and predict subtle leakages, similar to those reported in real-life examples. We study the performance of our system on this scenario as well as on an existing Facebook dataset.
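The core idea of checking whether re-sharing along social relations can carry content past an owner's agreed audience can be illustrated with a toy graph-reachability check. Note the hedge: PROTOSS itself uses model checking and semantic rules; the sketch below substitutes a simple reachability computation, and the relation graph, the `max_distance` agreement, and all names are invented here for illustration.

```python
# Toy social graph: who is directly connected to whom.
relations = {"alice": {"bob"}, "bob": {"carol"}}

# Alice's (hypothetical) agreement: content may reach direct
# friends only, i.e. users at distance 1.
agreements = {"alice": {"max_distance": 1}}

def audience(owner, reshares):
    """All users who can see owner's content after `reshares` re-shares."""
    seen = set(relations.get(owner, set()))
    for _ in range(reshares):
        seen |= {v for u in seen for v in relations.get(u, set())}
    return seen - {owner}

def violators(owner, reshares):
    """Users who see the content despite being outside the agreed audience."""
    # max_distance 1 means direct friends, i.e. zero re-shares allowed.
    allowed = audience(owner, agreements[owner]["max_distance"] - 1)
    return audience(owner, reshares) - allowed
```

In this toy world, one re-share by bob leaks alice's content to carol, who is outside alice's agreed audience; that is the kind of subtle transitive violation the tool is designed to catch.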

5.
6.
7.
Fifteen years ago, research started on SQL-Tutor, the first constraint-based tutor. The initial efforts focused on evaluating Constraint-Based Modeling (CBM): its effectiveness and its applicability to various instructional domains. Since then, we have extended CBM in a number of ways and developed many constraint-based tutors. Our tutors teach both well- and ill-defined domains and tasks, and address both domain-level and meta-level skills. We have supported mainly individual learning, but also the acquisition of collaborative skills. Authoring support for constraint-based tutors is now available, as are mature, well-tested deployment environments. Our current research focuses on building affect-sensitive and motivational tutors. Over this period of fifteen years, CBM has progressed from a theoretical idea to a mature, reliable methodology for developing effective tutors.

8.
9.
《Information & Management》2019,56(4):476-492
Human Flesh Search (HFS) has proliferated profusely in recent years. We adopted the multiple-case, replication-design approach to investigate 21 HFS cases spanning 2007 to 2014, comprising 204,008 posts from 67,375 posters. Our preliminary analyses revealed consistent patterns across the legal, moral, and entertainment categories. We established three contagious mechanisms driving HFS behavior (fueled emotionalism, blind acceptance, and collective amnesia) and two non-contagious mechanisms (isolated dissension and sluggish updating). We also shed light on technology affordances and constraints. Critical theoretical contributions and important practical implications for HFS initiators, facilitators, and forum designers are discussed.

10.
Bach  J. 《Software, IEEE》1995,12(2):96-98
All software managers face three P's when software is to be built: people, problem, and process. Should each of the three P's be given equal weight? Should one be elevated to a more important role than the others? Are managers focusing too much attention on the wrong P? The author offers answers to these questions and argues in favour of software heroism.

11.
Antipatterns are poor solutions to recurring problems; for example, Brown et al. and Fowler catalog practices concerning poor design or implementation solutions. We also know that the source code lexicon is among the factors affecting the psychological complexity of a program, i.e., the factors that make a program difficult for humans to understand and maintain. The aim of this work is to identify recurring poor practices related to inconsistencies among the naming, documentation, and implementation of an entity, called Linguistic Antipatterns (LAs), that may impair program understanding. To this end, we first mine examples of such inconsistencies in real open-source projects and abstract them into a catalog of 17 recurring LAs related to methods and attributes. Then, to understand the relevance of LAs, we perform two empirical studies with developers: 30 external (i.e., not familiar with the code) and 14 internal (i.e., people developing or maintaining the code). Results indicate that the majority of participants (69% of external and 51% of internal developers) perceive LAs as poor practices that should be avoided. As further evidence of LAs' validity, open-source developers who were made aware of LAs reacted by making code changes in 10% of the cases. Finally, to facilitate the use of LAs in practice, we identified a subset of LAs that were universally agreed upon as problematic: those with a clear dissonance between code behavior and lexicon.
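To make the notion of a naming/behavior mismatch concrete, here are two invented examples of the kind of inconsistency the catalog describes (the class and methods are hypothetical and not drawn from the paper's subject systems):

```python
class Order:
    """A toy class containing two deliberate linguistic antipatterns."""

    def __init__(self):
        self._items = []
        self._discount = 0.0

    # Antipattern: a name starting with "is" promises a boolean,
    # but the method actually returns a status string.
    def is_valid(self):
        return "VALID" if self._items else "EMPTY"

    # Antipattern: a "getter" that silently mutates state;
    # callers expect get_* to be side-effect free.
    def get_total(self):
        self._discount = 0.10  # hidden side effect behind "get"
        return sum(self._items) * (1 - self._discount)
```

A reader who trusts the lexicon will write `if order.is_valid():` (always true, since non-empty strings are truthy) or call `get_total()` assuming it changes nothing, which is exactly how such dissonance impairs program understanding.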

12.
13.
Forking is the creation of a new software repository by copying another repository. Though forking is controversial in the traditional open source software (OSS) community, it is encouraged, and indeed a built-in feature, in GitHub. Developers freely fork repositories, use the code as their own, and make changes. A deep understanding of repository forking can provide important insights for the OSS community and GitHub. In this paper, we explore why and how developers fork what from whom in GitHub. We collected a dataset containing 236,344 developers and 1,841,324 forks, conducted surveys, and analyzed the programming languages and owners of forked repositories. Our main observations are: (1) Developers fork repositories to submit pull requests, fix bugs, add new features, keep copies, etc. Developers find repositories to fork from various sources: search engines, external sites (e.g., Twitter, Reddit), social relationships, etc. More than 42% of the developers we surveyed agree that an automated recommendation tool would help them pick repositories to fork, while more than 44.4% do not value such a tool. Developers care about repository owners when they fork repositories. (2) A repository written in a developer's preferred programming language is more likely to be forked. (3) Developers mostly fork repositories from their creators. Compared with unattractive repository owners, attractive repository owners have a higher percentage of organizations, more followers, and earlier registration in GitHub. Our results show that forking is mainly used for making contributions to original repositories and is beneficial for the OSS community. Moreover, our results show the value of recommendation and provide important insights for GitHub in recommending repositories.

14.
15.

Employing ICT platforms has the potential to improve efforts to assist displaced people, to empower them to better help each other, or both. While platform development has resulted in a patchwork of initiatives – an electronic version of 'letting a thousand flowers bloom' – patterns are emerging as to which flowers grow and have 'staying power' compared with those that wilt and die. Using a partial application of grounded theory, we analyze 47 platforms, categorizing the services they provide, the functionalities they use, and the extent to which end users are involved in initial design and ongoing modification. We found that 23% offer one-way communication, 72% provide two-way communication, 74% involve crowdsourcing, and 43% use artificial intelligence. For future developers, we offer a preliminary list of what leads to a successful ICT initiative for refugees and migrants. Finally, we list ethical considerations for all stakeholders.

16.
Beacons are primarily radio, ultrasonic, optical, laser, or other types of signals that indicate the proximity or location of a device or its readiness to perform a task. Beacon signals also carry several critical, constantly changing parameters, such as power-supply information, relative address, location, timestamp, signal strength, available bandwidth resources, temperature, and pressure. Although transparent to the user community, beacon signals have made wireless systems more intelligent and human-like. They are an integral part of numerous scientific and commercial applications, ranging from mobile networks to search-and-rescue operations and location-tracking systems. Beacon signals help synchronize, coordinate, and manage electronic resources using minuscule bandwidth. Researchers continue to improve their functionality by increasing signal coverage while optimizing energy consumption. Beacon signals' imperceptibility and their usefulness in minimizing communication delays and interference are spurring exploratory efforts in many domains, ranging from the home to outer space.
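The parameter list above can be pictured as a small beacon payload. This sketch is purely illustrative: the field names, units, and freshness rule are assumptions made here, not any standard's frame format.

```python
from dataclasses import dataclass
import time

@dataclass
class BeaconFrame:
    """Hypothetical beacon payload bundling the changing parameters
    mentioned above (power supply, address, time, signal, etc.)."""
    relative_address: int        # short address within the local network
    timestamp: float             # seconds since the epoch, set by sender
    battery_mv: int              # power-supply information, millivolts
    rssi_dbm: int                # received signal strength, dBm
    free_bandwidth_kbps: int     # available bandwidth resources
    temperature_c: float
    pressure_hpa: float

    def is_fresh(self, max_age_s: float = 1.0) -> bool:
        """Receivers typically discard stale beacons; a simple age check."""
        return time.time() - self.timestamp <= max_age_s
```

Because the frame carries its own timestamp, receivers can use it both to synchronize with the sender and to drop beacons that arrive too late to be useful.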

17.
《IT Professional》2004,6(3):51-57
As the software market grows more competitive, companies with difficult-to-use products face higher customer-support costs as they attempt to rework user interfaces to fix usability problems. It is time software designers tried reversing the direction - gather user requirements, design the user interaction with the product's features, and then code the features - to produce software from the user's perspective and logic.

18.
19.
This paper presents an analysis of human reaching movements in the manipulation of flexible objects. To predict the trajectory of the human hand, a minimum crackle criterion was recently introduced in the literature. A different approach is explored in this paper: to explain trajectory formation, we resort to the minimum hand jerk criterion. First, we show that this criterion matches experimental data available in the literature well. Next, we argue that, contrary to the minimum crackle criterion, the minimum hand jerk criterion produces bounded hand velocity profiles for multimass flexible objects. Finally, we present initial experimental results confirming the applicability of the minimum hand jerk criterion to the prediction of reaching movements with multimass objects.
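For reference, the cost functional typically minimized under a jerk criterion penalizes the squared third time derivative of the hand position $x_h(t)$ over the movement duration $T$ (this is the classic formulation; the paper's multimass variant may add terms for the object dynamics):

```latex
J_{\mathrm{jerk}} = \frac{1}{2} \int_{0}^{T} \left( \frac{d^{3} x_{h}(t)}{dt^{3}} \right)^{2} dt
```

The minimum crackle criterion replaces the third derivative with the fifth, $J_{\mathrm{crackle}} = \frac{1}{2} \int_{0}^{T} \big( \frac{d^{5} x_{h}(t)}{dt^{5}} \big)^{2} dt$; the abstract's claim is that minimizing jerk, unlike crackle, keeps the resulting hand velocity profiles bounded for multimass flexible objects.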

20.
Various vendors claim to have solved the online piracy problem in ways that can enable the safe sale of information to Web users. The technologies they sell include watermarking, hardware key "dongles," encryption, and monitoring. The place where people most worry about digital rights management's (DRM) impact is existing Web sites that provide free information. Whatever benefits it might bring, DRM software's downside will undoubtedly include the collection of large amounts of user data that could prove valuable to marketers, snoopers, and others besides those who sell online content.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)