Similar documents
 20 similar documents found; search time 390 ms
1.
Abstract— Small‐form‐factor liquid‐crystal displays (LCDs) are mainly used in mobile applications (e.g., mobile phones, PDAs, and portable game consoles) but also in digital still cameras, video cameras, automotive applications, etc. Like all active‐matrix LCDs, mobile displays suffer from motion blur caused by the sample‐and‐hold effect. One option for improving the motion portrayal on active‐matrix LCDs is the use of a scanning backlight, which results in an imaging behavior similar to the one present in impulsive displays. In this paper, the realization of a scanning backlight for mobile displays is reported. This employs a backlight with seven individually lit segments for reducing the motion blur. Results of perception experiments performed with two identical displays confirm the benefit of using this technology. Optimal driving conditions result in a major improvement in motion portrayal on mobile LCDs.

2.
More than 50 years ago, information technology (IT) began to change society, the economy, and industries worldwide. This change has included waves of technological disruption that have been exploited by entrepreneurial actors who seize the associated new opportunities. Research on related phenomena is spread across different disciplines. Recently, there have been calls for further research on the marriage of information systems (IS) and entrepreneurship. We review 292 articles in the IS, entrepreneurship, and general and strategic management literature to create an overview of the IT‐associated entrepreneurship research landscape. On the basis of that review, we elaborate on the different roles that IT can assume to support entrepreneurial operations and value creation in these settings. Our findings suggest that IT plays four major roles in entrepreneurial operations: as a facilitator, making the operations of start‐ups easier; as a mediator for new ventures' operations; as an outcome of entrepreneurial operations; and as a ubiquity, becoming the business model itself. Leveraging these roles of IT, we develop a set of definitions to clear up definition uncertainties surrounding IT‐associated new ventures such as digital start‐ups and digital business models. We also outline a research agenda for IT‐associated entrepreneurship research based on identified roles, types, and gaps.

3.
Mobile text messaging is one of the world's most popular asynchronous communication tools, but few empirical studies have examined users' abilities and attitudes toward such technologies. The study employs two distinct, yet complementary, expectancy‐based constructs (i.e., self‐efficacy and locus of control) to predict anxiety and attitude valence toward mobile text messaging. Survey data collected from text messaging users show that the attitude toward text messaging behaviors can be examined through users' beliefs in their competence and sense of control. Results indicate that enhancing users' ability and their sense of personal control can further the use of future mobile text‐based applications and services. These findings suggest that future research should consider incorporating these variables into existing information technology adoption frameworks.

4.
5.
This paper interrogates the currently pervasive discourse of the ‘net generation’ finding the concept of the ‘digital native’ especially problematic, both empirically and conceptually. We draw on a research project of South African higher education students' access to and use of Information and Communication Technologies (ICTs) to show that age is not a determining factor in students' digital lives; rather, their familiarity and experience using ICTs is more relevant. We also demonstrate that the notion of a generation of ‘digital natives’ is inaccurate: those with such attributes are effectively a digital elite. Instead of a new net generation growing up to replace an older analogue generation, there is a deepening digital divide in South Africa characterized not by age but by access and opportunity; indeed, digital apartheid is alive and well. We suggest that the possibility for digital democracy does exist in the form of a mobile society which is not age specific, and which is ubiquitous. Finally, we propose redefining the concepts ‘digital’, ‘net’, ‘native’, and ‘generation’ in favour of reclaiming the term ‘digitizen’.

6.
This paper aims to investigate the input‐to‐state exponents (IS‐e) and the related input‐to‐state stability (ISS) for delayed discrete‐time systems (DDSs). By using the method of variation of parameters and introducing notions of uniform and weakly uniform M‐matrices, estimates for three kinds of IS‐e are derived for time‐varying DDSs. Exponential ISS conditions, parts of which are suitable for infinite delays, are thus established, and the difference from the time‐invariant case is shown: the exponential stability of a time‐varying DDS with zero external input cannot guarantee its ISS. Moreover, based on the IS‐e estimates for DDSs, exponential ISS criteria for DDSs with impulsive effects are obtained. The results are then applied to an example testing synchronization in the sense of ISS for a delayed discrete‐time network, where impulsive control is designed to stabilize the asynchronous network to synchronization.
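The abstract does not reproduce the paper's definitions; as a point of reference, a conventional exponential ISS estimate for a delayed discrete‐time system $x(k+1) = f(k, x_k, u(k))$ takes the following form (symbols here are the standard ones, not necessarily the paper's):

```latex
% Conventional exponential ISS estimate (a sketch, not the paper's exact form):
% there exist K \ge 1, a decay rate \lambda \in (0,1), and a class-K function
% \gamma such that, for all k \ge k_0,
\|x(k)\| \;\le\; K\,\lambda^{\,k-k_0}\,\|x_{k_0}\|_{\tau}
  \;+\; \gamma\!\Big(\sup_{k_0 \le s \le k}\|u(s)\|\Big),
% where \|x_{k_0}\|_{\tau} denotes the norm of the initial segment over the
% delay interval. Setting u \equiv 0 recovers plain exponential stability,
% which, as the abstract notes, does not by itself imply ISS here.
```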

7.
For music students in the early stages of learning, the music may seem to be hidden behind the scores. To support home practising, Juntunen has created the Playback Orchestra method, with which students can practise with the support of the notation program playback of the full orchestra. The results of testing the method with first‐grade string instrument students showed that the group who used the playback method learned faster than the group who did not. The clear and expressive body movements that are developed effectively by the playback method also provide support for leading a group by playing. The aim of this recent pilot study was to discover whether improvisation benefits from an audio learning component. The research is a qualitative case study combined with quasi‐experimental tests and quantitative analyses. The improvisation task was to describe a storm in a long musical tale, Mickey Mouse in a Storm, which had several episodes in different atmospheres. The results showed that the playback group was clearly better in terms of ‘joy of playing’, ‘concentrating’, ‘finding one's own improvising ideas’ and ‘understanding the overall picture’. The most crucial finding was that ‘intensive continuity’ improved faster in the playback group.

8.
Java just‐in‐time compilers often compile only hot methods because the compilation overhead is a part of the running time. This requires precise and efficient hot spot detection, which includes distinguishing hot methods from cold ones, detecting them as early as possible, and paying a small detection overhead. Hot spot detection is especially important in embedded applications because they show more of the start‐up phase behavior of a regular application, where methods are not executed heavily and the hot methods are therefore not yet evident. Because a long‐running method is likely to be a hot method, we can detect a hot method by measuring its running time during interpretation. However, precise measurement of the running time during execution is too expensive, especially in embedded systems, so many counter‐based heuristics have been proposed to estimate it, such as Oracle's HotSpot heuristic. One problem is that although the overhead of these heuristics is low, they do not estimate the running time precisely, which may lead to imprecise hot spot detection. This paper proposes a new hot spot detection heuristic called flow‐sensitive runtime estimation, which can estimate the running time more precisely than others with a relatively low overhead. It only counts important bytecode instructions dynamically, but it can obtain the precise count of all interpreted bytecode instructions with a simple arithmetic calculation. We also propose a static analysis technique to predict hot methods that consume a large amount of execution time once invoked, so as to compile them at their first invocation. Our experimental results show that these techniques can improve performance by an average of 7.4% compared with the HotSpot heuristic when the benchmarks run once, which is often regarded as showing the start‐up phase behavior. Even for real embedded Java applications, such as digital TV Java Xlet applications, our techniques can improve the user response time by an average of 7.1%. Copyright © 2015 John Wiley & Sons, Ltd.
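The counter‐based idea behind such heuristics can be made concrete with a minimal sketch (method names and weights are hypothetical; this is the generic invocation/back‐edge counting scheme, not the paper's flow‐sensitive estimation, which additionally recovers a precise bytecode count arithmetically):

```python
# Minimal counter-based hot-spot detection sketch. The weights and the
# threshold are illustrative; real JITs such as HotSpot combine invocation
# counters and loop back-edge counters in a broadly similar way.
INVOKE_WEIGHT = 1      # work charged per method invocation
BACKEDGE_WEIGHT = 10   # loop iterations dominate interpreted running time
HOT_THRESHOLD = 1000   # compile once the estimated work exceeds this

class HotSpotDetector:
    def __init__(self):
        self.score = {}    # method name -> estimated interpreted work
        self.hot = set()   # methods selected for JIT compilation

    def _bump(self, method, weight):
        self.score[method] = self.score.get(method, 0) + weight
        if self.score[method] >= HOT_THRESHOLD:
            self.hot.add(method)

    def on_invoke(self, method):
        self._bump(method, INVOKE_WEIGHT)

    def on_backedge(self, method):
        self._bump(method, BACKEDGE_WEIGHT)

detector = HotSpotDetector()
for _ in range(50):
    detector.on_invoke("parse_header")    # called often, but no loops
for _ in range(120):
    detector.on_backedge("decode_frame")  # one hot loop
print(detector.hot)  # only decode_frame crosses the threshold
```

The imprecision the abstract criticizes is visible here: the counters only approximate running time, so the weights must be tuned per platform.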

9.
Abstract

It is clear that the role of the information resource is changing. Major publishers have been slow to adapt to the emergence of a global digital medium, but there are now signs that a great deal of information will be delivered on-line (although at present only about 25 databases account for 80% of usage in the UK, and optical publishing is still in its early stages). However, digital publishing on the Internet — with services for libraries such as just-in-time purchasing and delivery, for example — will be a driving force in creating the ‘global digital medium’. One issue that will become increasingly relevant is how the individual user accesses rich multimedia data in the most appropriate way. The ‘digital university campus’ and the ‘digital library’ are coming to be important concepts, with the aim that users of information services will receive information on-line supported by a ‘ubiquistructure’ of information technology. For the ‘digital campus’ this means that not only scholarly but also teaching activities are based on interactive access to information, and that not only the digital library but also the digital bookshop and the digital classroom are becoming possible with the development of 140 Mb/s SuperJANET links. However, it is recognised that libraries will not be truly digital for the foreseeable future, and that libraries will maintain traditional and digital media side by side. In this paper, reporting on work at the University of Bristol's Educational Technology Service multimedia resources unit MRU, and the University of the West of England's Centre for Personal Information Management (in collaboration with Hewlett-Packard Research Laboratories and the University of Bristol's Centre for Communications Research), we look at the ‘digital library’ and ‘digital campus’ from the perspective of the individual user and her information needs. We are particularly interested in the use of small, mobile computers as access points to the global digital medium. We suggest that, in an environment of change — where the traditional campus and the traditional library exist alongside the digital campus and digital library — the most appropriate form of access technology is based on ‘personal technology’ which allows a linking between digital information and traditional paper-based information.

10.
Abstract. DeLone & McLean (2003) propose an updated information systems (IS) success model and suggest that it can be extended to investigating e‐commerce systems success. However, the updated IS success model has not been empirically validated in the context of e‐commerce. Further, the existing IS/e‐commerce success models have been subject to considerable debate on the ‘IS Use’ and ‘Perceived Usefulness’ constructs, and the nomological structure of the updated DeLone and McLean model is somewhat inconsistent with the IS acceptance and marketing literature. Based on the IS and marketing literature, this paper respecifies and validates a multidimensional model for assessing e‐commerce systems success. The validated model consists of six dimensions: Information Quality, System Quality, Service Quality, Perceived Value, User Satisfaction and Intention to Reuse. Structural equation modelling techniques were applied to data collected by questionnaire from 240 users of e‐commerce systems in Taiwan. The empirical evidence suggests that Intention to Reuse is affected by Perceived Value and User Satisfaction, which, in turn, are influenced by Information Quality, System Quality and Service Quality. The nomological structure of the respecified e‐commerce systems success model concurs with that of the technology acceptance model (TAM) in the IS field and the consumer behaviour models in the traditional business‐to‐business and retail contexts. The findings of this study provide several important implications for research and practice. This paper concludes by discussing the contributions of this study and the limitations that could be addressed in future studies.

11.
One of the general location‐based services (LBSs) is the monitoring of real‐time locations of moving objects. When the number of moving objects is large and the task of monitoring is carried out on mobile devices, the monitoring service suffers from constraints of screen size, computing speed, and network bandwidth. In the present paper, a two‐phase scale‐based reduction method (SRM), consisting of a zoom phase and a mosaic phase, is proposed to overcome these constraints. The zoom phase reduces the original monitoring area which, in turn, undergoes further reduction in the mosaic phase. The performance was measured with the use of two ratios: the reduction ratio (RRatio) and the transmission ratio (TRatio). From the experimental results, the lowest RRatio was 52%, i.e. almost half of the original data size was reduced. The lowest average TRatio was also 52% for the worst case, i.e. when the entire original monitoring area was displayed on the mobile device. Moreover, the display time was shortened from 14.3 to 0.7 s. These results show that the use of the two‐phase SRM is practical and efficient when applied to the monitoring service on mobile devices. Copyright © 2006 John Wiley & Sons, Ltd.
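The two phases can be illustrated with a toy sketch (a hypothetical implementation, not the paper's algorithm): the zoom phase clips the monitored area to the bounding box of the objects, and the mosaic phase keeps only the occupied grid cells, so only those tiles need to be rendered and transmitted.

```python
# Sketch of a two-phase reduction in the spirit of the SRM (hypothetical).

def zoom(objects):
    """Zoom phase: bounding box (xmin, ymin, xmax, ymax) of the positions."""
    xs = [x for x, _ in objects]
    ys = [y for _, y in objects]
    return min(xs), min(ys), max(xs), max(ys)

def mosaic(objects, cell=10):
    """Mosaic phase: set of occupied grid cells within the zoomed area.
    Only these tiles must be sent to the mobile device."""
    return {(x // cell, y // cell) for x, y in objects}

objects = [(3, 4), (12, 18), (14, 19), (95, 40)]
print(zoom(objects))         # (3, 4, 95, 40)
print(len(mosaic(objects)))  # (12,18) and (14,19) share a cell -> 3 tiles
```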

12.
With their ubiquity, mobile information systems (IS) may be used in ways that challenge the dynamics of organisational control, forcing IS scholars to revisit the panopticon metaphor and possibly offer new conceptual tools for theorising about information technology (IT)-based organisational control. Yet little IS research has offered critical reflections on the use of the panopticon to represent the control potential of mobile IS. This study investigates whether the way mobile IS are engaged in the workplace reinforce panoptic control systems or generate other types of control logics, requiring another conceptual lens. A qualitative exploratory case study investigated a consulting company whose professionals equipped themselves with mobile IS. The study reveals the emergence of a subtle, invisible form of ‘free control’ through mobile IS. Although consultants are mobile, flexible, and autonomous, a powerful communication and information network keeps them in a position of ‘allowed subjection’. Free control is characterised by a shift in the location of authority, a time-related discipline, a deep sense of trust, and adherence to organisational norms that the professionals themselves co-construct. These characteristics, which render such control even more pernicious than panoptic arrangements, deserve more attention in further IS research.

13.
Most discussions on the digital divide have predominantly focused on social disparities in the physical accessibility of information and communication technologies (ICT), and the proposed solutions are related to providing low‐cost access to the underprivileged. The mobile phone has been considered a good solution due to its relatively low cost. This paper, based on an empirical study in Sri Lanka, demonstrates that even though the underprivileged population has adopted the mobile phone, most of the computer‐based communication facilities available in the phones are ‘inaccessible’ to such users due to the objectification of broader social inequalities in the design of phones. In other words, the digital divide is objectified in the design.

14.
This survey gives an overview of the current state of the art in GPU techniques for interactive large‐scale volume visualization. Modern techniques in this field have brought about a sea change in how interactive visualization and analysis of giga‐, tera‐ and petabytes of volume data can be enabled on GPUs. In addition to combining the parallel processing power of GPUs with out‐of‐core methods and data streaming, a major enabler for interactivity is making both the computational and the visualization effort proportional to the amount and resolution of data that is actually visible on screen, i.e. ‘output‐sensitive’ algorithms and system designs. This leads to recent output‐sensitive approaches that are ‘ray‐guided’, ‘visualization‐driven’ or ‘display‐aware’. In this survey, we focus on these characteristics and propose a new categorization of GPU‐based large‐scale volume visualization techniques based on the notions of actual output‐resolution visibility and the current working set of volume bricks—the current subset of data that is minimally required to produce an output image of the desired display resolution. Furthermore, we discuss the differences and similarities of different rendering and data traversal strategies in volume rendering by putting them into a common context—the notion of address translation. For our purposes here, we view parallel (distributed) visualization using clusters as an orthogonal set of techniques that we do not discuss in detail but that can be used in conjunction with what we present in this survey.
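The notion of address translation that the survey uses as its common context can be sketched minimally (names and structure hypothetical; real systems implement this as a multi-level page table in GPU memory): a table maps a virtual brick index to a slot in a brick cache, or reports a miss so the brick can be streamed in, and the resident bricks are exactly the working set.

```python
# Sketch of brick-level address translation for out-of-core volume
# rendering (hypothetical structure, CPU-side for clarity).

class BrickPageTable:
    def __init__(self, num_slots):
        self.free = list(range(num_slots))  # unused cache slots
        self.table = {}                     # virtual brick index -> slot

    def translate(self, brick):
        """Return the cache slot holding this brick, or None on a miss."""
        return self.table.get(brick)

    def page_in(self, brick):
        """Stream a missing brick into a free cache slot."""
        slot = self.free.pop()              # assumes a slot is available
        self.table[brick] = slot
        return slot

    def working_set(self):
        return set(self.table)              # the resident bricks

pt = BrickPageTable(num_slots=4)
assert pt.translate((0, 0, 0)) is None      # miss: brick not resident
slot = pt.page_in((0, 0, 0))
assert pt.translate((0, 0, 0)) == slot      # hit after streaming it in
```

A ray-guided renderer would record such misses during traversal and only page in bricks that rays actually touch, which is what makes the effort output-sensitive.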

15.
KDDML‐G is a middleware language and system for knowledge discovery on the grid. The challenge that motivated the development of a grid‐enabled version of the ‘standalone’ KDDML (Knowledge Discovery in Databases Markup Language) environment was, on the one hand, to exploit the parallelism offered by the grid environment and, on the other, to overcome the problem of data immovability, a quite frequent restriction on real‐world data collections that principally serves a privacy‐preserving purpose. The latter issue is addressed by moving the code and ‘mining’ the data ‘in place’, that is, by adapting the computation to the availability and localization of the data. Copyright © 2007 John Wiley & Sons, Ltd.

16.
The scatterplot matrix (SPLOM) is a well‐established technique to visually explore high‐dimensional data sets. It is characterized by the number of scatterplots (plots) of which it consists. Unfortunately, this number grows quadratically with the number of the data set's dimensions, so an SPLOM scales very poorly and its usefulness is restricted to a small number of dimensions. Several approaches already exist to explore such ‘small’ SPLOMs, but they address the scalability problem only indirectly, without solving it. Therefore, we introduce a new greedy approach to manage ‘large’ SPLOMs with more than 100 dimensions. We establish a combined visualization and interaction scheme that produces intuitively interpretable SPLOMs by combining known quality measures, a pre‐process reordering and a perception‐based abstraction. With this scheme, the user can interactively find large amounts of relevant plots in large SPLOMs.
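The quadratic growth is easy to make concrete: a SPLOM over d dimensions contains d(d−1)/2 distinct pairwise plots (ignoring the diagonal and mirrored duplicates), so doubling the dimensionality roughly quadruples the number of plots.

```python
def splom_plot_count(d):
    """Number of distinct off-diagonal scatterplots in a d-dimensional SPLOM."""
    return d * (d - 1) // 2

for d in (5, 10, 100):
    print(d, splom_plot_count(d))
# 5 -> 10, 10 -> 45, 100 -> 4950: at the paper's scale of 100+ dimensions,
# thousands of plots must be ranked rather than inspected individually.
```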

17.
Carl Staelin, Software, 2005, 35(11): 1079–1105
lmbench is a powerful and extensible suite of micro‐benchmarks that measures a variety of important aspects of system performance. It has a powerful timing harness that manages most of the ‘housekeeping’ chores associated with benchmarking, making it easy to create new benchmarks that analyze systems or components of specific interest to the user. In many ways lmbench is a Swiss army knife for performance analysis. It includes an extensive suite of micro‐benchmarks that give powerful insights into system performance. For those aspects of system or application performance not covered by the suite, it is generally a simple task to create new benchmarks using the timing harness. lmbench is written in ANSI‐C and uses POSIX interfaces, so it is portable across a wide variety of systems and architectures. It also includes powerful new tools that measure performance under scalable loads to analyze SMP and clustered system performance. Copyright © 2005 John Wiley & Sons, Ltd.
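lmbench's harness is written in C; the core idea of such a timing harness, though, is language-independent and can be sketched briefly (a hypothetical minimal harness, not lmbench's actual API): repeat the operation until enough wall time has elapsed that timer resolution and call overhead are amortized, then report the mean per-operation latency.

```python
import time

def measure(op, min_duration=0.05):
    """Run op repeatedly until at least min_duration seconds elapse;
    return the mean per-call latency in seconds. Amortizing over many
    iterations is the 'housekeeping' a timing harness does for you."""
    iterations = 0
    start = time.perf_counter()
    while True:
        op()
        iterations += 1
        elapsed = time.perf_counter() - start
        if elapsed >= min_duration:
            return elapsed / iterations

latency = measure(lambda: sum(range(100)))
print(f"~{latency * 1e9:.0f} ns per call")
```

A real harness additionally subtracts loop overhead and repeats the whole measurement to report a stable minimum, which this sketch omits.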

18.
Depth‐of‐field is one of the most crucial rendering effects for synthesizing photorealistic images. Unfortunately, this effect is also extremely costly. It can take hundreds to thousands of samples to achieve noise‐free results using Monte Carlo integration. This paper introduces an efficient adaptive depth‐of‐field rendering algorithm that achieves noise‐free results using significantly fewer samples. Our algorithm consists of two main phases: adaptive sampling and image reconstruction. In the adaptive sampling phase, the adaptive sample density is determined by a ‘blur‐size’ map and ‘pixel‐variance’ map computed in the initialization. In the image reconstruction phase, based on the blur‐size map, we use a novel multiscale reconstruction filter to dramatically reduce the noise in the defocused areas where the sampled radiance has high variance. Because of the efficiency of this new filter, only a few samples are required. With the combination of the adaptive sampler and the multiscale filter, our algorithm renders near‐reference quality depth‐of‐field images with significantly fewer samples than previous techniques.
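The abstract does not give the exact sample-allocation rule, but the interplay of the two maps can be sketched plausibly (function and constants are hypothetical): high variance in a focused region demands more samples, while a large blur radius lets the wide reconstruction filter share samples across pixels, reducing the per-pixel budget.

```python
# Hypothetical sketch of blur-aware adaptive sample allocation; the
# constants are illustrative, not from the paper.

def samples_per_pixel(variance, blur_size, base=4, budget=64):
    """Sample count for one pixel, driven by the 'pixel-variance' and
    'blur-size' maps: variance raises the need, blur discounts it
    (the multiscale filter denoises large-blur areas cheaply)."""
    need = base + int(variance * 256) // max(1, blur_size)
    return min(budget, max(base, need))

print(samples_per_pixel(variance=0.0, blur_size=1))   # flat region: 4
print(samples_per_pixel(variance=0.5, blur_size=1))   # focused edge: 64
print(samples_per_pixel(variance=0.5, blur_size=16))  # defocused: 12
```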

19.
Mining association rules in relational databases is a significant computational task with many applications. A fundamental ingredient of this task is the discovery of sets of attributes (itemsets) whose frequency in the data exceeds some threshold value. In this paper we describe two algorithms for completing the calculation of frequent sets using a tree structure for storing partial supports, called the interim‐support (IS) tree. The first of our algorithms (T‐Tree‐First (TTF)) uses a novel tree pruning technique, based on the notion of (fixed‐prefix) potential inclusion, which is specially designed for trees that are implemented using only two pointers per node. This allows the IS tree to be implemented in a space‐efficient manner. The second algorithm (P‐Tree‐First (PTF)) explores the idea of storing the frequent itemsets in a second tree structure, called the total support tree (T‐tree); the main innovation lies in the use of multiple pointers per node, which provides rapid access to the nodes of the T‐tree and makes it possible to design a new, usually faster, method for updating them. Experimental comparison shows that these techniques result in considerable speedup for both algorithms compared with earlier approaches that also use IS trees (Principles of Data Mining and Knowledge Discovery, Proceedings of the 5th European Conference, PKDD, 2001, Freiburg, September 2001 (Lecture Notes in Artificial Intelligence, vol. 2168). Springer: Berlin, Heidelberg, 54–66; Journal of Knowledge‐Based Syst. 2000; 13:141–149). Further comparison between the two new algorithms shows that PTF is generally faster on instances with a large number of frequent itemsets, provided that they are relatively short, whereas TTF is more appropriate whenever there exist few or quite long frequent itemsets; in addition, TTF behaves well on instances in which the densities of the items of the database have a high variance. Copyright © 2008 John Wiley & Sons, Ltd.
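The frequent-set discovery step that both algorithms accelerate can be stated as a naive baseline (this brute-force pass is for illustration only; the whole point of the IS and T trees is to avoid enumerating and counting like this):

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support, max_size=2):
    """Naive support counting: an itemset is frequent if it occurs in at
    least min_support transactions. Tree-based structures store these
    partial supports compactly instead of in a flat dictionary."""
    counts = {}
    for t in transactions:
        for k in range(1, max_size + 1):
            for itemset in combinations(sorted(t), k):
                counts[itemset] = counts.get(itemset, 0) + 1
    return {s for s, c in counts.items() if c >= min_support}

transactions = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b"}]
print(frequent_itemsets(transactions, min_support=2))
# {('a',), ('b',), ('c',), ('a','b'), ('a','c')}; ('b','c') occurs once
```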

20.
We present an analysis of a longitudinal case study whose aim was to understand the processes of integration of a face‐to‐face and networked collaborative learning technology and pedagogy into a secondary school history‐geography classroom. Students carried out a sequence of argumentative tasks relating to sustainable development, including argument generation, sharing and elaboration, debate using computer‐mediated communication, and organization of arguments in a shared diagram. Students' interactions and diagrams were analysed in terms of degree and quality of argumentativity, as well as catachresis (‘getting round’ the software to perform a non‐prescribed task). Results run counter to positive systems of ideas and values concerning collaborative learning and its technological mediation, in that the scenario did not meet its pedagogical aims and had to be abandoned before its planned end. We discuss possible explanations for this ‘failure story’ in terms of the articulation between everyday, technology‐related and educational discourse genres, with their associated social milieux, as well as the social structure of the classroom. The relevance of these aspects for future attempts to integrate such technologies is discussed. In conclusion, we discuss a vision of learning that takes into account students who do not agree to play the educational game.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号