Similar Documents

20 similar documents found.
1.
ABSTRACT

The epidemic prevention cloud allows infection control professionals to streamline many of their reporting procedures, thereby improving patient safety in a cost-effective manner. Based on task-technology fit and status quo bias perspectives, this study develops an integrated model to explain individuals' health information technology usage behaviour. We conducted a field survey in 30 Taiwan hospitals to collect data from infection control professionals with experience using the epidemic prevention cloud. A total of 167 questionnaires were sent out, and 116 were returned from 18 hospitals. To test the proposed research hypotheses, we employed a structural equation model estimated by the partial least squares method. The results show that both task-related (p < .01) and technology-related (p < .001) characteristics influence task-technology fit. Task-technology fit has a positive effect on both utilisation (p < .001) and performance (p < .001), while it appears to have a negative effect on resistance to use (p < .001). Our results showed that resistance to use was driven by uncertainty costs (p < .01) and perceived value (p < .01). The results also indicate a significant effect of utilisation on performance (p < .01) and a significant negative effect of resistance to use on utilisation (p < .05). This study illustrates the importance of incorporating post-adoption resistance in technology adoption studies.

2.
Computer Networks, 2008, 52(7): 1433–1450
Pricing in 3G and other communication networks may control and manage the utilisation of network resources. The available network resources become strained at high usage levels, which results in poor service to users. Most users prefer receiving high-quality services at affordable costs, which requires the provision of QoS guarantees for network services at a low cost. In a real business scenario, this relationship is hard to achieve; moreover, revenue sources for network operators have been shifting from the provision of network access to the provisioning of rich services, e.g. multimedia services. To attain a functional compromise, we propose a pricing scheme that relies on service profiles to manage resource utilisation in a DiffServ-enabled 3G network. The service profiles define the QoS achieved for accessing services through a common resource pool, in which resource sharing is used to maximise network resource utilisation, user satisfaction and profits for the network operators. In an NGN scenario, users would select pricing profiles according to their budgets, and the network would map these profiles to a set of QoS options that may translate into the choice of an access network for service access. In this paper, we present the mathematical model of the proposed pricing scheme, the proposed design of an evaluation framework, QoS performance results, and a service provisioning scenario illustrating the applicability of the proposed pricing scheme.
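The abstract does not reproduce the pricing model itself; as a rough Python illustration of the profile-based idea, the sketch below maps a user-selected pricing profile to a QoS class and a usage-based charge. The profile names, rates, resource shares, and the linear price formula are all assumptions for illustration, not the authors' model.

    # Illustrative sketch only: profile-to-QoS mapping with a usage-based price.
    # Profile names, rates, shares, and the linear formula are assumptions,
    # not the pricing model proposed in the paper.
    PROFILES = {
        # profile: (QoS class, price per MB, guaranteed share of resource pool)
        "budget":   ("best-effort", 0.01, 0.05),
        "standard": ("assured",     0.03, 0.15),
        "premium":  ("expedited",   0.08, 0.40),
    }

    def charge(profile: str, megabytes: float) -> tuple[str, float]:
        """Return the QoS class and the price charged for a given usage volume."""
        qos_class, rate, _share = PROFILES[profile]
        return qos_class, rate * megabytes

    qos, price = charge("standard", 500.0)
    print(f"QoS class: {qos}, price: {price:.2f}")  # QoS class: assured, price: 15.00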

3.
The short lifecycles of new and emerging products, rising labour costs in developed countries, and users' expectations and behaviours, including frequent upgrades to items with the latest features, all drive the growing rate of product disposal to landfill. To reduce the negative impact on the environment, global manufacturers need to take responsibility for designing sustainable products and implementing cleaner production systems for 3R operations (3R: Reuse/Remanufacture/Recycle). Nevertheless, there is still a lack of comprehensive measures for assessing product returns under recovery settings. In this paper, a framework for performance evaluation using the design for six sigma methodology is developed to estimate the utilisation value of a manufactured product under recovery settings, accounting for total recovery cost, manufacturing lead time, landfill waste minimisation and quality characteristics. Finally, a numerical example based on these performance attributes to assess product utilisation value is presented.
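The abstract names four performance attributes but not how they are aggregated; the sketch below shows one way such a utilisation value could be scored. The normalisation scheme, bounds, and weights are assumptions for illustration, not the paper's design-for-six-sigma framework.

    # Illustrative sketch: a normalised, weighted utilisation-value score over
    # the four attributes named in the abstract. Normalisation and weights are
    # assumptions, not the paper's DFSS framework.
    def utilisation_value(recovery_cost, lead_time, landfill_waste, quality,
                          bounds, weights=(0.3, 0.2, 0.2, 0.3)):
        """Cost-like attributes score higher when lower; quality when higher."""
        def lower_better(x, lo, hi):
            return (hi - x) / (hi - lo)

        def higher_better(x, lo, hi):
            return (x - lo) / (hi - lo)

        scores = (
            lower_better(recovery_cost, *bounds["cost"]),
            lower_better(lead_time, *bounds["time"]),
            lower_better(landfill_waste, *bounds["waste"]),
            higher_better(quality, *bounds["quality"]),
        )
        return sum(w * s for w, s in zip(weights, scores))

    bounds = {"cost": (0, 100), "time": (0, 30), "waste": (0, 50), "quality": (0, 1)}
    print(round(utilisation_value(40, 12, 10, 0.9, bounds), 3))  # 0.73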

4.
It is proposed that an optimal strategy for executing a join query in a distributed database system may be computed in a time which is bounded by a polynomial function of the number of relations and the size parameters of the network. The solution so unveiled considers both the transmission costs and the processing costs incurred in delivering the required result to the user that issued the query. The query specifies that several relational tables are to be coalesced and presented to the appropriate user. Undertaking this task demands the utilisation of limited system resources, so a strategy for fulfilling the request that imposes minimal cost on the system should be devised. Both the processor sites and the communications links that interconnect them are utilised; an optimal strategy is one that minimises a weighted sum of processing and data transmission costs. An integer linear programming model of this problem was originally proposed in [1]; however, no suggestion was given as to how this model might be efficiently solved. By extending the earlier analysis, the recursive nature of the join computation is revealed. Further investigations then produce a modified relationship amenable to algorithmic solution; the resultant procedure has polynomial time and space requirements.
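The paper's integer linear programming model and its recursive refinement are not given in the abstract; the sketch below illustrates the general idea of a dynamic program over subsets of relations that minimises a weighted sum of processing and transmission costs. The cost model, the fixed join selectivity, and the weights are assumptions for illustration.

    # Illustrative sketch: dynamic program over subsets of relations minimising
    # a weighted sum of processing and transmission costs. Sizes, selectivity,
    # and weights are assumptions, not the paper's ILP model.
    from itertools import combinations

    def best_join_plan(sizes, w_proc=1.0, w_net=0.5):
        """sizes: relation name -> estimated tuple count."""
        rels = sorted(sizes)
        # best[S] = (cost, estimated result size, plan) for joining the set S
        best = {frozenset([r]): (0.0, float(sizes[r]), r) for r in rels}
        for k in range(2, len(rels) + 1):
            for combo in combinations(rels, k):
                s = frozenset(combo)
                candidates = []
                for r in range(1, k):
                    for left in map(frozenset, combinations(combo, r)):
                        right = s - left
                        lc, lsz, lplan = best[left]
                        rc, rsz, rplan = best[right]
                        out = lsz * rsz * 0.01            # assumed selectivity
                        cost = (lc + rc
                                + w_proc * (lsz + rsz)    # processing at join site
                                + w_net * min(lsz, rsz))  # ship smaller operand
                        candidates.append((cost, out, (lplan, rplan)))
                best[s] = min(candidates, key=lambda c: c[0])
        return best[frozenset(rels)]

    cost, size, plan = best_join_plan({"R": 1000, "S": 200, "T": 50})
    print(cost, plan)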

5.
6.
We consider the problem of locating replicas in a network to minimize communications costs. Under the assumption that the read-one-write-all policy is used to ensure data consistency, an optimization problem is formulated in which the cost function estimates the total communications costs. The paper concentrates on the study of the optimal communications cost as a function of the ratio between the frequencies of the read and write operations. The problem is reformulated as a zero-one linear programming problem, and its connection to the p-median problem is explained. The general problem is proved to be NP-complete. For path graphs, a dynamic programming algorithm is presented. Received: May 1993 / Accepted: June 2001
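For concreteness, a brute-force version of the optimisation is easy to state. The sketch below finds the replica set minimising total communications cost under read-one-write-all on a path graph; raising the read/write ratio favours more replicas, which is the dependence the paper studies. The frequencies, unit distances, and the exhaustive search are assumptions for illustration, not the paper's dynamic programming algorithm.

    # Illustrative sketch: exhaustive search for the replica set minimising
    # total cost under read-one-write-all on a path graph, where the distance
    # between nodes i and j is |i - j|. Frequencies and the brute force are
    # assumptions; the paper gives a DP for paths and proves the general
    # problem NP-complete.
    from itertools import combinations

    def total_cost(replicas, reads, writes):
        cost = 0.0
        for node, r in enumerate(reads):      # read one: nearest replica
            cost += r * min(abs(node - p) for p in replicas)
        for node, w in enumerate(writes):     # write all: every replica
            cost += w * sum(abs(node - p) for p in replicas)
        return cost

    def best_placement(reads, writes):
        n = len(reads)
        return min(
            (total_cost(c, reads, writes), c)
            for k in range(1, n + 1)
            for c in combinations(range(n), k)
        )

    reads = [5, 1, 4, 2, 8]   # read frequency per node (assumed)
    writes = [1, 0, 2, 0, 1]  # write frequency per node (assumed)
    print(best_placement(reads, writes))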

7.
Rescaling is possibly the most popular approach to cost-sensitive learning. This approach works by rebalancing the classes according to their costs, and it can be realized in different ways, for example, re-weighting or resampling the training examples in proportion to their costs, or moving the decision boundaries of classifiers away from high-cost classes in proportion to costs. This approach is very effective in dealing with two-class problems, yet some studies have shown that it is often not so helpful on multi-class problems. In this article, we explore why the rescaling approach is often ineffective on multi-class problems. Our analysis discloses that the rescaling approach works well when the costs are consistent, while directly applying it to multi-class problems with inconsistent costs may not be a good choice. Based on this recognition, we advocate that before applying the rescaling approach, the consistency of the costs must be examined first. If the costs are consistent, the rescaling approach can be conducted directly; otherwise it is better to apply rescaling after decomposing the multi-class problem into a series of two-class problems. An empirical study involving 20 multi-class data sets and seven types of cost-sensitive learners validates our proposal. Moreover, we show that the proposal is also helpful for class-imbalance learning.
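As a concrete instance of the re-weighting realisation mentioned above, the sketch below weights each training example in proportion to the cost of misclassifying its class. The synthetic data, the cost vector, and the choice of classifier are assumptions for illustration, not the article's experimental setup.

    # Illustrative sketch: rescaling by re-weighting training examples in
    # proportion to their misclassification costs. Data, cost vector, and
    # classifier choice are assumptions for illustration.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=600, n_features=20, n_informative=6,
                               n_classes=3, random_state=0)
    cost = np.array([1.0, 5.0, 2.0])   # misclassification cost per class (assumed)

    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, y, sample_weight=cost[y])   # each example weighted by its class cost
    print(clf.score(X, y))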

8.
Unger, O., Cidon, I. World Wide Web, 2004, 7(3): 315–336
The architecture of overlay networks should support high performance and high scalability at low cost. This becomes more crucial as communication and storage costs, as well as service latencies, grow with the exploding amounts of data exchanged and with the size and span of the overlay network. To that end, multicast methodologies can be used to deliver content from regional servers to end users, as well as for the timely and economical synchronization of content among the distributed servers. Another important architectural problem is the efficient allocation of objects to servers to minimize storage, delivery and update costs. In this work, we suggest a multicast-based architecture and address the optimal allocation and replication of dynamic objects that are both consumed and updated. Our model network includes consumers, which are served using multicast or unicast transmissions, and media sources (which may also be consumers), which update the objects using multicast communication. General costs are associated with distribution (download) and update traffic as well as with the storage of objects in the servers. Optimal object allocation algorithms for tree networks are presented, with complexities of O(N) and O(N²) in the case of multicast and unicast distribution, respectively. To our knowledge, the model of multicast distribution combined with multicast updates has not been analytically treated before, despite its popularity in the industry.

9.
Algol 68 enables facilities for such things as arbitrary precision arithmetic to be provided in a particularly elegant and convenient way. The library segment mlaritha, which provides such facilities, is described. This segment enables numerical quantities to be stored and manipulated with almost the same degree of ease, or difficulty, as REAL quantities but with arbitrary and dynamically variable precision. The method of ‘NUMBER’ storage used in mlaritha is discussed in detail and the fundamental algorithms used for the arithmetic operations of addition, multiplication and division, etc., are described. Special attention is given to the ‘costs’ inherent in the use of the system, particularly the time ‘costs’ of each of the operations and their dependence on precision.
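As a rough modern analogue of such a ‘NUMBER’ representation, the sketch below shows addition over digit arrays, whose running time grows linearly with the precision, echoing the abstract's point about time ‘costs’. The base-10 little-endian list layout is an assumption for illustration, not mlaritha's actual Algol 68 representation.

    # Illustrative sketch: arbitrary-precision addition over base-10,
    # little-endian digit lists. The layout is an assumption, not mlaritha's
    # representation; note the cost is linear in the number of digits.
    def big_add(a, b):
        """a, b: little-endian lists of decimal digits; returns their sum."""
        out, carry = [], 0
        for i in range(max(len(a), len(b))):
            s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
            out.append(s % 10)
            carry = s // 10
        if carry:
            out.append(carry)
        return out

    # 999 + 27 = 1026, as little-endian digit lists:
    print(big_add([9, 9, 9], [7, 2]))  # [6, 2, 0, 1]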

10.
Recent developments in hardware technologies, such as the advance of the communication medium from wired to wireless, have led to the emergence of mobile information systems. A major problem in such systems is how to locate mobile clients; this is known as the location management problem. Two major costs are involved in managing a mobile client's location: the movement cost and the locating cost. Past methods minimize only one of the two costs, not both. The major contribution of this paper is to present methods that minimize both costs simultaneously. Our performance analysis proves that the proposed methods are superior to past ones.
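The trade-off between the two costs can be seen from the two extreme strategies, sketched below. All unit costs and the two strategies are assumptions for illustration; the paper's methods, which minimize both costs simultaneously, are not reproduced here.

    # Illustrative sketch: movement (update) cost versus locating (search)
    # cost for a mobile client, under two extreme strategies. All unit costs
    # are assumed.
    def always_update_cost(moves, calls, c_update=5.0, c_page=1.0):
        # Update the location database on every move; callers page one cell.
        return moves * c_update + calls * c_page

    def never_update_cost(moves, calls, cells=16, c_page=1.0):
        # Never update; callers must page every cell in the region.
        return calls * cells * c_page

    for moves, calls in [(100, 10), (10, 100)]:
        print(moves, calls,
              always_update_cost(moves, calls),
              never_update_cost(moves, calls))
    # A client that moves often but is rarely called favours lazy updates;
    # one that is called often favours eager updates.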

11.
In this paper we consider the capacitated lot-sizing problem (CLSP) with linear costs. It is known that this problem is NP-hard, but there exist special cases that can be solved in polynomial time. We derive a new O(T²) algorithm for the CLSP with non-increasing setup costs, general holding costs, non-increasing production costs and non-decreasing capacities over time, where T is the length of the model horizon. We show that in every iteration we do not consider more candidate solutions than the O(T²) algorithm proposed by [Chung and Lin, 1988. Management Science 34, 420–6]. We also develop a variant of our algorithm that is more efficient in the case of relatively large capacities. Numerical tests show the superior performance of our algorithms compared to the algorithm of [Chung and Lin, 1988. Management Science 34, 420–6].
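The paper's capacitated algorithm is not given in the abstract; to convey the flavour of O(T²) lot-sizing recursions, the sketch below implements the classic Wagner-Whitin dynamic program for the uncapacitated case. The demands and costs are assumptions, and the capacity constraints handled by the paper's algorithm are deliberately omitted.

    # Illustrative sketch: the classic O(T^2) Wagner-Whitin dynamic program
    # for *uncapacitated* lot-sizing. Demands and costs are assumed; the
    # paper's capacitated algorithm is not reproduced here.
    def wagner_whitin(demand, setup, holding):
        """Minimum total cost; a run in period t covers demand of periods t..j."""
        T = len(demand)
        best = [float("inf")] * (T + 1)   # best[t]: min cost for periods 0..t-1
        best[0] = 0.0
        for t in range(T):                # a production run starts in period t
            cost = setup[t]
            h = 0.0                       # per-unit holding cost accrued from t
            for j in range(t, T):         # ...and covers demand up to period j
                if j > t:
                    h += holding[j - 1]   # carry one unit one more period
                    cost += demand[j] * h
                best[j + 1] = min(best[j + 1], best[t] + cost)
        return best[T]

    print(wagner_whitin(demand=[20, 50, 10, 50],
                        setup=[100, 100, 100, 100],
                        holding=[1, 1, 1, 1]))  # 270.0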

12.
In 1982, Quisquater and Couvreur proposed an RSA variant, called RSA-CRT, based on the Chinese Remainder Theorem to speed up RSA decryption. In 1990, Wiener suggested another RSA variant, called Rebalanced-RSA, which further speeds up RSA decryption by shifting decryption costs to encryption costs. However, this approach essentially maximizes the encryption time, since the public exponent e is then generally of the same order of magnitude as the RSA modulus. In this paper, we introduce two variants of Rebalanced-RSA in which the public exponent e is much smaller than the modulus, thus reducing the encryption costs while still maintaining low decryption costs. For a 1024-bit RSA modulus, our first variant (Scheme A) offers encryption times at least 2.6 times faster than those of the original Rebalanced-RSA, while the second variant (Scheme B) offers encryption times at least 3 times faster. In both variants, the decrease in encryption costs is obtained at the expense of slightly increased decryption costs and increased key generation costs. Thus, the variants proposed here are best suited for applications which require low costs in both encryption and decryption.
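The CRT decryption step that both RSA-CRT and Rebalanced-RSA build on is standard and easy to sketch. The parameters below are textbook toy values; the exponent rebalancing of Schemes A and B is not reproduced here.

    # Illustrative sketch: textbook RSA-CRT decryption (Quisquater-Couvreur)
    # with a toy modulus. The rebalancing of Schemes A and B is not shown.
    def crt_decrypt(c, p, q, dp, dq, q_inv):
        """dp = d mod (p-1), dq = d mod (q-1), q_inv = q^{-1} mod p."""
        mp = pow(c, dp, p)            # half-size exponentiation mod p
        mq = pow(c, dq, q)            # half-size exponentiation mod q
        h = (q_inv * (mp - mq)) % p   # Garner recombination
        return mq + h * q

    # Toy parameters: p = 61, q = 53, n = 3233, e = 17, d = 2753.
    p, q, e, d = 61, 53, 17, 2753
    dp, dq = d % (p - 1), d % (q - 1)
    q_inv = pow(q, -1, p)

    m = 42
    c = pow(m, e, p * q)
    assert crt_decrypt(c, p, q, dp, dq, q_inv) == m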

13.
We prove a general monotonicity result about Nash flows in directed networks, which generalizes earlier results and can be used for the design of truthful mechanisms in the setting where each edge of the network is controlled by a different selfish agent, who incurs costs when her edge is used.

14.
This paper focuses on the co-design of an H∞ filter and an event-triggered scheme for a class of continuous-time non-linear networked control systems (NCSs) with time delays, modelled by T-S fuzzy systems. First, we consider a discrete event-triggered (ET) scheme to make efficient use of bandwidth. Under this ET scheme, the sensor releases data only when the sampled plant output violates a specified event-triggering condition. Second, the T-S fuzzy system is used to model the non-linear NCSs. A further purpose of this paper is to design filters involving delays; such filters have a more general form than the delay-free filters mostly considered in traditional studies. By using the time-delay system approach, the co-design of the event-triggered scheme and the H∞ filter for the delayed NCSs is presented in a unified framework. To select the latest data packet and discard out-of-order packets, a zero-order hold is inserted between the event generator and the fuzzy filter. Then, using a novel fuzzy Lyapunov-Krasovskii functional approach with the free-weighting matrix technique, the H∞ filter design for event-triggered delayed NCSs is derived. Finally, a tunnel-diode example is given to show the effectiveness of our co-design method.
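A common form of event-triggering condition in this literature releases a sample only when the error since the last transmission exceeds a σ-weighted threshold; the sketch below shows such a test. The quadratic form, the weighting matrix Ω, and the value of σ are assumptions for illustration, not necessarily the paper's exact condition.

    # Illustrative sketch: a relative-threshold event-triggering test of the
    # kind used in event-triggered NCS designs. Omega and sigma are assumed.
    import numpy as np

    def should_release(x_now, x_last_sent, sigma=0.1, omega=None):
        """Release when e' Omega e > sigma * x' Omega x, e = x_last_sent - x_now."""
        omega = np.eye(len(x_now)) if omega is None else omega
        e = x_last_sent - x_now
        return e @ omega @ e > sigma * (x_now @ omega @ x_now)

    x_sent = np.array([1.0, 0.0])
    for x in [np.array([0.98, 0.02]), np.array([0.7, 0.3])]:
        print(should_release(x, x_sent))  # False, then True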

15.
On Capital Investment
We deal with the problem of making capital investments in machines for manufacturing a product. Opportunities for investment occur over time, and every such option consists of a capital cost for a new machine and a resulting productivity gain, i.e., a lower production cost for one unit of product. The goal is that of minimizing the total production costs and capital costs when future demand for the product being produced and future investment opportunities are unknown. This can be viewed as a generalization of the ski-rental problem and is related to the mortgage problem [3]. If all possible capital investments obey the rule that lower production costs require higher capital investments, then we present an algorithm with constant competitive ratio. If new opportunities may be strictly superior to previous ones (in terms of both capital cost and production cost), then we give an algorithm which is O(min{1 + log C, 1 + log log P, 1 + log M})-competitive, where C is the ratio between the highest and the lowest capital costs, P is the ratio between the highest and the lowest production costs, and M is the number of investment opportunities. We also present a lower bound on the competitive ratio of any on-line algorithm for this case, which is Ω(min{log C, log log P / log log log P, log M / log log M}). This shows that the competitive ratio of our algorithm is tight (up to constant factors) as a function of C, and not far from the best achievable as a function of P and M. Received February 6, 1997; revised November 17, 1997.
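The ski-rental problem mentioned above admits the classic break-even rule with competitive ratio 2, sketched below. The prices and the day-by-day framing are assumptions for illustration; the paper's algorithms for the general investment setting are more involved.

    # Illustrative sketch: the break-even (2-competitive) rule for ski-rental.
    # Rent until cumulative rent would exceed the purchase price, then buy.
    def ski_rental_online(days_skied, rent=1.0, buy=10.0):
        """Return the online cost of the break-even strategy."""
        cost = 0.0
        for day in range(1, days_skied + 1):
            if cost + rent >= buy:       # further renting would pass the price
                return cost + buy        # buy now
            cost += rent                 # keep renting
        return cost

    for d in (3, 9, 30):
        offline = min(d * 1.0, 10.0)     # optimal offline cost in hindsight
        print(d, ski_rental_online(d), offline)
    # The online cost is never more than twice the offline optimum.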

16.
Ergonomics, 2012, 55(11): 764–787
The study had two primary aims. The first was to combine a human resources costing and accounting (HRCA) approach with a quantitative statistical approach to obtain an integrated model. The second was to apply this integrated model in a quasi-experimental study to investigate whether preventive intervention affected sickness absence costs at the company level. The intervention studied comprised occupational organizational measures, competence development, physical and psychosocial working environment measures, and individual and rehabilitation measures on both an individual and a group basis. The study used a quasi-experimental design with a non-randomized control group. Both groups involved cleaning jobs at predominantly female workplaces. The study plan involved carrying out before-and-after studies on both groups, and included only those who were at the same workplace during the whole study period. In the HRCA model used here, the cost of sickness absence is the net difference between the costs, in the form of the value of lost production plus the administrative cost, and the benefits, in the form of lower labour costs. According to the HRCA model, the intervention counteracted a rise in sickness absence costs at the company level, giving an average net effect of 266.5 Euros per full-time employee over an 8-month period. Using an analogous statistical analysis on the whole of the material, the intervention counteracted a rise in sickness absence costs at the company level, giving an average net effect of 283.2 Euros. Using the statistical method it was possible to study the regression coefficients in sub-groups and calculate p-values for these coefficients; in the younger group the intervention gave a calculated net contribution of 605.6 Euros with a p-value of 0.073, while the intervention's net contribution in the older group had a very high p-value. The statistical model also made it possible to study the contributions of other variables and their interactions. This study established that the HRCA model and the integrated model produced approximately the same monetary outcomes. The integrated model, however, allowed a deeper understanding of the various possible relationships and quantified the results with confidence intervals.
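The HRCA net-cost definition quoted above amounts to simple arithmetic, illustrated below with assumed figures.

    # Illustrative arithmetic for the HRCA net-cost definition:
    # sickness-absence cost = (value of lost production + administrative cost)
    #                         - (saved labour costs). All figures are assumed.
    production_loss = 900.0   # Euros per absence spell (assumed)
    admin_cost      = 150.0
    saved_labour    = 600.0   # wages not paid during the absence (assumed)
    net_cost = production_loss + admin_cost - saved_labour
    print(net_cost)  # 450.0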

17.
Fast Approximate Energy Minimization with Label Costs
The α-expansion algorithm has had a significant impact in computer vision due to its generality, effectiveness, and speed. It is commonly used to minimize energies that involve unary, pairwise, and specialized higher-order terms. Our main algorithmic contribution is an extension of α-expansion that also optimizes “label costs” with well-characterized optimality bounds. Label costs penalize a solution based on the set of labels that appear in it, for example by simply penalizing the number of labels in the solution.
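The objective being minimised can be written as the sum of unary, pairwise, and label-cost terms; the sketch below evaluates such an energy on a toy labelling. The Potts pairwise term and the uniform per-label cost are assumptions for illustration, and the α-expansion optimisation itself is not shown.

    # Illustrative sketch: evaluating an energy with unary, pairwise (Potts),
    # and label-cost terms, the objective the extended alpha-expansion
    # optimises. The toy instance is assumed.
    def energy(labels, unary, edges, pairwise_weight, label_cost):
        e = sum(unary[i][labels[i]] for i in range(len(labels)))          # unary
        e += sum(pairwise_weight for i, j in edges if labels[i] != labels[j])
        e += label_cost * len(set(labels))                                # label costs
        return e

    unary = [[0, 3], [2, 1], [3, 0]]   # cost of assigning label 0/1 to each site
    edges = [(0, 1), (1, 2)]           # neighbouring sites
    print(energy([0, 1, 1], unary, edges, pairwise_weight=1, label_cost=2))
    # = (0 + 1 + 0) + 1 + 2*2 = 6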

18.
BURS theory provides a powerful mechanism to efficiently generate pattern matches in a given expression tree. BURS, which stands for bottom-up rewrite system, is based on term rewrite systems to which costs are added. We formalise the underlying theory and derive an algorithm that computes all pattern matches. This algorithm terminates if the term rewrite system is finite. We couple this algorithm with the well-known search algorithm A* that carries out pattern selection. The search algorithm is directed by a cost heuristic that estimates the minimum cost of code that has yet to be generated. The advantage of using a search algorithm is that we need to compute only those costs that may be part of an optimal rewrite sequence (and not the costs of all possible rewrite sequences, as in dynamic programming). A system that implements the algorithms presented in this work has been built. Received: 20 November 1995 / 26 June 1996

19.
This research explores the interaction of textual and photographic information in image understanding. Specifically, it presents a computational model whereby textual captions are used as collateral information in the interpretation of the corresponding photographs. The final understanding of the picture and caption reflects a consolidation of the information obtained from the two sources and can thus be used in intelligent information retrieval tasks. The problem of building a general-purpose computer vision system without a priori knowledge is very difficult at best. The concept of using collateral information in scene understanding has been explored in systems that use general scene context in the task of object identification. The work described here extends this notion by incorporating picture-specific information. A multi-stage system, PICTION, which uses captions to identify humans in an accompanying photograph, is described. This provides a computationally less expensive alternative to traditional methods of face recognition. A key component of the system is the utilisation of spatial and characteristic constraints (derived from the caption) in labeling face candidates (generated by a face locator). This work was supported in part by ARPA Contract 93-F148900-000. I would like to thank William Rapaport for serving as my advisor in my doctoral work; Venu Govindaraju for his work on the face locator; and more recently, Rajiv Chopra, Debra Burhans and Toshio Morita for their work on the new implementation of PICTION, as well as for valuable feedback.

20.
The multiprocessor scheduling scheme NPS-F for sporadic tasks has a high utilisation bound and an overall number of preemptions bounded at design time. NPS-F bin-packs tasks offline to as many servers as needed. At runtime, the scheduler ensures that each server is mapped to at most one of the m processors at any instant. When scheduled, servers use EDF to select which of their tasks to run. Yet, unlike the overall number of preemptions, the migrations per se are not tightly bounded. Moreover, we cannot know a priori which task a server will be executing at the instant when it migrates. This uncertainty complicates the estimation of cache-related preemption and migration costs (CPMD), potentially resulting in their overestimation. Therefore, to simplify the CPMD estimation, we propose an amended bin-packing scheme for NPS-F allowing us (i) to identify at design time which task migrates at which instant and (ii) to bound a priori the number of migrating tasks, while preserving the utilisation bound of NPS-F.
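The offline packing step can be illustrated with a plain first-fit heuristic over task utilisations, sketched below. The utilisation values and the heuristic itself are assumptions for illustration; NPS-F's amended packing rule, which pins down migration points, is not reproduced here.

    # Illustrative sketch: first-fit bin-packing of task utilisations onto
    # servers, the flavour of offline step NPS-F performs. Values and the
    # plain heuristic are assumed.
    def first_fit(utilisations, capacity=1.0):
        servers = []                       # each server holds a list of tasks
        for u in utilisations:
            for srv in servers:
                if sum(srv) + u <= capacity:
                    srv.append(u)
                    break
            else:
                servers.append([u])        # open a new server
        return servers

    print(first_fit([0.6, 0.5, 0.4, 0.3, 0.2]))
    # [[0.6, 0.4], [0.5, 0.3, 0.2]]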

