Similar Documents
A total of 20 similar documents were found (search time: 461 ms).
1.
In this paper we propose a fundamental approach to performing the class of Range and Nearest Neighbor (NN) queries, the core class of spatial queries used in location-based services, without revealing any location information about the query, in order to preserve users' location privacy. The idea behind our approach is to utilize the power of one-way transformations to map the space of all objects and queries to another space and to resolve spatial queries blindly in the transformed space. Traditional encryption-based techniques, solutions based on the theory of private information retrieval, and the recently proposed anonymity- and cloaking-based approaches cannot provide stringent privacy guarantees without incurring costly computation and/or communication overhead. In contrast, we propose efficient algorithms to evaluate KNN and range queries privately in the Hilbert-transformed space. We also propose a dual-curve query resolution technique which further reduces the costs of performing range and KNN queries compared with using a single Hilbert curve. We experimentally evaluate the performance of our proposed range and KNN query processing techniques and verify the strong level of privacy achieved with acceptable computation and communication overhead.
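To illustrate the general idea of resolving spatial queries in a transformed one-dimensional space, the sketch below maps 2D points to Hilbert-curve indices and answers a kNN query approximately by comparing those indices. This is a generic illustration under simplifying assumptions, not the authors' protocol: the curve parameters that would serve as the secret transformation key are fixed, and the dual-curve refinement is omitted.

```python
# Generic sketch: approximate kNN by comparing one-dimensional Hilbert-curve indices
# instead of two-dimensional coordinates (a simplified stand-in for the paper's scheme).

def xy2d(n: int, x: int, y: int) -> int:
    """Map grid cell (x, y) on an n x n grid (n a power of two) to its Hilbert-curve index."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:            # rotate/flip the quadrant so the curve stays continuous
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

def knn_hilbert(points, query, k, n=256):
    """Approximate kNN: return the k points whose Hilbert indices are closest to the query's."""
    qd = xy2d(n, *query)
    return sorted(points, key=lambda p: abs(xy2d(n, *p) - qd))[:k]

pts = [(10, 12), (200, 40), (33, 77), (90, 90), (11, 13)]
# The two nearby points are returned; their order depends on their curve indices.
print(knn_hilbert(pts, query=(12, 12), k=2))
```

Because the space-filling curve preserves locality only approximately, a real system would retrieve a candidate segment of the curve and refine it, which is part of what the paper's dual-curve technique addresses.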

2.
ABSTRACT

In cluster-based key management techniques, the details of the mobile nodes are always gathered before the nodes join or the clustering process starts, which produces congestion and additional overhead. In this paper, to reduce the overhead and congestion at a cluster head, we propose a predictive clustering technique for effective key management. The predictive technique predicts node movement and proactively sends information in cases of cluster movement. The combined metric for prediction is estimated from the route expiration time and the node velocity. In the key management technique, each cluster head retains the public keys of its member nodes only and acts as a router when dealing with members of other clusters. Using this technique, the overhead of centralized key management schemes is reduced. Moreover, the need for each node to store all public keys is eliminated, thus minimizing the storage overhead on each node. Simulation results show that the proposed scheme is more efficient in minimizing overhead and congestion.
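A minimal sketch of how a combined prediction metric could be formed from route expiration time and node velocity. The weighting, normalization bounds, and threshold below are hypothetical illustrations, not values taken from the paper.

```python
# Hypothetical sketch of a combined movement-prediction metric for proactive cluster handoff.
# Assumption: a short route-expiration time (RET) and a high velocity both suggest that the
# node is about to leave its cluster; the weight w and the threshold are illustrative only.

def movement_score(route_expiration_time_s: float, velocity_mps: float,
                   max_ret_s: float = 60.0, max_velocity_mps: float = 30.0,
                   w: float = 0.5) -> float:
    """Return a score in [0, 1]; higher means the node is more likely to leave soon."""
    ret_term = 1.0 - min(route_expiration_time_s / max_ret_s, 1.0)   # short RET -> likely to leave
    vel_term = min(velocity_mps / max_velocity_mps, 1.0)             # fast node -> likely to leave
    return w * ret_term + (1.0 - w) * vel_term

def should_send_proactive_update(score: float, threshold: float = 0.7) -> bool:
    """Cluster head proactively forwards the member's key material if departure looks imminent."""
    return score >= threshold

# Example: a fast node whose route expires soon triggers a proactive update.
s = movement_score(route_expiration_time_s=5.0, velocity_mps=25.0)
print(round(s, 2), should_send_proactive_update(s))
```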

3.
Purpose: The purpose of this paper is to investigate the impact of Supply Chain Information Integration (SCII) on the operational performance of manufacturing firms in Malaysia, considering the role of information leakage.
Design/methodology/approach: To test the model developed, we conducted an online questionnaire survey with Malaysian manufacturing companies drawn from the Federation of Malaysian Manufacturers directory of 2018. Out of the 400 questionnaires sent out to the manufacturing companies, 144 useable responses were received, giving a response rate of 36%. The data were analyzed using SmartPLS, a second-generation statistical tool.
Findings: The findings of this study showed that information quality, information security, and information technology (IT) had a positive effect on SCII with an explanatory power of 47.2%, while SCII, in turn, had a positive effect on operational performance, explaining 17% of the variance. Intentional information leakage (IIL) moderated the relationship between SCII and operational performance, whereas accidental information leakage did not moderate the same relationship.
Practical implications: This study provides insights into difficulties faced when implementing SCII, particularly by medium and large manufacturing companies in Malaysia. It helps identify appropriate strategies that can guide management in its effort to improve performance through SCII.
Originality/value: This research is arguably the first study that simultaneously investigates the effect of information quality, IT, and information security on SCII and the moderating effect of information leakage on the relationship between SCII and operational performance. The results of this study indicate that information security has the largest impact on SCII, followed by IT and information quality. Furthermore, IIL, as a negative aspect of information integration, may weaken the relationship between SCII and operational performance.
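For readers unfamiliar with moderation effects, the sketch below shows how such an effect is commonly estimated with an interaction term. It is illustrative only: the paper uses SmartPLS (partial least squares SEM) on survey data, not ordinary least squares on synthetic data, and the variable names are placeholders for the constructs described above.

```python
# Illustrative only: estimating a moderation (interaction) effect with OLS on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 144                                   # sample size matching the survey
scii = rng.normal(size=n)                 # supply chain information integration (placeholder)
iil = rng.normal(size=n)                  # intentional information leakage (moderator, placeholder)
# Synthetic outcome: IIL weakens the SCII -> performance effect via a negative interaction.
performance = 0.4 * scii - 0.2 * iil - 0.3 * scii * iil + rng.normal(scale=0.5, size=n)

X = sm.add_constant(np.column_stack([scii, iil, scii * iil]))
model = sm.OLS(performance, X).fit()
print(model.params)                       # the last coefficient estimates the interaction effect
```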

4.
Geographic routing protocols use location information when they need to route packets. Meanwhile, location information is maintained by location-based services provided by network nodes in a distributed manner. Routing and location services are closely related but are used separately; consequently, the overhead of the location-based service is not considered when the geographic routing overhead is evaluated. Our aim is to combine routing protocols with location-based services in order to reduce communication establishment latency and routing overhead.

5.
ABSTRACT

Cloud computing is a new IT delivery paradigm that offers computing resources as on-demand services over the Internet. Like all forms of outsourcing, cloud computing raises serious concerns about the security of the data assets that are outsourced to providers of cloud services. To address these security concerns, we show how today's generation of information security management systems (ISMSs), as specified in ISO/IEC 27001:2005, must be extended to address the transfer of security controls into cloud environments. The resulting virtual ISMS is a standards-compliant management approach for developing a sound control environment while supporting the various modalities of cloud computing.

This article addresses chief security and/or information officers of cloud client and cloud provider organizations. Cloud clients will benefit from our exposition of how to manage risk when corporate assets are outsourced to cloud providers. Providers of cloud services will learn what processes and controls they can offer in order to provide superior security that differentiates their offerings in the market.

6.
Much recent research has focused on applying Autonomic Computing principles to achieve constrained self-management in adaptive systems, through self-monitoring and analysis, strategy planning, and self-adjustment. However, in a highly distributed system, even monitoring current operation and context is a complex and largely unsolved problem. This difficulty is particularly evident in the areas of network management, pervasive computing, and autonomic communications. This paper presents a model for the filtered dissemination of semantically enriched knowledge over a large, loosely coupled network of distributed heterogeneous autonomic agents, removing the need to bind explicitly to all of the potential sources of that knowledge. The paper presents an implementation of such a knowledge delivery service, which enables the efficient routing of distributed heterogeneous knowledge to, and only to, nodes that have expressed an interest in that knowledge. This gathered knowledge can then be used as the operational or context information needed to analyze the system's behavior as part of an autonomic control loop. As a case study the paper focuses on contextual knowledge distribution for autonomic network management. A comparative evaluation of the performance of the knowledge delivery service is also provided. John Keeney holds a BAI degree in Computer Engineering and a PhD in Computer Science from Trinity College Dublin. His primary interests are in controlling autonomic adaptable systems, particularly when those systems are distributed. David Lewis graduated in Electronics Engineering from the University of Southampton and gained his PhD in Computer Science from University College London. His areas of interest include integrated network and service management, distributed system engineering, adaptive and autonomic systems, semantic services and pervasive computing. Declan O'Sullivan was awarded his primary degree, MSc and PhD in Computer Science from Trinity College Dublin. He has a particular interest in the issues of semantic interoperability and heterogeneous information querying within a range of areas, primarily network and service management, autonomic management, and pervasive computing.
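The core mechanism, routing knowledge only to nodes that have registered an interest in it, resembles content-based publish/subscribe. The sketch below is a minimal single-broker illustration under that assumption; the class and topic names are hypothetical, and the paper's distributed, semantically enriched filtering is not reproduced.

```python
# Generic sketch of interest-filtered knowledge dissemination: a simple in-memory
# content-based publish/subscribe broker. Names and matching rules are illustrative only.
from collections import defaultdict
from typing import Callable, Dict, List

class KnowledgeBroker:
    """Delivers a knowledge item only to nodes that registered a matching interest."""

    def __init__(self) -> None:
        self._interests: Dict[str, List[Callable[[dict], bool]]] = defaultdict(list)

    def subscribe(self, node_id: str, predicate: Callable[[dict], bool]) -> None:
        """Register an interest expressed as a predicate over knowledge items."""
        self._interests[node_id].append(predicate)

    def publish(self, item: dict) -> List[str]:
        """Return the node ids the item was delivered to."""
        return [node for node, preds in self._interests.items()
                if any(p(item) for p in preds)]

broker = KnowledgeBroker()
broker.subscribe("nms-1", lambda k: k.get("type") == "link-utilisation" and k.get("value", 0) > 0.9)
broker.subscribe("nms-2", lambda k: k.get("region") == "edge")
print(broker.publish({"type": "link-utilisation", "value": 0.95, "region": "core"}))  # ['nms-1']
```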

7.
孟凡生 《控制与决策》2015,30(4):764-768

Employees are classified according to dual cost-control standards, and the enterprise and its employees are treated as players in a profit-distribution game; both are assumed to be rational actors who choose the strategies that maximize their own payoffs. Exploiting the information advantage the enterprise holds in profit distribution, a signaling game model of enterprise profit distribution under dual cost-control standards is constructed. Four possible equilibrium outcomes of the game and their conditions are derived. It is shown that the most efficient separating equilibrium arises only when employees are certain that their payoff exactly equals their expectation and the enterprise offers a high-payment scheme; the other three outcomes are not the most efficient equilibria.


8.
Abstract

This paper proposes a fair trading protocol that provides an overall solution for a trading process with offline anonymous credit card payments.

With the exploding growth of electronic commerce on the Internet, the issue of fairness [1,2] is becoming increasingly important. Fair exchange protocols have already been broadly used for applications such as electronic transactions [3,4], electronic mail [5,6], and contract signing [7]. Fairness is one of the critical issues in online transactions and related electronic payment systems. Many electronic payment systems have been proposed to provide different levels of security for financial transactions, such as iKP [8], SET [9], NetBill [10], and NetCheque [11]. In a normal electronic commerce transaction, there is always a payer and a payee who exchange money for goods or services. At least one financial institution, normally a bank, should be present in the payment system. The financial institution plays the role of issuer for the payer and the role of acquirer for the payee. An electronic payment system must enable an honest payer to convince the payee of a legitimate payment and prevent a dishonest payer from engaging in improper behavior. At the same time, additional security requirements may be addressed based on the nature of the trading processes and the trust assumptions of the system. The payer, the payee, and the financial institution have different interests, and the trust required between any two parties should be as little as possible. In electronic commerce, payment happens over an open network, such as the Internet, so the issue of fairness must be carefully addressed. There is no fairness for the involved parties in the existing popular payment protocols. One target of this article is to address the fairness issue in the credit card payment process. In the existing credit card protocols, the financial institution that provides the credit card service plays the role of an online authority and is actively involved in each payment. To avoid the involvement of financial institutions in normal transactions and to reduce running costs, some credit card-based schemes with an offline financial authority have been proposed [12]. Another target of this article is to avoid the online financial institution for the credit card service in normal transactions.

9.
The main functions of the civil-servant examination management system are the computerized management of candidate registration, verification of registration information, examination arrangement, score processing, and subsequent data processing. This paper details how an online payment platform is built on top of the system, enabling online payment of candidates' examination fees and automatic verification of registration data. This makes the system's data management truly computerized and automated, ensures the validity and accuracy of registration information, and makes the operating workflow more systematic.

10.
Assessing the economic feasibility of information systems (IS) projects and operations remains a challenge for most organizations. This research investigates lifecycle cost and benefit management practices and demonstrates that, overall, although organizations intend to improve their information technology (IT) management, they squander many opportunities to do so. There are inconsistencies in cost/benefit management practices. Most organizations that integrate operational benefits into investment analyses do not acknowledge operational costs. Planned project goals are seldom formulated in a verifiable or measurable way, there is little structured feedback on individual lifecycle activities, and the various activities are poorly co-ordinated. Thus, the attitude towards cost/benefit management appears to be primarily context-related and incident-driven. A further development of the system lifecycle-based approach is needed to improve IT cost/benefit management theory and practice, because a coherent set of methods is required to assess IT costs and benefits throughout the entire lifecycle.

11.
Abstract. We consider the problem of designing a minimum-cost access network to carry traffic from a set of endnodes to a core network. Trunks are available in K types, reflecting economies of scale: a trunk type with a high initial overhead cost has a low cost per unit bandwidth, and a trunk type with a low overhead cost has a high cost per unit bandwidth. We formulate the problem as an integer program. We first use a primal-dual approach to obtain a solution whose cost is within O(K^2) of optimal. Typically the value of K is small. This is the first combinatorial algorithm with an approximation ratio that is polynomial in K and independent of the network size and the total traffic to be carried. We also explore linear program rounding techniques and prove a better approximation ratio of O(K). Both bounds are obtained under weak assumptions on the trunk costs. Our primal-dual algorithm is motivated by the work of Jain and Vazirani on facility location [7]. Our rounding algorithm is motivated by the facility location algorithm of Shmoys et al. [12].
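To make the economies-of-scale trade-off concrete, the sketch below picks the cheapest single trunk type for a given bandwidth demand on one edge. The trunk names, capacities, and costs are hypothetical, and the cost model is deliberately simplified; this is not the paper's primal-dual or rounding algorithm.

```python
# Illustrative cost model only: a type with a higher fixed overhead per trunk has a lower
# cost per unit bandwidth, so larger demands favour the "bigger" trunk types.
import math

# (name, overhead cost per trunk, capacity per trunk, cost per unit bandwidth) -- hypothetical
TRUNK_TYPES = [
    ("T1",   10.0,   1, 8.0),
    ("T3",   50.0,  28, 3.0),
    ("OC3", 200.0, 100, 1.0),
]

def cheapest_type(demand_units: int):
    """Return (type, cost) of serving the demand with a single trunk type."""
    best = None
    for name, overhead, capacity, unit_cost in TRUNK_TYPES:
        trunks = math.ceil(demand_units / capacity)          # number of trunks needed
        cost = trunks * overhead + demand_units * unit_cost  # fixed overhead + per-unit cost
        if best is None or cost < best[1]:
            best = (name, cost)
    return best

for d in (2, 20, 120):
    print(d, cheapest_type(d))   # e.g. 2 -> T1, 20 -> T3, 120 -> OC3
```

The hard part the paper addresses is making such choices jointly across a whole network, where traffic from many endnodes can share trunks; that is what the integer program and the approximation algorithms capture.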

12.
Abstract

Risk management is the process that allows business managers to balance the operational and economic costs of protective measures and achieve gains in mission capability by protecting the business processes that support the business objectives or mission of the enterprise. Senior management must ensure that the enterprise has the capabilities needed to accomplish its mission. Most organizations have tight budgets for security. To get the best return on security spending, management needs a process for deciding where to spend.

13.
Abstract

Recovering from incidents in a timely and appropriate manner is vital to maintaining operational efficiency, controlling costs, and keeping users happy and productive. A key to quick recovery is having an incident management process in place. An analysis of the incident management process can offer insight into how effectively the process supports problem detection and isolation and systems restoration. Such an analysis can also contribute to systems and application resilience, the capacity to keep failures from dramatically affecting users.

14.
Context: As the use of Domain-Specific Modeling Languages (DSMLs) continues to gain popularity, we have developed new ways to execute DSML models. The most popular approach is to execute code resulting from a model-to-code transformation. An alternative approach is to directly execute these models using a semantic-rich execution engine, the Domain-Specific Virtual Machine (DSVM). The DSVM includes a middleware layer responsible for the delivery of services in a given domain.
Objective: We investigate an approach that performs the dynamic combination of constructs in the middleware layer of DSVMs to support the delivery of domain-specific services. This middleware should provide: (a) a model of execution (MoE) that dynamically integrates decoupled domain-specific knowledge (DSK) for service delivery, (b) runtime adaptability based on context and available resources, and (c) the same level of operational assurance as any DSVM middleware.
Method: Our approach involves (1) defining a framework that supports the dynamic combination of MoE and DSK and (2) demonstrating the applicability of our framework in the DSVM middleware for user-centric communication. We measure the overhead of our approach and provide a cost-benefit analysis factoring in its runtime adaptability, using appropriate experimentation.
Results: Our experiments show that combining the DSK and MoE for a DSVM middleware allows us to realize efficient specialization while maintaining the required operability. We also show that the overhead introduced by adaptation is not necessarily deleterious to overall performance in a domain, as it may result in more efficient operation selection.
Conclusion: The approach defined for the DSVM middleware allows for greater flexibility in service delivery while reducing the complexity of application development for the user. These benefits are achieved at the expense of increased execution times; however, this increase may be negligible depending on the domain.

15.
Abstract

Advice systems are defined as information systems which present users with both information and more subjective expert advice about complex and weakly structured domains. This paper presents a Generic Advice System Architecture (GASA) to assist in developing such systems. It describes how the architecture was used in developing the SPIRE system, whose aim is to provide advice and information that will assist the integration of students with disabilities into higher education. The paper discusses the way in which the GASA addresses key issues in the development of hypermedia advice systems, including the need to make such systems available to users with little training and limited knowledge of the domain; the requirement to support diverse information exploration strategies; the provision of purely 'point and select' access; and the minimisation of user disorientation and cognitive overhead.

16.
ABSTRACT

Care managers play a key role in coordinating care, especially for patients with chronic conditions. They use multiple health information technology (IT) applications in order to access, process, and communicate patient-related information. Using the work system model and its extension, the Systems Engineering Initiative for Patient Safety (SEIPS) model, we describe obstacles experienced by care managers in managing patient-related information. A web-based questionnaire was used to collect data from 80 care managers (61% response rate) located in clinics, hospitals, and a call center. Care managers were more likely to consider “inefficiencies in access to patient-related information” and “having to use multiple information systems” as major obstacles than “lack of computer training and support” and “inefficient use of case management software.” Care managers who reported “inefficient use of case management software” as an obstacle were more likely to report high workload. Future research should explore strategies used by care managers to address obstacles, and efforts should be targeted at improving the health information technologies used by care managers.

17.

Less-than-truckload (LTL) transportation offers fast, flexible, and relatively low-cost transportation services to shippers. In order to cope with the effects of economic recessions, the LTL industry implemented ideas such as reducing excess capacity and increasing revenues through better yield management. In this paper, we extend these initiatives beyond the reach of individual carriers and propose a collaborative framework that facilitates load exchanges to reduce operational costs. Even though collective solutions are proven to provide benefits to the participants by reducing inefficiencies from a system-wide perspective, such solutions are often not attainable in real life, as the negotiating parties seek to maximize their individual profits rather than the overall profit and are unwilling to share confidential information. Therefore, a mechanism that enables collaboration among the carriers should account for the rationality of the individual participants and should require minimal information transfer between them. With this in mind, we propose a mechanism that facilitates collaboration through a series of load exchange iterations and identifies an equilibrium among selfish carriers with limited information transfer among the participants. Our time-efficient mechanism can handle large instances with thousands of loads and provides significant benefits over the non-collaborative management of LTL networks.
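A minimal sketch of the general idea of iterative load exchange, assuming each carrier reveals only its marginal cost of serving a load: a load moves to a carrier that can serve it more cheaply, and iterations stop when no improving exchange remains. The cost numbers are hypothetical, and side payments that would split the saving between the two carriers are omitted; this is not the paper's mechanism.

```python
# Generic illustration of pairwise load exchange between self-interested carriers.

def one_exchange_round(assignment, costs):
    """Move each load to a cheaper carrier when one exists; return the number of moves."""
    moves = 0
    for load, owner in list(assignment.items()):
        own_cost = costs[load][owner]
        best_other = min(((c, k) for k, c in costs[load].items() if k != owner), default=None)
        if best_other and best_other[0] < own_cost:
            # In a full mechanism a transfer payment between own_cost and best_other[0]
            # would make both carriers strictly better off; here we only move the load.
            assignment[load] = best_other[1]
            moves += 1
    return moves

# Hypothetical marginal costs of serving each load, per carrier.
costs = {"L1": {"A": 120, "B": 70}, "L2": {"A": 40, "B": 90}, "L3": {"A": 80, "B": 60}}
assignment = {"L1": "A", "L2": "A", "L3": "B"}
while one_exchange_round(assignment, costs):
    pass
print(assignment)   # each load ends up with the cheaper carrier: {'L1': 'B', 'L2': 'A', 'L3': 'B'}
```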


18.
The problem of minimizing the costs of a health-care system is modelled as a two-person zero-sum game. The game consists of the management of the system (the first player) trying to minimize costs, while the devil (the second player) decides the number of patients during a certain time interval in such a way as to maximize the costs. Nature also plays a part in the game, as it adds a certain amount of 'fuzziness' to the manager's prediction of the devil's number of patients.

Two types of services are available, intensive and normal care. They can be either increased or decreased at each of the two time intervals considered. At the beginning of each time interval the devil chooses the number of patients, nature gives a predetermined ‘fuzziness’ to the prediction, and management decides how to change the facilities.

A cost function or payoff is defined which accounts for initial expenditures, upkeep costs, and penalties for making wrong decisions. Information collection schemes are defined for both players whenever a decision is made.

Using mixed strategies, a total of 06.000 different numbers would have to be computed and stored. This would take an excessive amount of computer time; hence, behaviour strategies are used instead. Rather than picking a strategy before the game starts, a decision is made at each time interval during the play of the game using all available information. This reduces the total number of values to be computed to 170.

The problem is solved by using fictitious play with an assumed initial solution. Using dynamic programming, this solution is updated until the desired accuracy is achieved. The results are checked by simulating the play of the game many times, using the generated behaviour strategies.
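For readers unfamiliar with fictitious play, the sketch below applies it to a small two-person zero-sum matrix game: each player repeatedly best-responds to the opponent's empirical mixture of past plays, and the empirical frequencies approach equilibrium strategies. It is a generic illustration, not the paper's health-care model with behaviour strategies and dynamic programming.

```python
# Generic sketch of fictitious play for a two-person zero-sum matrix game.
import numpy as np

def fictitious_play(payoff: np.ndarray, iterations: int = 5000):
    """payoff[i, j] = cost paid by the row player (minimizer) to the column player (maximizer)."""
    m, n = payoff.shape
    row_counts, col_counts = np.zeros(m), np.zeros(n)
    row_counts[0] += 1      # arbitrary initial plays
    col_counts[0] += 1
    for _ in range(iterations):
        # Row player minimizes expected cost against the column player's empirical mixture.
        row_br = np.argmin(payoff @ (col_counts / col_counts.sum()))
        # Column player maximizes expected cost against the row player's empirical mixture.
        col_br = np.argmax((row_counts / row_counts.sum()) @ payoff)
        row_counts[row_br] += 1
        col_counts[col_br] += 1
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

# Matching-pennies-style example; both empirical mixtures approach (0.5, 0.5).
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
print(fictitious_play(A))
```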

19.
The current representatives of Grid systems are Globus and Web services; however, they suffer from poor scalability and a single point of failure. These two factors make building an improved P2P and grid hybrid framework for resource management and task scheduling a popular research topic. This paper differs from current research in that it puts forward an Information Pool Based Grid Architecture (IPBGA), which is a genuine hybrid of P2P and grid rather than merely introducing P2P methods into grid systems for resource management. Based on virtualization, physical resources and tasks are abstracted, and the information requests from resources for tasks and the appeals from tasks for resources are elevated to information services by an information pool protocol (IPP). Thus, grid resource management and task scheduling are treated as information matching by the IPP, which is adaptive to the heterogeneous, dynamic, and distributed characteristics of a grid system. Tri-Information Center (Tri-IC) and source ranking mechanisms are presented in the IPP to improve robustness, prevent Sybil attacks, and discourage free riding. Experiments and theoretical analysis show that the IPP of the IPBGA is more efficient and robust in handling information, with lower bandwidth and processing costs.

20.
Hamlett  N. 《IT Professional》2007,9(2):34-40
Enterprises outsource IT for many reasons, such as reducing costs, shedding overhead functions that divert management attention away from the core business, and obtaining services from industry leaders specializing in the associated competencies. IT sourcing profoundly impacts the client organization's enterprise architecture. In an outsourcing scenario, IT services come from an external service provider. Moreover, the client and vendor interact through an interorganizational interface containing operational, technical, and business components. This interface involves two major modes of interaction: delivery of the IT services and the service-management framework. The interface describes all of the processes, procedures, and protocols that the client and provider use to interact. Because enterprise architectures are unique to organizations, designing this interface might involve customization. For a large, complex client enterprise, bringing the business, operational, and technical components of the interface into alignment is often a nontrivial challenge.
