20 similar documents found.
1.
We present a novel logic-based framework to automate multi-issue bilateral negotiation in e-commerce settings. The approach exploits logic as the communication language among agents, and optimization techniques in order to find Pareto-efficient agreements. We introduce a propositional logic extended with concrete domains, which allows one to model relations among issues (both numerical and non-numerical ones) via logical entailment, unlike well-known approaches that describe issues as uncorrelated. Through this logic it is possible to represent the buyer's request, the seller's supply, and their respective preferences as formulas endowed with a formal semantics, e.g., "if I spend more than 30000 € for a sedan then I want more than a two-year warranty and a GPS system included". We mix logic and utility theory in order to express preferences in both a qualitative and a quantitative way. We illustrate the theoretical framework, the logical language, and the one-shot negotiation protocol we adopt, and show that we are able to compute Pareto-efficient outcomes, using a mediator to solve an optimization problem. We prove the computational adequacy of our method by studying the complexity of the problem of finding Pareto-efficient solutions in our setting.
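The abstract above describes a mediator computing Pareto-efficient outcomes from the agents' preferences. As an illustrative aside, and not the paper's logic-based formulation, the minimal Python sketch below filters the Pareto-efficient agreements out of a finite candidate set, assuming each agent's preferences are given as a plain numeric utility function; the agreements and utilities are hypothetical.

```python
# Illustrative sketch only: Pareto-efficiency over a finite set of candidate
# agreements, assuming plain numeric utilities rather than the paper's
# logic-based preference representation.

def pareto_efficient(candidates, utilities):
    """Return the candidates not Pareto-dominated by any other candidate.

    candidates: list of agreements (any description).
    utilities:  list of functions, one per agent, mapping agreement -> float.
    """
    def dominates(a, b):
        scores_a = [u(a) for u in utilities]
        scores_b = [u(b) for u in utilities]
        return (all(x >= y for x, y in zip(scores_a, scores_b))
                and any(x > y for x, y in zip(scores_a, scores_b)))

    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates)]

# Hypothetical example: agreements described by (price, warranty_years).
agreements = [(28000, 2), (31000, 3), (31000, 2), (35000, 4)]
buyer_u  = lambda a: -a[0] + 2000 * a[1]   # buyer: low price, long warranty
seller_u = lambda a:  a[0] -  500 * a[1]   # seller: high price, short warranty

print(pareto_efficient(agreements, [buyer_u, seller_u]))
```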
2.
An automated negotiator is an intelligent agent whose task is to reach the best possible agreement. We explore a novel approach to developing a negotiation strategy, a 'domain-based approach'. Specifically, we use two domain parameters, reservation value and discount factor, to cluster the domain into different regions, in each of which we employ a heuristic strategy based on the notions of temporal flexibility and bargaining strength. Following the presentation of our cognitive and formal models, we show in an extensive experimental study that an agent based on this approach won against the top agents of the 2012 and 2013 automated negotiation competitions and took second place in 2014.
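As an illustrative aside on the 'domain-based approach' described above, here is a minimal sketch of clustering a negotiation domain by its reservation value and discount factor. The thresholds and region labels are invented for the example and are not taken from the paper.

```python
# Illustrative only: partition negotiation domains into regions using the two
# domain parameters mentioned in the abstract. Thresholds and region names are
# invented for the sake of the example.

def classify_domain(reservation_value, discount_factor,
                    rv_threshold=0.5, df_threshold=0.5):
    """Map a (reservation value, discount factor) pair to a coarse region."""
    high_rv = reservation_value >= rv_threshold
    high_df = discount_factor >= df_threshold
    if high_rv and high_df:
        return "patient-and-safe"      # strong fallback, little time pressure
    if high_rv and not high_df:
        return "safe-but-hurried"      # strong fallback, heavy discounting
    if not high_rv and high_df:
        return "exposed-but-patient"   # weak fallback, little time pressure
    return "exposed-and-hurried"       # weak fallback, heavy discounting

print(classify_domain(reservation_value=0.7, discount_factor=0.3))
```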
3.
In this digital day and age, we are becoming increasingly dependent on multimedia content, especially digital images and videos, to provide reliable proof of the occurrence of events. However, the availability of several sophisticated yet easy-to-use content editing software tools has led to great concern regarding the trustworthiness of such content. Consequently, over the past few years, visual media forensics has emerged as an indispensable research field, which deals with the development of tools and techniques that help determine whether or not the digital content under consideration is authentic, i.e., an actual, unaltered representation of reality. Over the last two decades, this research field has demonstrated tremendous growth and innovation. This paper presents a comprehensive and scrutinizing bibliography of the published literature in the field of passive-blind video content authentication, with primary focus on forgery/tamper detection, video re-capture and phylogeny detection, and video anti-forensics and counter anti-forensics. Moreover, the paper closely analyzes the research gaps found in the literature, provides insight into the areas where contemporary research is lacking, and suggests certain courses of action that could assist developers and future researchers in exploring new avenues in the domain of video forensics. Our objective is to provide an overview suitable both for researchers and practitioners already working in the field of digital video forensics, and for those researchers and general enthusiasts who are new to this field and are not yet equipped to assimilate its detailed and complicated technical aspects.
5.
Multimedia Tools and Applications - Real-time detection of humans is an evolutionary research topic. It is an essential and prominent component of various vision based applications. Detection of...
6.
Explosive growth of big data demands efficient and fast algorithms for nearest neighbor search. Deep learning-based hashing methods have proved their efficacy in learning advanced hash functions suited to nearest neighbor search in large image data-sets. In this work, we present a comprehensive review of the deep learning-based supervised hashing methods proposed to date for image data-sets. We categorize prior works into a five-tier taxonomy based on: (i) the design of the network architecture, (ii) the training strategy based on the nature of the data-set, (iii) the type of loss function, (iv) the similarity measure and, (v) the nature of quantization. Further, the data-sets used in prior works are reported and compared with respect to the challenges posed by the characteristics of their images. Lastly, future directions such as incremental hashing, cross-modality hashing, and guidelines to improve the design of hash functions are discussed. Based on our comparative review, we observe that generative adversarial network-based hashing models outperform other methods, since they leverage more data in the form of both real-world and synthetically generated samples. Furthermore, triplet-loss-based loss functions learn better discriminative representations by pushing similar patterns together and dissimilar patterns away from each other. This study and its observations should be useful for researchers and practitioners working in this emerging research field.
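The review above notes that triplet-loss-based objectives pull similar patterns together and push dissimilar ones apart. The following is a generic, textbook-style sketch of a triplet margin loss over continuous codes (before binarization); it is not any specific surveyed method, and the example vectors are made up.

```python
import numpy as np

# Generic triplet margin loss over continuous codes, for illustration only.

def triplet_loss(anchor, positive, negative, margin=1.0):
    """max(0, d(a, p) - d(a, n) + margin) with squared Euclidean distance."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.9, -0.8, 0.7, -0.6])   # anchor code
p = np.array([0.8, -0.9, 0.6, -0.7])   # code of a similar image
n = np.array([-0.9, 0.8, -0.7, 0.6])   # code of a dissimilar image
print(triplet_loss(a, p, n))           # 0.0 here: this triplet already satisfies the margin
```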
7.
Visual modeling languages such as the Business Process Model and Notation and the Unified Modeling Language are widely used in industry and academia for the analysis and design of information systems. Such modeling languages are usually introduced in overarching specifications which are maintained by standardization institutions such as the Object Management Group or the Open Group. Being the primary – often the single – source of information, such specifications are of paramount importance for modelers, researchers, and tool vendors. However, structure, content, and specification techniques of such documents have never been systematically analyzed. This paper addresses this gap by reporting on a Systematic Literature Review aimed to analyze published standard modeling language specifications. In total, eleven specifications were found and comprehensively analyzed. The survey reveals heterogeneity in: (i) the modeling language concepts being specified, and (ii) the techniques being employed for the specification of these concepts. The identified specification techniques are analyzed and presented by referring to their utilization in the specifications. This survey provides a foundation for research aiming to increase consistency and improve comprehensiveness of information systems modeling languages.
8.
A major characteristic regarding developments in the broad field of artificial intelligence (AI) during the 1990s has been an increasing integration of AI with other disciplines. A number of other computer science fields and technologies have been used in developing intelligent systems, starting from traditional information systems and databases, to modern distributed systems and the Internet. This paper surveys the knowledge modeling techniques that have received most attention in recent years among developers of intelligent systems, AI practitioners and researchers. The techniques are described from two perspectives, theoretical and practical. Hence the first part of the paper presents major theoretical and architectural concepts, design approaches, and research issues. The second part deals with several practical systems, applications, and ongoing projects that use and implement the techniques described in the first part.
9.
Super-resolution, the process of obtaining one or more high-resolution images from one or more low-resolution observations, has been a very attractive research topic over the last two decades. It has found practical applications in many real-world problems in different fields, from satellite and aerial imaging to medical image processing, facial image analysis, text image analysis, sign and number plate reading, and biometric recognition, to name a few. This has resulted in many research papers, each developing a new super-resolution algorithm for a specific purpose. The present comprehensive survey provides an overview of most of these published works by grouping them into a broad taxonomy. For each group in the taxonomy, the basic concepts of the algorithms are first explained, and then the path through which each group has evolved is traced in detail, by mentioning the contributions of different authors to the basic concepts of each group. Furthermore, common issues in super-resolution algorithms, such as imaging models and registration algorithms, optimization of the cost functions employed, dealing with color information, improvement factors, assessment of super-resolution algorithms, and the most commonly employed databases, are discussed.
10.
As 3-D modeling applications transition from engineering environments into the hands of artists, designers, and the consumer market, there is an increasing demand for more intuitive interfaces. In response, the 3-D modeling and interface design communities have begun to develop systems based on traditional artistic techniques, particularly sketching. Collectively, this growing field of research has come to be known as sketch-based modeling; however, the name belies a diversity of promising techniques and unique approaches. This paper presents a survey of current research in sketch-based modeling, including a basic introduction to the topic, the challenges of sketch-based input, and an examination of a number of popular approaches, with representative examples and a general analysis of the benefits and challenges inherent to each.
11.
To address the security problems raised by cloud computing environments, and building on current research on cloud security models, this paper analyzes the security of a layered cloud service framework model. Taking the characteristics of the cloud computing environment into account, and aiming to guarantee data security without degrading the quality of cloud services, a cloud security access control model, ACCP, is established. Using an automated trust negotiation mechanism, the model does not rely on the data center's third-party security services; instead, it adaptively builds composite security domains through the exchange of the two parties' credential sets and policy-driven control. Trust negotiation processes in two scenarios, between user and service and between composite services, demonstrate the feasibility and effectiveness of the model.
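As an illustrative aside on the automated trust negotiation mechanism mentioned above, here is a minimal sketch of an iterative credential-disclosure loop, where each party releases a credential once its release policy is satisfied by what the peer has already disclosed. The credential names and policies are hypothetical; this is not the ACCP model itself.

```python
# Illustrative sketch of an automated trust negotiation loop, loosely inspired
# by the credential/policy exchange described above. Not the ACCP model.

def negotiate(policies_a, policies_b, target, max_rounds=10):
    """policies_*: dict credential -> set of the peer's credentials required
    before that credential may be disclosed. Returns True if `target`
    (a credential guarded by party B) is eventually disclosed."""
    disclosed_a, disclosed_b = set(), set()
    for _ in range(max_rounds):
        new_a = {c for c, req in policies_a.items()
                 if req <= disclosed_b and c not in disclosed_a}
        new_b = {c for c, req in policies_b.items()
                 if req <= disclosed_a and c not in disclosed_b}
        if not new_a and not new_b:
            break
        disclosed_a |= new_a
        disclosed_b |= new_b
        if target in disclosed_b:
            return True
    return target in disclosed_b

# Hypothetical scenario: the user must show an identity credential before the
# service releases access; the identity credential itself is freely disclosed.
user_policies    = {"identity_cert": set()}
service_policies = {"service_access": {"identity_cert"}}
print(negotiate(user_policies, service_policies, "service_access"))
```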
12.
The ability of agents to learn is of growing importance in multi-agent systems. It is considered essential to improve the quality of peer-to-peer negotiation in these systems. This paper reviews various aspects of agent learning, and presents the particular learning approach, Bayesian learning, adopted in the MASCOT system (multi-agent system for construction claims negotiation). The core objective of the MASCOT system is to facilitate construction claims negotiation among different project participants. Agent learning is an integral part of the negotiation mechanism. The paper demonstrates that the ability to learn greatly enhances agents' negotiation power and speeds up the rate of convergence between agents. In this case, learning is essential for the success of peer-to-peer agent negotiation systems.
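As an illustrative aside on the Bayesian learning used for negotiation above, here is a minimal sketch of a discrete Bayesian update of beliefs about an opponent's hidden reservation price. The hypotheses, prior, and likelihood model are invented for the example and are not taken from the MASCOT system.

```python
# Illustrative only: discrete Bayesian update of beliefs about an opponent's
# reservation price from an observed offer.

def bayes_update(prior, likelihood, observation):
    """prior: dict hypothesis -> probability; likelihood(obs, h) -> P(obs | h)."""
    unnormalized = {h: p * likelihood(observation, h) for h, p in prior.items()}
    z = sum(unnormalized.values())
    return {h: v / z for h, v in unnormalized.items()}

# Hypotheses about the opponent's (hidden) reservation price.
prior = {90: 0.25, 100: 0.25, 110: 0.25, 120: 0.25}

def likelihood(offer, reservation):
    # Crude model: offers well below the reservation price are unlikely.
    return 0.9 if offer >= reservation else 0.1

posterior = bayes_update(prior, likelihood, observation=105)
print(posterior)   # mass shifts toward reservation prices at or below 105
```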
13.
In geometric modeling, a model is built by specifying relations between geometric entities, which are to be maintained by the modeling system. Many relations can be specified declaratively as geometric constraints on these entities. Constraint satisfaction techniques are used for validation of the geometric model. This article presents an overview of general constraint satisfaction techniques, both for finite domain and infinite domain constraint satisfaction problems. Specific satisfaction techniques for geometric constraints get special attention. Furthermore, the article presents concepts from constraint programming, concerning the integration of constraint specification and programming languages.
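As an illustrative aside on finite-domain constraint satisfaction applied to geometry: the brute-force search below stands in for the propagation and decomposition techniques the article surveys. The grid, entities, and distance constraints are made up for the example.

```python
from itertools import product

# Illustrative only: solve a tiny geometric constraint problem by enumerating
# a finite domain (a 4x4 integer grid) and keeping consistent assignments.

grid = [(x, y) for x, y in product(range(4), repeat=2)]

def squared_distance(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

# Constraints: A is fixed at the origin, |AB|^2 = 4 and |BC|^2 = 1.
solutions = [
    (a, b, c)
    for a in [(0, 0)]
    for b in grid
    for c in grid
    if squared_distance(a, b) == 4 and squared_distance(b, c) == 1
]
print(len(solutions), solutions[0])
```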
14.
International Journal of Information Security - The number of people using mobile devices is increasing as mobile devices offer different features and services. Many mobile users install various...
15.
The bloom of electronic commerce has changed the whole outlook of traditional trading behavior. Through the Internet, different business entities can now easily interact with each other and complete their transactions in minimal time. However, advanced hardware for the Internet infrastructure and attractive Web sites are not the only decisive factors for a successful business in the electronic market. To enable transactions, the problems that arise during the complicated activities of electronic commerce have to be resolved. This paper presents an agent-based system to support the two types of activities most related to the decision-making process in Internet commerce: the first part of our system provides interactive recommendation, and the second part automated negotiation. Experiments have been conducted to evaluate their respective performance, and the results show the efficiency of the proposed system and its potential for the future electronic market.
16.
Learning classifier systems (LCSs) are rule-based systems that automatically build their ruleset. At the origin of Holland's work, LCSs were seen as a model of the emergence of cognitive abilities thanks to adaptive mechanisms, particularly evolutionary processes. After a renewal of the field more focused on learning, LCSs are now considered as sequential decision problem-solving systems endowed with a generalization property. Indeed, from a Reinforcement Learning point of view, LCSs can be seen as learning systems building a compact representation of their problem thanks to generalization. More recently, LCSs have proved efficient at solving automatic classification tasks. The aim of the present contribution is to describe the state-of-the-art of LCSs, emphasizing recent developments, and focusing more on the sequential decision domain than on automatic classification.
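As an illustrative aside on the generalization property mentioned above: in many LCSs a rule condition is a string over {0, 1, #}, where '#' is a don't-care symbol, so a single rule covers a whole region of the state space. The rules, state, and predictions below are invented for the example.

```python
# Illustrative only: ternary-alphabet rule matching, the classic way LCSs
# obtain a compact (generalized) representation of the state space.

def matches(condition, state):
    """A '#' (don't care) position matches either bit."""
    return all(c == "#" or c == s for c, s in zip(condition, state))

rules = [
    {"condition": "10##", "action": "turn_left",  "prediction": 0.7},
    {"condition": "##11", "action": "move_ahead", "prediction": 0.9},
]

state = "1011"
match_set = [r for r in rules if matches(r["condition"], state)]
best = max(match_set, key=lambda r: r["prediction"])
print([r["action"] for r in match_set], "->", best["action"])
```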
17.
In recent times, artificial intelligence (AI) has been used to create complex automated control systems, yet researchers are still trying to build a completely autonomous system that resembles human beings. Researchers working in AI see a strong connection between human learning patterns and AI, and have observed that machine learning (ML) algorithms can effectively produce self-learning systems. Among ML algorithms, a sub-field of AI, reinforcement learning (RL) is the methodology that most closely resembles the learning mechanism of the human brain; RL must therefore take a key role in the creation of autonomous robotic systems. In recent years, RL has been applied to many robotic platforms, such as air-based, underwater, and land-based systems, with considerable success on complex tasks. In this paper, a brief overview of the application of reinforcement learning algorithms in robotic science is presented. The survey offers a comprehensive review organized into segments: (1) the development of RL; (2) types of RL algorithms, such as actor-critic, deep RL, multi-agent RL, and human-centered algorithms; (3) applications of RL in robotics grouped by usage platform (land-based, water-based, and air-based); and (4) the RL algorithms and mechanisms used in robotic applications. Finally, an open discussion is provided that raises a range of future research directions in robotics. The objective of this survey is to provide a guidance point for future research in a more meaningful direction.
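As an illustrative aside on the family of RL algorithms the survey covers, here is a minimal tabular Q-learning sketch on a toy one-dimensional "robot" world (three states, move left or right, reward at the right end). The environment and hyperparameters are invented for the example.

```python
import random

# Illustrative only: tabular Q-learning, the simplest member of the RL family,
# on a toy 3-state line world with a reward at the rightmost state.

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
ACTIONS = [-1, +1]                      # move left / move right
q = {(s, a): 0.0 for s in range(3) for a in ACTIONS}

def step(state, action):
    next_state = min(2, max(0, state + action))
    reward = 1.0 if next_state == 2 else 0.0
    return next_state, reward

state = 0
for _ in range(500):
    action = (random.choice(ACTIONS) if random.random() < EPSILON
              else max(ACTIONS, key=lambda a: q[(state, a)]))
    next_state, reward = step(state, action)
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
    state = 0 if next_state == 2 else next_state   # reset the episode at the goal

print({k: round(v, 2) for k, v in q.items()})
```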
18.
Artificial Intelligence Review - Multi-Focus Image Fusion (MFIF) is a method that combines two or more source images to obtain a single image which is focused, has improved quality and more...
19.
Multimedia Tools and Applications - This article presents a detailed discussion of different prospects of digital image watermarking. This discussion of watermarking included: brief comparison of...
20.
Artificial Intelligence Review - The Web is a source of information for Location-Based Service (LBS) applications. These applications lack postal addresses for the user’s Point of Interests...