Similar Documents

10 similar documents found.
1.
The main question we are going to analyze is: “What is the best way to put together an informative, comprehensive and fair comparative test of scanners?” A common procedure is to take a set of known samples and compare the detection rates. Unfortunately, such tests only check the reactive abilities of scanners, because the usual test sets consist of known viruses only. Due to the increased connectivity provided by the Internet, virus outbreaks can occur and spread much more quickly, and the viruses involved are always new. For scanners to provide protection they have to be able to proactively detect new viruses. Methods that can be used to measure scanners’ proactive capabilities (generics and heuristics) are presented and compared. We use the Monte Carlo method to analyze a common test situation. Computer simulations show that to achieve higher fairness, test sets ought to be expanded as much as possible. When the test set is small, the very process of randomly selecting samples for the test set sorts scanners into “lucky” and “unlucky” categories (we call this effect the “random pick” problem). We also discuss ranking threats as a way of sorting the threats in the test set; the ways to rank threats and the drawbacks of “Wildlist”-based ranking are discussed. This paper is an attempt to give you an insight into the limitations of comparative tests. It gives the testing bodies several recommendations on how to improve the quality of tests and make them more suitable to the current situation, and it should assist you in understanding comparative tests.

Current virus and malware threat

The current situation in computer security differs significantly from the situation we had just a few years ago. A long time ago, viruses propagated mainly via the exchange of programs on floppy disks, and it was possible to release AV updates monthly. Then file exchange via LANs and servers stepped in, and many AV companies started issuing updates via Web pages. Now we see an Internet-connected world and deep penetration of relatively insecure operating systems and applications. Updates are now predominantly delivered automatically through the Internet; to fight outbreaks it became necessary to prepare and deliver updates almost instantly and push them out very quickly. All in all, the nature of the threat and the AV scanners have changed dramatically. Are current testing methods coping with these rapid changes? The number of viruses has increased from about 5000 in 1995 to about 30000 DOS viruses and 30000 modern threats today. Modern threats include backdoor trojans, network-aware worms, mass-mailers, DDoS (distributed denial-of-service) zombies, etc. The number of AV programs has also multiplied. In the past there were only DOS scanners; now we have many operating systems in the “Wintel” space alone, as well as AV scanners for gateways, firewalls, groupware, on-line scanners, non-“Wintel” scanners and so on. We have hundreds of OS+AV combinations. It is obvious that comparing the quality of AV software has become a very complex task indeed. Fortunately, the detection capabilities of major AV products are usually embodied in their scanning engines (the virus database is meant to be part of the engine here). The engine is then plugged into many different AV products. Now, if the maximum detection capabilities of the engines are compared, we would have a reference point: no port of any engine to any product should achieve better results than the engine itself. The strength of the bare engine also indicates the strength of the research team involved.
However, it must be clear that it is possible for an AV product to have worse detection than the bare engine; the reasons would be improper engine porting, compatibility or interfacing issues, or environment limitations. Therefore we need tests to compare point products, plus tests to measure the maximum detection capabilities of the scanning engines. Currently, setting up such tests is not easy and requires a lot of brains, time and money. That is why these days decent tests are only run by the bodies that are able to find the necessary resources: big universities (free manpower and state funding) or commercial certification bodies (these take fees from AV companies who, in turn, charge their users). In general, magazines cannot do decent comparative tests on their own any more. Exceptions to this include “Virus Bulletin” and “Secure Computing”: the former is hosted by “Sophos” (a British AV company) while the latter performs fee-based certifications. Other magazines simply do not have adequate resources, so they either republish the results from big testing bodies or their test results are plain ridiculous (say, when they test a handful of threats). It also has to be said that a lot of knowledge is required to put together a decent comparative test; specialists in this area are rare and expensive. In the past, a scanner finding 100% of the viruses known one month ago was considered perfect, because most of the trouble was coming from known viruses and the speed of propagation was slow. This could easily have been tested. These days, to be perfect a scanner needs to catch the virus that will cause the next global outbreak! Can this be tested at all? To some extent, yes, but instead of testing the detection rate against known viruses, such a test should analyze scanners’ performance against unknown threats!
However, in both cases users want the answer to one and the same question: “Which AV product would protect me better?” Only the way to find the answer is different today. In this paper, when talking about comparing scanners we intentionally step away from anything but detection and cleaning issues. Things like the GUI and functionality (quarantining, scheduling, updating, etc.) are deliberately avoided because the prime goal of all scanners is to find and remove viruses, and the ability to find and clean comes down to the scanner’s core: the scanning engine and the database of virus definitions. We also do not discuss the speed of updates in an outbreak situation, but not because it is unimportant! Let us return here to our claim that most magazines’ own tests are primitive and cannot generally be trusted. Due to a lack of resources such tests are usually done on a ridiculously small set of samples (10 or 20, say). It is intuitively clear that for a small test set the results may not represent real detection rates. Let us analyze why.

“Random pick” problem for two scanners

Let us imagine a small, simple test: we have 10 viruses in total (let us call them A…J) and two scanners. We suppose both scanners are equally good and detect exactly nine viruses out of ten. Furthermore, the virus each scanner misses is not the same (see Figure 1):
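The “random pick” effect this example sets up can be illustrated with a small Monte Carlo simulation in the spirit of the paper’s experiment (a sketch, not the paper’s actual code; the trial counts and seed are arbitrary choices):

```python
import random

VIRUSES = list("ABCDEFGHIJ")          # the ten viruses from the example
MISS_A, MISS_B = "J", "A"             # each scanner misses a different virus

def trial(test_size, rng):
    """Pick a random test set and report which scanner looks better (+1, 0, -1)."""
    sample = rng.sample(VIRUSES, test_size)
    hits_a = sum(v != MISS_A for v in sample)
    hits_b = sum(v != MISS_B for v in sample)
    return (hits_a > hits_b) - (hits_a < hits_b)

def disagreement_rate(test_size, trials=20_000, seed=1):
    """Fraction of random tests in which the two equally good scanners are ranked unequally."""
    rng = random.Random(seed)
    return sum(trial(test_size, rng) != 0 for _ in range(trials)) / trials

for k in (3, 5, 10):
    print(k, round(disagreement_rate(k), 3))
```

With a test set of 3 viruses, the two equally good scanners are ranked unequally in roughly half of the random tests; only the full test set of 10 always produces the correct tie, which is exactly the argument for expanding test sets as much as possible.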

2.
John S. Gero, Gregory J. Smith. Knowledge, 2009, 22(8): 600-609
The terms “context” and “situation” are often used interchangeably, or to denote a variety of concepts. This paper aims to show that these are two different but related concepts, and it reifies their difference within the framework of design agents. The external world of an agent is described as the aggregation of all entities that the agent could possibly sense or effect; the context is the part of its external world that the agent interacts with and is aware of. The interpreted world of an agent is described in terms of the experiences of that agent, where situations are processes that direct how interactive experiences proceed. Situations determine what part of the external world is in the current context, and situations influence interaction and so influence what common ground is acquired and how.

3.
Newer approaches to modelling travel behaviour require a new approach to integrated spatial economic modelling. Travel behaviour modelling is increasingly disaggregate, econometric, dynamic, and behavioural. A fully dynamic approach to urban system modelling is described, where interactions are characterized as two agents interacting through discrete events labelled as “offer” or “accept”. This leads to a natural partition of an integrated urban model into submodels based on the category of what is being exchanged, the type of agent, and the time and place of interaction. Where prices (or price-like signals such as congested travel times) exist to stimulate supply and/or to suppress demand, the dynamic change in prices can be represented either behaviourally, as individual agents adjust their expectations in response to their personal history and the history of the modelled region, or with an “auctioneer” from micro-economic theory, who adjusts average prices. When no auctioneers are used, the modelling system can use completely continuous representations of both time and space. Two examples are shown. The first is a demonstration of a continuous-time, continuous-space transaction simulation with simple agents representing businesses and households. The second shows how an existing model, the Oregon TLUMIP project for statewide land-use and transport modelling, can be adapted to the paradigm.
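The offer/accept event mechanism described above can be sketched as a minimal continuous-time event queue; all names, prices and times here are illustrative, not from the paper:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Event:
    time: float                           # continuous time; only field used for ordering
    kind: str = field(compare=False)      # "offer" or "accept"
    payload: dict = field(compare=False)

def run(events, until=10.0):
    """Process offer/accept events in time order; an accepted offer becomes an exchange."""
    queue = list(events)
    heapq.heapify(queue)
    exchanges = []
    pending = {}                          # commodity -> outstanding offer event
    while queue and queue[0].time <= until:
        ev = heapq.heappop(queue)
        commodity = ev.payload["commodity"]
        if ev.kind == "offer":
            pending[commodity] = ev
        elif ev.kind == "accept" and commodity in pending:
            offer = pending.pop(commodity)
            exchanges.append((ev.time, commodity, offer.payload["price"]))
    return exchanges

# Hypothetical interaction: a household offers labour, a business accepts it.
events = [
    Event(1.0, "offer",  {"commodity": "labour", "price": 20.0}),
    Event(2.5, "accept", {"commodity": "labour"}),
]
print(run(events))   # [(2.5, 'labour', 20.0)]
```

Because events carry real-valued timestamps rather than tick indices, the same queue structure supports the fully continuous representation of time the abstract mentions.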

4.
5.
Lu Yong, Zuo Zhihong. Microcomputer Development (微机发展), 2007, 17(3): 172-175
Infection is one of the main characteristics of computer viruses; it not only raises a virus's survival rate but also damages and threatens computer system resources. To analyse the microscopic infection behaviour of computer viruses within a computer, a stochastic infection model of computer viruses in a single computer system is built and analysed, based on an analysis of virus infection mechanisms combined with the characteristics of current operating systems. The conclusions are: in a single-process operating system environment, the number of infections grows linearly and the infection intensity remains relatively stable; in a multi-process operating system environment, both the number of infections and the infection intensity grow exponentially. Finally, development trends in anti-virus infection technology are presented.
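The qualitative conclusion above can be illustrated with a toy deterministic sketch (not the paper's stochastic model; the infection rate and step count are arbitrary):

```python
def single_process_growth(steps, rate=1):
    """Single-process OS: only one infected program runs at a time,
    so a constant number of files is infected per step -> linear growth."""
    infected = 1
    history = [infected]
    for _ in range(steps):
        infected += rate                  # one running copy infects `rate` files per step
        history.append(infected)
    return history

def multi_process_growth(steps, rate=1):
    """Multi-process OS: every infected program can run concurrently,
    so each copy infects `rate` further files per step -> exponential growth."""
    infected = 1
    history = [infected]
    for _ in range(steps):
        infected += infected * rate
        history.append(infected)
    return history

print(single_process_growth(5))   # [1, 2, 3, 4, 5, 6]
print(multi_process_growth(5))    # [1, 2, 4, 8, 16, 32]
```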

6.
We present a system for rapidly and easily building instructable and self-adaptive software agents that retrieve and extract information. Our Wisconsin Adaptive Web Assistant (WAWA) constructs intelligent agents by accepting user preferences in the form of instructions. These user-provided instructions are compiled into neural networks that are responsible for the adaptive capabilities of an intelligent agent. The agent’s neural networks are modified via user-provided and system-constructed training examples. Users can create training examples by rating Web pages (or documents), but more importantly, WAWA’s agents use techniques from reinforcement learning to internally create their own examples. Users can also provide additional instruction throughout the life of an agent. Our experimental evaluations on a ‘home-page finder’ agent and a ‘seminar-announcement extractor’ agent illustrate the value of using instructable and adaptive agents for retrieving and extracting information.

7.
8.
This paper proposes an algorithm for the model-based design of a distributed protocol for fault detection and diagnosis in very large systems. The overall process is modeled as different Time Petri Net (TPN) models (each one modeling a local process) that interact with each other via guarded transitions, which become enabled only when certain conditions (expressed as predicates over the marking of some places) are satisfied (the guard is true). In order to use this broad class of timed DES models for fault detection and diagnosis, we derive in this paper the timing analysis of TPN models with guarded transitions. We also extend the modeling of faults, calling some transitions faulty when the operations they represent take more or less time than the prescribed time interval corresponding to their normal execution. We consider here that different local agents receive local observations as well as messages from neighboring agents. Each agent estimates the state of the part of the overall process for which it has a model and from which it observes events, by reconciling observations with model-based predictions. We design algorithms that use limited information exchange between agents and that can quickly decide questions such as “whether and where a fault occurred” and “whether or not some components of the local processes have operated correctly”. The algorithms we derive allow each local agent to generate a preliminary diagnosis prior to any communication, and we show that after communicating, the agents we design recover the global diagnosis that a centralized agent would have derived. The algorithms are component-oriented, leading to computational efficiency.

9.
We study the complexity of a multilateral negotiation framework where autonomous agents agree on a sequence of deals to exchange sets of discrete resources, in order both to further their own goals and to achieve a distribution of resources that is socially optimal. When analysing such a framework, we can distinguish different aspects of complexity: How many deals are required to reach an optimal allocation of resources? How many communicative exchanges are required to agree on one such deal? How complex a communication language do we require? And finally, how complex is the reasoning task faced by each agent? (This revised version was published online in June 2005 with corrections to the capitalization of authors’ names.)

10.
The speed and convenience of the Internet has facilitated dynamic development in electronic commerce in recent years, and e-commerce technologies and applications are widely studied by researchers. The mobile agent is considered to have high potential in e-commerce and has been attracting wide attention in recent years. A mobile agent has high autonomy and mobility; it can move unbridled between different runtime environments, carrying out assigned tasks while automatically detecting its current environment and responding accordingly. These qualities make mobile agents very suitable for use in e-commerce. The Internet, however, is an open environment, while the transfer of confidential data should be conducted only over a secure one; the security of the present Internet environment must therefore be improved. During its execution, a mobile agent needs to roam around the Internet between different servers, and it may come in contact with other mobile agents or hosts and need to interact with them. So a mobile agent might come to harm when it meets a malicious host, and the confidentiality of its data could also be compromised. To tackle the above problems, this paper proposes a security scheme for mobile agents. It is designed to ensure the safety of mobile agents on the Internet, and it also includes access control and key management to ensure security and data confidentiality. Volker and Mehrdad [R. Volker, J.S. Mehrdad, Access Control and Key Management for Mobile Agents, “Computer Graphics”, Vol. 22, No. 4, August 1998, pp. 457–461] have already proposed an access control and key management scheme for mobile agents, but it needs a large amount of space. So this paper proposes a new scheme that uses the concepts of the Chinese Remainder Theorem [F.H. Kuo, V.R.L. Shen, T.S. Chen, F. Lai, A Cryptographic Key Assignment Scheme for Dynamic Access Control in a User Hierarchy, “IEE Proceedings on Computers & Digital Techniques”, Vol. 146, No. 5, Sept. 1999, pp. 235–240; T.S. Chen, Y.F. Chung, Hierarchical Access Control Based on Chinese Remainder Theorem and Symmetric Algorithm, “Computers & Security”, Vol. 21, No. 6, 2002, pp. 565–570; U.P. Lei, S.C. Wang, A Study of the Security of Mambo et al.'s Proxy Signature Scheme Based on the Discrete Logarithm Problem, June 2004], a hierarchical structure, and a Superkey [S.G. Akl, P.D. Taylor, Cryptographic Solution to a Problem of Access Control in a Hierarchy, “ACM Transactions on Computer Systems”, Vol. 1, No. 3, August 1983, pp. 239–248]. A security and performance analysis of the proposed scheme shows that it effectively protects mobile agents.
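The CRT-based superkey idea that this line of work builds on can be sketched as follows; this is a toy illustration with hypothetical keys, moduli and node names, not the scheme proposed in the paper:

```python
from math import prod

def crt(residues, moduli):
    """Solve x ≡ r_i (mod m_i) for pairwise-coprime moduli (Chinese Remainder Theorem)."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)      # pow(..., -1, m) is the modular inverse (Python 3.8+)
    return x % M

# Hypothetical toy hierarchy: a manager agent dominates two worker agents.
# Each node holds a secret key and a public, pairwise-coprime modulus.
keys   = {"manager": 2, "worker1": 5, "worker2": 8}
moduli = {"manager": 13, "worker1": 7, "worker2": 11}

# The manager's superkey encodes its own key *and* those of its subordinates:
dominated = ["manager", "worker1", "worker2"]
superkey = crt([keys[n] for n in dominated], [moduli[n] for n in dominated])

# Any dominated node's key is recovered with a single modular reduction:
for n in dominated:
    assert superkey % moduli[n] == keys[n]
print("superkey:", superkey)
```

The space advantage of such schemes comes from each higher-level node storing one superkey instead of one key per subordinate; deriving a subordinate's key costs only one modular reduction.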
