Similar documents
20 similar documents found.
1.
The threat of cyber attacks motivates the need to monitor Internet traffic data for potentially abnormal behavior. Due to the enormous volumes of such data, statistical process monitoring tools, such as those traditionally used on data in the product manufacturing arena, are inadequate. “Exotic” data may indicate a potential attack; detecting such data requires a characterization of “typical” data. We devise some new graphical displays, including a “skyline plot,” that permit ready visual identification of unusual Internet traffic patterns in “streaming” data, and use appropriate statistical measures to help identify potential cyber attacks. These methods are illustrated on a moderate-sized data set (135,605 records) collected at George Mason University.
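A hedged sketch of the kind of streaming check such monitoring implies (illustrative only; the paper's skyline plot and statistical measures are not specified here, and the window size and threshold below are arbitrary): keep a rolling baseline of “typical” per-interval traffic counts and flag intervals that deviate sharply from it.

```python
# Illustrative only; not the paper's skyline plot. Flag "exotic" intervals in a
# stream of per-interval connection counts using a rolling mean/std baseline.
from collections import deque
import math

class StreamingFlagger:
    def __init__(self, window=100, z_threshold=4.0):
        self.window = deque(maxlen=window)
        self.z_threshold = z_threshold

    def update(self, count):
        flagged = False
        if len(self.window) >= 30:                      # need a baseline first
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = math.sqrt(var) or 1.0
            flagged = abs(count - mean) / std > self.z_threshold
        self.window.append(count)
        return flagged

flagger = StreamingFlagger()
counts = [50, 52, 48, 51, 49] * 10 + [400]              # last interval is "exotic"
print([i for i, c in enumerate(counts) if flagger.update(c)])   # [50]
```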

2.
3.
We show that the negative feedback interconnection of two causal, stable, linear time-invariant systems, with a “mixed” small gain and passivity property, is guaranteed to be finite-gain stable. This “mixed” small gain and passivity property refers to the characteristic that, at a particular frequency, systems in the feedback interconnection are either both “input and output strictly passive”; or both have “gain less than one”; or are both “input and output strictly passive” and simultaneously both have “gain less than one”. The “mixed” small gain and passivity property is described mathematically using the notion of dissipativity of systems, and finite-gain stability of the interconnection is proven via a stability result for dissipative interconnected systems.
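As a rough sketch of this dichotomy (a simplified SISO rendering with assumed notation $G_1$, $G_2$, $\varepsilon$, $\delta$; not the paper's dissipativity formulation), one may ask that at every frequency the loop satisfies either a joint strict-passivity condition or a joint small-gain condition:

\[
\forall\,\omega\in\mathbb{R}:\qquad
\bigl(\operatorname{Re} G_1(j\omega)\ \ge\ \varepsilon \ \text{ and } \ \operatorname{Re} G_2(j\omega)\ \ge\ \varepsilon\bigr)
\quad\text{or}\quad
|G_1(j\omega)|\,|G_2(j\omega)|\ \le\ 1-\delta
\]

for some $\varepsilon, \delta > 0$; finite-gain stability of the negative feedback interconnection then follows from the dissipativity-based argument the abstract refers to.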

4.
“Walkthrough” and “Jogthrough” techniques are well-known expert-based methodologies for the evaluation of user interface design. In this paper we describe the use of the “Graphical” Jogthrough method for evaluating the interface design of the Network Simulator, an educational simulation program that enables users to virtually build a computer network, install hardware and software components, make the necessary settings, and test the functionality of the network. Graphical Jogthrough is a further modification of a typical Jogthrough method, in which evaluators' ratings produce evidence in the form of a graph presenting the estimated proportion of users who use the interface effectively versus the time they need to work with it before reaching that point. We address the question: “What are the possible benefits and limitations of the Graphical Jogthrough method when applied in the case of educational software interface design?” We present the results of the evaluation session and, drawing on our experience, argue that the method can offer designers quantitative and qualitative data for a useful (though in some respects rough) estimate of the novice-to-expert pace that end users might follow when working with the evaluated interface.

5.
This paper explores the Cyber-Psychological and Cyber-Geographic aspects of hacking and hacktivism. An examination of the literature related to hackers and hacking reveals a complex nexus of spatial (including cyber-spatial such as “Notopia”) and psychological aspects of hacking, from which emerges a central question of how humans perceive and manipulate their cyber-identities. Concealing (real and cyber) identities is typical in hacking. With our progressive acculturation with identity-less and place-less modes of existence, our cyber-identities through time may be studied from within John Locke’s criterion of “memory” and the spatial-geographical criterion of identity.

6.
This study considers those questions posed by students during e-mail “tutorials” to elicit information from “guest lecturers” and the use of that information by students in their essays. The “tutorials” were conducted for students in the U.K. by a “guest lecturer” in France. The “guest lecturer” was accredited as a tutor on the module for which the students were enrolled, and participated in the module by the provision of lecture notes prior to the e-mail tutorials. Data for the study, drawn from a comparative education assignment set for undergraduate students enrolled on the module, comprised surveys of students' perceived IT capabilities and attitudes towards IT, analyses of students' questions and analyses of students' essays. The findings of the study indicate (1) that tutees tend to pose questions to elicit information or clarification rather than to elicit the viewpoint or opinions of the “guest lecturer” and (2) that two-thirds of tutees' essays cited information elicited from the “guest lecturer”.

7.
We present a new foreign-function interface for SML/NJ. It is based on the idea of data-level interoperability—the ability of ML programs to inspect as well as manipulate C data structures directly. The core component of this work is an encoding of the almost complete C type system in ML types. The encoding makes extensive use of a “folklore” typing trick, taking advantage of ML's polymorphism, its type constructors, its abstraction mechanisms, and even functors. A small low-level component which deals with C struct and union declarations as well as program linkage is hidden from the programmer's eye by a simple program-generator tool that translates C declarations to corresponding ML glue code.

8.
We present a calculus for modelling “environment-aware” computations, that is computations that adapt their behaviour according to the capabilities of the environment. The calculus is an imperative, object-based language with extensible objects, equipped with a labelled transition semantics. A notion of bisimulation, lifting to computations a correspondence between the capabilities of different environments, is provided. Bisimulation can be used to prove that a program is “cross-environment”, i.e., it has the same behaviour when run in different environments.

9.
This paper describes an information system used to search for a potential matrimonial partner. The search is based on comparison of the subject's record, which consists of his/her answers to about 400 items of a specially designed questionnaire, with the records of the potential partners. The basic principle of the system is to present the client with the set of candidates together with psychological warnings about potential “conflict zones” in the relationship between client and candidate, rather than a ranking of candidates based on hypothetical “psychological compatibility” indices.

10.
Environmental modelling is done more and more by practising ecologists rather than computer scientists or mathematicians. This is because a broad spectrum of development tools is available that allows graphical coding of complex models of dynamic systems and helps to abstract from the mathematical issues of the modelled system and the related numerical problems of estimating solutions. In this contribution, we study how different modelling tools treat a test system, a highly non-linear predator–prey model, and how the numerical solutions vary. We show that solutions (a) differ if different development tools are chosen but the same numerical procedure is selected; (b) depend on undocumented implementation details; (c) vary even for the same tool across different versions; and (d) are generated with no notification of numerical problems even where these could be identified. We conclude that improved documentation of the numerical methods used in modelling software is essential to ensure that process-based models formulated with these modelling packages do not become “black box” models due to uncertainty in integration methods.
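As a hedged illustration of such solver sensitivity (a generic Lotka-Volterra model with arbitrary parameters, written in Python with SciPy; it is not the paper's test system or any of the tools compared there), the sketch below integrates the same equations with two solver settings and compares the end states:

```python
# Integrate a generic Lotka-Volterra predator-prey model with two different
# solver settings and compare the final states. Parameter values are arbitrary.
import numpy as np
from scipy.integrate import solve_ivp

def lotka_volterra(t, z, a=1.0, b=0.4, c=0.4, d=0.1):
    prey, pred = z
    return [a * prey - b * prey * pred, d * prey * pred - c * pred]

t_span, z0 = (0.0, 200.0), [10.0, 5.0]
sol_a = solve_ivp(lotka_volterra, t_span, z0, method="RK45", rtol=1e-3)
sol_b = solve_ivp(lotka_volterra, t_span, z0, method="LSODA", rtol=1e-8)

# Differences in the end state hint at solver/tolerance sensitivity.
print("RK45  end state:", sol_a.y[:, -1])
print("LSODA end state:", sol_b.y[:, -1])
```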

11.
Many of the problems addressed through engineering analysis include a set of regulatory (or other) probabilistic requirements that must be demonstrated with some degree of confidence through the analysis. Problems cast in this environment can pose new challenges for computational analyses in both model validation and model-based prediction. The “regulatory problems” given for the “Sandia challenge problems exercise”, while relatively simple, provide an opportunity to demonstrate methods that address these challenges. This paper describes and illustrates methods that can be useful in analysis of the regulatory problem. Specifically, we discuss:
(1) an approach for quantifying variability and uncertainty separately to assess the regulatory requirements and provide a statement of confidence; and
(2) a general validation metric to focus the validation process on a specific range of the predictive distribution (the predictions near the regulatory threshold).
These methods are illustrated using the challenge problems. Solutions are provided for both the static frame and structural dynamics problems.
Keywords: Regulatory problem; Calibration; Model validation; Model-based prediction
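As a hedged illustration of point (1) above (separating aleatory variability from epistemic uncertainty to obtain a confidence statement about a probabilistic requirement), the following generic double-loop Monte Carlo sketch uses a made-up response model, threshold, and distributions; none of these come from the challenge problems.

```python
# Generic double-loop sketch (not the authors' procedure): epistemic
# uncertainty in a model parameter is sampled in the outer loop, aleatory
# variability in the inner loop, giving a distribution over the exceedance
# probability and hence a confidence statement w.r.t. a regulatory requirement.
import numpy as np

rng = np.random.default_rng(0)
THRESHOLD = 6.0          # hypothetical regulatory limit on the response
P_REQ = 0.01             # requirement: P(response > THRESHOLD) < P_REQ

def response(x, theta):
    return theta * x**2   # stand-in model, not from the challenge problems

p_exceed = []
for _ in range(200):                       # outer loop: epistemic uncertainty
    theta = rng.normal(1.0, 0.1)           # uncertain model parameter
    x = rng.normal(1.0, 0.5, size=5000)    # inner loop: aleatory variability
    p_exceed.append(np.mean(response(x, theta) > THRESHOLD))

confidence = np.mean(np.array(p_exceed) < P_REQ)
print(f"Requirement met with ~{100 * confidence:.0f}% confidence")
```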

12.
Drawing upon nearly a decade of experience, I describe the challenges and advantages of teaching composition with the Internet at Howard University; I also explore the implications for other historically Black colleges and universities (HBCUs). First, I discuss the digital divide that has made it so difficult for many HBCU faculty members and students to access the Internet for composition courses. Next, I describe how students and I succeeded in harnessing the Internet not only to practice high-level writing skills but to “do cultural work”: to establish online “safe houses” for African American English, to collaborate with White North Americans and Black South Africans, and to publish Afrocentric material on the Web. In closing, I identify the pedagogical strategies that turned the Internet into a productive tool for the students in my writing courses.

13.
Whereas to most logicians, the word “theorem” refers to any statement which has been shown to be true, to mathematicians, the word “Theorem” is, relatively speaking, rarely applied, and denotes something far more special. In this paper, we examine some of the underlying reasons behind this difference in terminology, and we show how this discrepancy might be exploited, in order to build a computer system which automatically selects the latter type of “Theorems” from amongst the former. Indeed, we have begun building the automated discovery system MATHsAiD, the design of which is based upon our research. We provide some preliminary results produced by this system, and compare these results to Theorems appearing in various mathematics textbooks.

14.
An integrated learning object, a web-based inquiry environment “Young Scientist” for the basic school level, is introduced by applying the semiosphere conception to explain learning processes. The study focused on the development of students’ (n = 30) awareness of the affordances of learning objects (LOs) during three inquiry tasks, and on their ability to dynamically reconstruct meanings in the inquiry subtasks by exploiting these LO affordances in “Young Scientist”. The problem-solving data recorded by the inquiry system and an awareness questionnaire served as the data-collection methods. It was demonstrated that learners obtain complete awareness of the LO affordances in an integrated learning environment only after several problem-solving tasks. It was assumed that the perceived task-related properties and functions of LOs depend on students’ interrelations with LOs in specific learning contexts. Learners’ overall awareness of certain LO affordances available in the inquiry system “Young Scientist” developed with three kinds of patterns, describing the hierarchical development of the semiosphere model for learners. A better understanding of the LO affordances, characteristic of the formation of a functioning semiosphere, was significantly related to advanced knowledge construction during those inquiry subtasks that presumed translation of information from one semiotic system to another. The implications of the research are discussed in the frame of developing new contextual gateways for learning with virtual objects. It is assumed that effective LO-based learning has to be organized through pedagogically constrained gateways that manifest certain LO affordances in context, in order to build up the dynamic semiosphere model for learners.

15.
We survey recent research into new techniques for artificially facilitating pointing at targets in graphical user interfaces. While pointing in the physical world is governed by Fitts’ law and constrained by physical laws, pointing in the virtual world does not necessarily have to abide by the same constraints, opening the possibility for “beating” Fitts’ law with the aid of the computer by artificially reducing the target distance, increasing the target width, or both. The survey suggests that while the techniques developed to date are promising, particularly when applied to the selection of single isolated targets, many of them do not scale well to the common situation in graphical user interfaces where multiple targets are located in close proximity.
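For reference, Fitts’ law in its common Shannon formulation models movement time as MT = a + b·log2(D/W + 1), where D is the distance to the target and W its width; the techniques surveyed reduce the effective D, enlarge the effective W, or both. A minimal sketch (with illustrative constants a and b, not values from the survey):

```python
# Fitts' law (Shannon formulation): movement time grows with the index of
# difficulty ID = log2(D/W + 1). The constants a, b below are illustrative.
import math

def movement_time(distance, width, a=0.1, b=0.15):
    index_of_difficulty = math.log2(distance / width + 1)   # bits
    return a + b * index_of_difficulty                       # seconds

baseline = movement_time(distance=600, width=20)
# "Beating" Fitts' law artificially: halve the effective distance and double
# the effective width (e.g. by warping the cursor or expanding targets).
assisted = movement_time(distance=300, width=40)
print(f"baseline {baseline:.2f}s vs assisted {assisted:.2f}s")
```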

16.
Rush Hour is a children's game that consists of a grid board, several cars that are restricted to move either vertically or horizontally (but not both), a special target car, and a single exit on the perimeter of the grid. The goal of the game is to find a sequence of legal moves that allows the target car to exit the grid. We consider a slightly generalized version of the game that uses an n×n grid and assume that we can place the single exit and target car at any location we choose on initialization of the game.

In this work, we show that deciding if the target car can legally exit the grid is PSPACE-complete. Our constructive proof uses a lazy form of dual-rail reversible logic such that movement of “output” cars can only occur if logical combinations of “input” cars can also move. Emulating this logic only requires three types of devices (two switches and one crossover); thus, our proof technique can be easily generalized to other games and planning problems in which the same three primitive devices can be constructed.
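For readers unfamiliar with the game, a minimal sketch of the movement rule in the generalized n×n version (function and variable names are mine; this plays no role in the hardness construction):

```python
# Minimal sketch of the movement rule in generalized Rush Hour: each car
# occupies consecutive cells in a row or column and may slide one cell along
# its axis if the destination cells are inside the n x n grid and unoccupied.
def can_slide(cars, car_id, step, n):
    """cars: {id: set of (row, col)}; step: +1 or -1 along the car's axis."""
    cells = cars[car_id]
    rows = {r for r, _ in cells}
    horizontal = len(rows) == 1                     # one row -> moves along columns
    moved = {(r, c + step) if horizontal else (r + step, c) for r, c in cells}
    occupied = set().union(*(v for k, v in cars.items() if k != car_id))
    in_grid = all(0 <= r < n and 0 <= c < n for r, c in moved)
    return in_grid and moved.isdisjoint(occupied)

# Example: 6x6 grid, horizontal target car "X" on row 2, vertical blocker "A".
cars = {"X": {(2, 0), (2, 1)}, "A": {(0, 2), (1, 2), (2, 2)}}
print(can_slide(cars, "X", +1, 6))   # False: blocked by car A
print(can_slide(cars, "A", +1, 6))   # True: A can slide one cell
```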


17.
The main question we are going to analyze is: “What is the best way to put together an informative, comprehensive and fair comparative test of scanners?” A common procedure is to take a set of known samples and compare the detection rates. Unfortunately, such tests only check the reactive abilities of scanners because the usual test sets comprise only known viruses.

Because of the increased connectivity provided by the Internet, virus outbreaks can occur and spread much more quickly, and the viruses involved are always new. For scanners to provide protection they have to be able to proactively detect new viruses. Methods that can be used to measure scanners’ proactive capabilities (generics and heuristics) are presented and compared.

We use the Monte Carlo method to analyze a common test situation. Computer simulations show that to achieve higher fairness, test sets ought to be expanded as much as possible. When the test set is small, the very process of randomly selecting samples for the test set sorts scanners into “lucky” and “unlucky” categories (we call this effect the “random pick” problem). We also discuss ranking threats as a way of sorting the threats in the test set, along with the drawbacks of “Wildlist”-based ranking.

This paper is an attempt to give you an insight into the limitations of comparative tests. It gives several recommendations to testing bodies on how to improve the quality of tests and make them more suitable to the current situation. It should assist you in understanding comparative tests.

Current virus and malware threat

The current situation in computer security differs significantly from the situation of just a few years ago. A long time ago, virus propagation was mainly via the exchange of programs on floppy disks, and it was possible to release AV updates monthly. Then file exchange via LANs and servers stepped in, and many AV companies started issuing updates via their Web pages. Now we have an Internet-connected world and deep penetration of relatively insecure operating systems and applications. Updates are now predominantly delivered automatically through the Internet; to fight outbreaks it became necessary to prepare and deliver updates almost instantly and push them out very quickly. All in all, the nature of the threat and the AV scanners have changed dramatically. Are current testing methods coping with these rapid changes?

The number of viruses has increased from about 5000 in 1995 to about 30000 DOS viruses and 30000 modern threats today. Modern threats include backdoor trojans, network-aware worms, mass-mailers, DDoS (distributed denial-of-service) zombies, etc. The number of AV programs has also multiplied. In the past there were only DOS scanners; now we have many operating systems in the “Wintel” space alone, as well as AV scanners for gateways, firewalls, groupware, on-line scanners, non-“Wintel” scanners and so on. We have hundreds of OS+AV combinations. It is obvious that comparing the quality of AV software has become a very complex task indeed.

Fortunately, the detection capabilities of major AV products are usually embodied in their scanning engines (the virus database is considered part of the engine here). The engine is then plugged into many different AV products. If the maximum detection capabilities of the engines are compared, we have a reference point: no port of an engine to any product should achieve better results than the engine itself. The strength of the bare engine also indicates the strength of the research team involved. However, it must be understood that an AV product can have worse detection than the bare engine, for reasons such as improper engine porting, compatibility or interfacing issues, or environment limitations.

Therefore we need tests that compare point products plus tests that measure the maximum detection capabilities of the scanning engines. Currently, setting up such tests is not easy and requires a lot of brains, time and money. That is why decent tests are these days only run by bodies able to find the necessary resources: big universities (free manpower and state funding) or commercial certification bodies (which take fees from AV companies who, in turn, charge their users). In general, magazines cannot do decent comparative tests on their own any more. Exceptions include “Virus Bulletin” and “Secure Computing”: the former is hosted by “Sophos” (a British AV company), while the latter performs fee-based certifications. Other magazines simply do not have adequate resources, so they either republish the results from big testing bodies or their test results are simply ridiculous (say, when they test a handful of threats). It also has to be said that a lot of knowledge is required to put together a decent comparative test; specialists in this area are rare and expensive.

In the past, a scanner finding 100% of the viruses known one month ago was considered perfect, because most of the trouble came from known viruses and the speed of propagation was slow. This could easily have been tested. These days, to be perfect the scanner needs to catch the virus that will cause the next global outbreak! Can this be tested at all? To some extent, yes; but instead of testing the detection rate against known viruses, such a test should analyze scanners’ performance against unknown threats. In both cases, however, users want the answer to one and the same question: “Which AV product would protect me better?” Only the way to find the answer is different today.

In this paper, when talking about comparing scanners we intentionally step away from anything but detection and cleaning issues. Things like the GUI and functionality (quarantining, scheduling, updating, etc.) are deliberately avoided because the prime goal of all scanners is to find and remove viruses, and the ability to find and clean comes down to the scanner’s core: the scanning engine and the database of virus definitions. We also do not discuss the speed of updates in an outbreak situation, though not because it is unimportant.

Let us return here to my claim that most magazines’ own tests are primitive and cannot generally be trusted. Due to lack of resources such tests are usually done on a ridiculously small set of samples (like 10 or 20). It is intuitively clear that for a small test set the results may not represent real detection rates. Let us analyze why.

“Random pick” problem for two scanners

Let us imagine a small, simple test: we have 10 viruses in total (let us call them A…J) and two scanners. We suppose both scanners are equally good, each detecting exactly nine viruses out of ten. Furthermore, the single virus that each scanner misses is different for the two scanners (see Figure 1).
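A hedged simulation of this “random pick” effect (scanner names, the choice of missed viruses, and the test-set size are illustrative): two equally good scanners are compared on small randomly chosen test sets, and one of them frequently comes out ahead purely by chance.

```python
# Two scanners each detect 9 of the 10 viruses A..J but miss a different one.
# Randomly picking a small test set can make either scanner look "better".
import random

viruses = list("ABCDEFGHIJ")
missed = {"scanner_1": "A", "scanner_2": "J"}    # each misses one different virus

def detections(scanner, test_set):
    return sum(v != missed[scanner] for v in test_set)

random.seed(1)
outcomes = {"scanner_1 wins": 0, "scanner_2 wins": 0, "tie": 0}
for _ in range(10_000):
    test_set = random.sample(viruses, k=3)       # small randomly picked test set
    d1 = detections("scanner_1", test_set)
    d2 = detections("scanner_2", test_set)
    key = "tie" if d1 == d2 else ("scanner_1 wins" if d1 > d2 else "scanner_2 wins")
    outcomes[key] += 1

print(outcomes)   # equally good scanners, yet ties are not the only outcome
```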

18.
By collecting statistics over runtime executions of a program we can answer complex queries, such as “what is the average number of packet retransmissions” in a communication protocol, or “how often does process P1 enter the critical section while process P2 waits” in a mutual exclusion algorithm. We present an extension to linear-time temporal logic that combines the temporal specification with the collection of statistical data. By translating formulas of this language to alternating automata we obtain a simple and efficient query evaluation algorithm. We illustrate our approach with examples and experimental results.
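For intuition, the first query above can be computed by a plain pass over a recorded trace; the sketch below does exactly that with made-up events (the paper's contribution, compiling such queries into alternating automata, is not reproduced here):

```python
# Compute "average number of packet retransmissions" over an example trace.
from collections import defaultdict

trace = [("send", 1), ("retransmit", 1), ("ack", 1),
         ("send", 2), ("retransmit", 2), ("retransmit", 2), ("ack", 2),
         ("send", 3), ("ack", 3)]

retransmissions = defaultdict(int)
for event, packet_id in trace:
    if event == "retransmit":
        retransmissions[packet_id] += 1

packets = {pid for _, pid in trace}
average = sum(retransmissions.values()) / len(packets)
print(f"average retransmissions per packet: {average:.2f}")   # 1.00
```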

19.
Efficient constrained local model fitting for non-rigid face alignment
Active appearance models (AAMs) have demonstrated great utility when employed for non-rigid face alignment/tracking. The “simultaneous” algorithm for fitting an AAM achieves good non-rigid face registration performance, but has poor real-time performance (2–3 fps). The “project-out” algorithm for fitting an AAM achieves faster than real-time performance (>200 fps) but suffers from poor generic alignment performance. In this paper we introduce an extension to a discriminative method for non-rigid face registration/tracking referred to as a constrained local model (CLM). Our proposed method is able to achieve superior performance to the “simultaneous” AAM algorithm along with real-time fitting speeds (35 fps). To gain this performance we improve upon the canonical CLM formulation in a number of ways, employing: (i) linear SVMs as patch-experts, (ii) a simplified optimization criterion, and (iii) a composite rather than additive warp update step. Most notably, our simplified optimization criterion for fitting the CLM divides the problem of finding a single complex registration/warp displacement into that of finding N simple warp displacements. From these N simple warp displacements, a single complex warp displacement is estimated using a weighted least-squares constraint. Another major advantage of this simplified optimization stems from its ability to be parallelized, a step which we also explore theoretically in this paper. We refer to our approach for fitting the CLM as the “exhaustive local search” (ELS) algorithm. Experiments were conducted on the CMU MultiPIE database.
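A generic sketch of the weighted least-squares step described above (shapes, weights, and per-landmark Jacobians are illustrative, not the paper's exact CLM formulation): N local displacement estimates are combined into a single global warp update by solving the weighted normal equations.

```python
# Combine N local displacement estimates d_i with confidence weights w_i into
# one global warp update p by minimising sum_i w_i * ||J_i p - d_i||^2, where
# J_i maps warp parameters to the displacement of landmark i (values made up).
import numpy as np

rng = np.random.default_rng(0)
N, P = 68, 10                              # landmarks, warp parameters
J = rng.normal(size=(N, 2, P))             # per-landmark Jacobians (2-D points)
d = rng.normal(size=(N, 2))                # local search displacements
w = rng.uniform(0.1, 1.0, size=N)          # patch-expert confidences

A = np.zeros((P, P))
b = np.zeros(P)
for Ji, di, wi in zip(J, d, w):            # accumulate weighted normal equations
    A += wi * Ji.T @ Ji
    b += wi * Ji.T @ di

p = np.linalg.solve(A, b)                  # single global warp update
print(p.shape)                             # (10,)
```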

20.
Josef Tomiska, Calphad 2009, 33(2): 288–294
“ExTherm 2” shows clear advances over ExTHERM as presented in [J. Tomiska, CALPHAD 26 (2002) 143–154]: all three parts have been improved in power, comfort, and interactive work. In particular, the module cM3_ is now designed for interactive evaluation by means of an overall best-fit technique applicable to experimental data from calorimetric and vapor pressure measurements, as well as from measurements of the electromotive force (emf), for all types of metal alloy. The new data bank module cM1_ (ETD/ExP/PhD) is an easy-to-handle tool for interactive work in many applications in physical chemistry. The data bank ETD has been enlarged by a series of new molar mixing properties for all types of metal alloy systems, and two sub-modules have been added: the first tool, ExP, makes it possible to extrapolate binary data to a large number of ternary systems of all types of metal alloy, and the second tool, PhD, is designed for simple interactive computations on binary phase diagrams, especially for educational purposes.
