Similar Literature (20 records found)
1.
ABSTRACT

Misuse cases are currently used to identify safety and security threats and subsequently to capture safety and security requirements. There is limited consensus on the precise meaning of the basic terminology used for use/misuse case concepts. This paper delves into the use of ontology for the formal representation of use/misuse case domain knowledge for eliciting safety and security requirements. We classify misuse cases into different categories to reflect different types of misusers, which allows participants in the requirements engineering stage to share a common understanding of the problem domain. We extend the misuse case domain to include the abusive misuse case and the vulnerable use case in order to strengthen the elicitation of safety requirements. The proposed ontological approach allows developers to share and reuse the knowledge represented in the ontology, thereby avoiding ambiguity and inconsistency in capturing safety and security requirements. The OWL Protégé 3.3.1 editor was used to encode the ontology. Use of the ontology is illustrated with examples from a health care information system.
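The paper builds its ontology in the OWL Protégé 3.3.1 editor; as a rough illustration of the same idea in code, the sketch below declares a few of the concepts the abstract mentions (misuse cases, abusive misuse cases, vulnerable use cases, misusers) with the Python owlready2 library. The IRI and all class and property names are illustrative guesses, not the paper's actual ontology.

```python
from owlready2 import get_ontology, Thing, ObjectProperty

# Hypothetical IRI; the paper's ontology is not publicly specified in the abstract.
onto = get_ontology("http://example.org/misuse-case.owl")

with onto:
    class UseCase(Thing): pass
    class MisuseCase(Thing): pass
    class AbusiveMisuseCase(MisuseCase): pass   # deliberate, security-oriented misuse
    class VulnerableUseCase(UseCase): pass      # safety-oriented weakness in a use case
    class Misuser(Thing): pass

    class threatens(ObjectProperty):            # misuse cases threaten use cases
        domain = [MisuseCase]
        range = [UseCase]

onto.save(file="misuse_case.owl", format="rdfxml")
```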

2.
Secure software development should begin at the early stages of the development life cycle. Misuse case modeling is a technique that stems from traditional use case modeling and facilitates the elicitation and modeling of functional security requirements at the requirements phase. It is an effective vehicle for identifying a large subset of security threats, so it is crucial to develop high-quality misuse case models; otherwise the end system will be vulnerable to security threats. Templates that describe misuse cases are populated with syntax-free natural-language content, and the inherent ambiguity of such language, coupled with the crucial role misuse case models play in development, can be very detrimental. This paper proposes a structure that guides misuse case authors toward consistent misuse case models, and a process that uses this structure to keep misuse case models consistent as they evolve, eliminating the damage that inconsistencies can cause. A tool was developed to provide automation support for the proposed structure and process. The feasibility and application of this approach were demonstrated using two real-world case studies.
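The abstract does not reproduce the proposed structure, but the general idea of a constrained misuse case template with an automated consistency check can be sketched as follows; the field names and the single dangling-reference check are hypothetical simplifications, not the paper's actual structure or tool.

```python
from dataclasses import dataclass, field

@dataclass
class MisuseCaseStep:
    actor: str      # e.g. "misuser" or "system"
    action: str     # verb phrase drawn from a controlled vocabulary
    target: str     # asset the action operates on

@dataclass
class MisuseCase:
    name: str
    misuser: str
    threatens: list                      # names of use cases this misuse case threatens
    steps: list = field(default_factory=list)

def check_consistency(misuse_cases, known_use_cases):
    """Flag dangling references: every threatened use case must exist."""
    problems = []
    for mc in misuse_cases:
        for uc in mc.threatens:
            if uc not in known_use_cases:
                problems.append(f"{mc.name}: threatens unknown use case '{uc}'")
    return problems
```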

3.
《Computers & Geosciences》2006,32(8):1169-1181
This work presents a methodology for the refinement of Shuttle Radar Topography Mission (SRTM-90 m) data available for South America to enable detailed watershed studies in Amazonia. The original data were pre-processed to properly map detailed low-order drainage features and to allow digital estimation of morphometric variables. Spatial-resolution refinement (3″ to 1″, or ∼90 to ∼30 m) through kriging was found to be an interesting solution for constructing digital elevation models (DEMs) that present landforms more adequately than the original data. The refinement of spatial resolution by kriging interpolation overcame the main constraints on drainage modeling with the original SRTM-90 m data, such as spatial randomness, artifacts and unrealistic presentation due to pixel size. Kriging with a Gaussian semivariogram model smoothed the resulting DEM, but the main features for drainage modeling were preserved. Canopy effects on the modeled surface remained the main limitation for terrain analysis after pre-processing. Data for a small watershed in Amazonas, Brazil (∼38 km²), were evaluated through visualization techniques, morphometric analyses and plot diagrams of the results. The data showed limitations for use in their original form, but could be applied to watershed modeling at relatively detailed scales after the described pre-processing.
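A minimal sketch of the kriging refinement step, assuming the pykrige library and synthetic elevation samples in place of the real SRTM-90 m tiles; the paper's actual pre-processing (artifact removal, drainage mapping) is not shown.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging  # pip install pykrige

# Toy stand-in for SRTM-90 m samples: x, y in metres, z = elevation.
rng = np.random.default_rng(0)
x = rng.uniform(0, 900, 200)
y = rng.uniform(0, 900, 200)
z = 100 + 0.02 * x + 5 * np.sin(y / 100) + rng.normal(0, 0.5, 200)

# Gaussian semivariogram, as in the paper; smooths the DEM but preserves
# the main drainage features.
ok = OrdinaryKriging(x, y, z, variogram_model="gaussian")

# Refine from ~90 m to ~30 m grid spacing.
gridx = np.arange(0, 900, 30.0)
gridy = np.arange(0, 900, 30.0)
z30, variance = ok.execute("grid", gridx, gridy)
```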

4.
Secure software engineering is concerned with developing software systems that will continue delivering their intended functionality despite a multitude of harmful software technologies that can attack these systems from anywhere and at any time. Misuse cases and mal-activity diagrams are two techniques for modeling functional security requirements that address security concerns early in the development life cycle. This allows system designers to equip their systems with security mechanisms built into the system design rather than relying on external defensive mechanisms. In a model-driven engineering process, misuse cases are expected to drive the construction of mal-activity diagrams. However, a systematic approach to transforming misuse cases into mal-activity diagrams is missing; the process therefore remains dependent on human skill and judgment, which raises the risk of developing mal-activity diagrams that are inconsistent with the security requirements described in the misuse cases, leading to an insecure system. This paper presents an authoring structure for misuse cases and a transformation technique to perform this desired model transformation systematically. A study was conducted to evaluate the proposed technique using 46 attack stories outlined in a book by a well-known former hacker (Mitnick and Simon in The Art of Deception: Controlling the Human Element of Security, Wiley, Indianapolis, 2002). The results indicate that applying the proposed technique produces correct mal-activity diagrams from misuse cases.
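The paper's transformation technique is not detailed in the abstract; the toy function below only illustrates the general flavor of such a mapping, turning ordered misuse case steps into swimlane nodes and control-flow edges, with misuser steps marked as mal-activities. The step content and node fields are invented for illustration.

```python
def misuse_case_to_mal_activity(steps):
    """Toy transformation: each (actor, action) step becomes a node in the
    actor's swimlane, linked in step order. Real mal-activity diagrams also
    carry decision and mitigation nodes; this shows only the core mapping."""
    nodes, edges = [], []
    for i, (actor, action) in enumerate(steps):
        nodes.append({"id": i, "lane": actor, "label": action,
                      "mal": actor == "misuser"})  # shaded node in mal-activity notation
        if i > 0:
            edges.append((i - 1, i))               # control flow follows step order
    return nodes, edges

# Hypothetical social-engineering scenario in the spirit of the attack stories.
steps = [("misuser", "impersonate help-desk caller"),
         ("system", "request caller verification"),
         ("misuser", "supply stolen employee ID")]
nodes, edges = misuse_case_to_mal_activity(steps)
```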

5.
High school students' learning outcomes were examined, comparing exploratory vs. worked simulations. The effects of added icons and of students' executive functions were also examined. In Study 1, urban high school students (N = 84) were randomly assigned to one of four versions of a web-based simulation of kinetic molecular theory that varied in instructional format (exploratory vs. worked simulation) and representation (added icons vs. no added icons). Learning was assessed at two levels: comprehension and transfer. For transfer, a main effect was found for instructional format: the exploratory condition yielded greater transfer than the worked simulation. Study 2 used the same conditions and a more complex simulation, the ideal gas law, with a similar sample of students (N = 67). For transfer, an interaction between instructional format and executive functions was found: whereas students with higher levels of executive functions transferred better in the exploratory condition, students with lower levels of executive functions transferred better with the worked simulations. Results are discussed in relation to current theories of instructional design and learning.
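The reported format-by-executive-function interaction corresponds to a standard moderation test; a sketch with statsmodels follows, assuming a hypothetical CSV with columns fmt (exploratory/worked), exec_fn (continuous executive-function score) and transfer.

```python
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("study2.csv")  # placeholder file; column names are assumptions

# The fmt:exec_fn row of the ANOVA table tests the reported interaction
# between instructional format and executive functions.
model = ols("transfer ~ C(fmt) * exec_fn", data=df).fit()
print(anova_lm(model, typ=2))
```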

6.
Context: Memory safety errors such as buffer overflow vulnerabilities are one of the most serious classes of security threats. Detecting and removing such security errors are important tasks of software testing for improving the quality and reliability of software in practice. Objective: This paper presents a goal-oriented testing approach for effectively and efficiently exploring security vulnerability errors. A goal is a potential safety violation, and the testing approach automatically generates test inputs to uncover the violation. Method: We use type-inference analysis to diagnose potential safety violations and dynamic symbolic execution to generate test inputs. A major challenge facing dynamic symbolic execution in this application is the combinatorial explosion of the path space. To address this fundamental scalability issue, we employ data dependence analysis to identify a root cause leading to the execution of the goal and propose a path exploration algorithm that guides dynamic symbolic execution toward discovering the goal. Results: To evaluate the effectiveness of the proposed approach, we conducted experiments against 23 buffer overflow vulnerabilities. We observed a significant improvement of the proposed algorithm over two widely adopted search algorithms: our algorithm discovered the security vulnerability errors within a few seconds, whereas the two baseline algorithms failed even after 30 min of testing on a number of test subjects. Conclusion: The experimental results highlight the potential of data dependence analysis for addressing the combinatorial path-space explosion faced by dynamic symbolic execution in effective security testing.
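A schematic of the guided exploration idea, assuming the data-dependence analysis is available as a distance callback: pending branches closer (in data-dependence terms) to the goal are expanded first. This is a generic best-first skeleton, not the paper's algorithm.

```python
import heapq

def guided_exploration(initial_state, is_goal, successors, dd_distance):
    """Best-first path exploration: expand the pending branch whose
    data-dependence distance to the goal (a potential safety violation)
    is smallest. dd_distance stands in for a static analysis result."""
    frontier = [(dd_distance(initial_state), 0, initial_state)]
    tick = 0  # tie-breaker so states never need to be compared directly
    while frontier:
        _, _, state = heapq.heappop(frontier)
        if is_goal(state):
            return state                   # concrete input reaching the violation
        for nxt in successors(state):      # e.g. negate one branch condition, re-solve
            tick += 1
            heapq.heappush(frontier, (dd_distance(nxt), tick, nxt))
    return None                            # path space exhausted without reaching goal
```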

7.
As part of a research project aimed at developing a thermodynamic database of the La–Sr–Co–Fe–O system for applications in Solid Oxide Fuel Cells (SOFCs), the Co–Fe–O subsystem was thermodynamically re-modeled in the present work using the CALPHAD methodology. The solid phases were described using the Compound Energy Formalism (CEF), and the ionized liquid was modeled with the ionic two-sublattice model based on CEF. A set of self-consistent thermodynamic parameters was eventually obtained. Calculated phase diagrams and thermodynamic properties are presented and compared with experimental data. The modeling covers a temperature range from 298 K to 3000 K and oxygen partial pressures from 10⁻¹⁶ to 10² bar. Good agreement with the experimental data was shown, and improvements were made compared to previous modeling results.
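For a flavor of the Compound Energy Formalism, the following sketch evaluates the molar Gibbs energy of a generic two-sublattice phase (A,B)a(C,D)b from end-member energies and ideal site-mixing entropy; the excess and magnetic contributions used in a real Co–Fe–O assessment are omitted.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def cef_gibbs(yp, ypp, G0, a, b, T):
    """Molar Gibbs energy of a two-sublattice CEF phase, excess terms omitted.
    yp, ypp: site fractions on the first and second sublattice.
    G0[i][j]: Gibbs energy of the end member with species i on sublattice 1
    and species j on sublattice 2. a, b: sublattice site ratios."""
    yp, ypp = np.asarray(yp, float), np.asarray(ypp, float)
    g_ref = sum(yp[i] * ypp[j] * G0[i][j]
                for i in range(len(yp)) for j in range(len(ypp)))
    g_ideal = R * T * (a * np.sum(yp * np.log(yp)) +
                       b * np.sum(ypp * np.log(ypp)))
    return g_ref + g_ideal
```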

8.
Context: Model-Driven Development (MDD) is an alternative approach for information systems development. Its basic underlying concept is the definition of abstract models that can be transformed to obtain models closer to implementation. One fairly widespread proposal in this sphere is Model Driven Architecture (MDA). Business process models are abstract models which additionally contain key information about the tasks being carried out to achieve the company's goals, and two notations currently exist for modelling business processes: the Unified Modelling Language (UML), through activity diagrams, and the Business Process Modelling Notation (BPMN). Objective: Our research focuses particularly on security requirements, in such a way that security is modelled along with the other aspects included in a business process. To this end, in earlier works we defined a metamodel called secure business process (SBP), which may assist the software development process as a source of highly valuable requirements (including very abstract security requirements) that are transformed into models with a lower abstraction level, such as analysis class diagrams and use case diagrams, through the approach presented in this paper. Method: We defined all the transformation rules necessary to obtain analysis class diagrams and use case diagrams from SBP, and refined them through the characteristic iterative process of the action-research method. Results: We obtained a set of rules and a checklist that make it possible to automatically obtain a set of UML analysis classes and use cases, starting from SBP models. Our approach has additionally been applied in a real environment in the area of payment for electrical energy consumption. Conclusions: The application of our proposal shows that our semi-automatic process can be used to obtain a set of useful artifacts for software development processes.

9.
10.
Whether the melting of β-boron at the boron-rich side of the B–C binary phase diagram is eutectic or peritectic has been a long-standing question. Floating-zone experiments were employed to determine the melting type on a series of C-containing feed rods prepared by powder metallurgy and sintering techniques. Melting-point data as a function of carbon content clearly yielded a peritectic reaction isotherm: L + B4+δC = (βB). The partition coefficient of carbon is ~2.6. The experimental melting-point data were used to improve the existing thermodynamic modeling of the B–C system. Relative to the thermodynamically accepted melting point of pure βB (T_M = 2075 °C), the calculated reaction isotherm lies at 2100.6 °C, with a peritectic point at 0.75 at% C and a maximum solid solubility of 1.43 at% C in (βB) at the reaction temperature. With the new melting data, the refractory Hf–B–C system has been recalculated and its liquidus surface is presented. The influence of the melting behavior of (βB) on the phase reactions in the B-rich corner of M–B–C diagrams is discussed and demonstrated for the Ti–B–C system.

11.
《Applied ergonomics》2011,42(1):138-145
Introduction: Subjective workload measures are usually administered in a visual–manual format, either electronically or by paper and pencil. However, vocal responses to spoken queries may sometimes be preferable, for example when experimental manipulations require continuous manual responding or when participants have certain sensory/motor impairments. In the present study, we evaluated the acceptability of the hands-free administration of two subjective workload questionnaires – the NASA Task Load Index (NASA-TLX) and the Multiple Resources Questionnaire (MRQ) – in a surgical training environment where manual responding is often constrained. Method: Sixty-four undergraduates performed fifteen 90-s trials of laparoscopic training tasks (five replications of 3 tasks – cannulation, ring transfer, and rope manipulation). Half of the participants provided workload ratings using a traditional paper-and-pencil version of the NASA-TLX and MRQ; the remainder used a vocal (hands-free) version of the questionnaires. A follow-up experiment extended the evaluation of the hands-free version to actual medical students in a Minimally Invasive Surgery (MIS) training facility. Results: The NASA-TLX was scored in 2 ways – (1) the traditional procedure using participant-specific weights to combine its 6 subscales, and (2) a simplified procedure – the NASA Raw Task Load Index (NASA-RTLX) – using the unweighted mean of the subscale scores. Comparison of the scores obtained from the hands-free and written administration conditions yielded coefficients of equivalence of r = 0.85 (NASA-TLX) and r = 0.81 (NASA-RTLX). Equivalence estimates for the individual subscales ranged from r = 0.78 ("mental demand") to r = 0.31 ("effort"). Both administration formats and scoring methods were equally sensitive to task and repetition effects. For the MRQ, the coefficient of equivalence for the hands-free and written versions was r = 0.96 when tested on undergraduates. However, the sensitivity of the hands-free MRQ to task demands (partial η² = 0.138) was substantially less than that for the written version (partial η² = 0.252). This potential shortcoming of the hands-free MRQ did not seem to generalize to medical students who showed robust task effects when using the hands-free MRQ (partial η² = 0.396). A detailed analysis of the MRQ subscales also revealed differences that may be attributable to a "spillover" effect in which participants' judgments about the demands of completing the questionnaires contaminated their judgments about the primary surgical training tasks. Conclusion: Vocal versions of the NASA-TLX are acceptable alternatives to standard written formats when researchers wish to obtain global workload estimates. However, care should be used when interpreting the individual subscales if the object is to make comparisons between studies or conditions that use different administration modalities. For the MRQ, the vocal version was less sensitive to experimental manipulations than its written counterpart; however, when medical students rather than undergraduates used the vocal version, the instrument's sensitivity increased well beyond that obtained with any other combination of administration modality and instrument in this study. Thus, the vocal version of the MRQ may be an acceptable workload assessment technique for selected populations, and it may even be a suitable substitute for the NASA-TLX.
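The two scoring procedures compared in the study are easy to state in code. The sketch below implements the standard definitions: the weighted NASA-TLX combines the six 0–100 subscale ratings with weights from the 15 pairwise comparisons, while the NASA-RTLX simply averages the ratings.

```python
import numpy as np

SCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def tlx_weighted(ratings, pairwise_wins):
    """Traditional NASA-TLX: ratings are 0-100 per subscale; pairwise_wins
    counts how often each subscale was chosen across the 15 pairwise
    comparisons (the six counts sum to 15)."""
    r = np.array([ratings[s] for s in SCALES], float)
    w = np.array([pairwise_wins[s] for s in SCALES], float)
    return float((r * w).sum() / 15.0)

def tlx_raw(ratings):
    """NASA-RTLX: unweighted mean of the six subscale ratings."""
    return float(np.mean([ratings[s] for s in SCALES]))
```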

12.
Background: Source code size in terms of SLOC (source lines of code) is an input to many parametric software effort estimation models; however, it is unavailable in the early phase of software development. Objective: We investigate the accuracy of early SLOC estimation approaches for an object-oriented system using information collected from its UML class diagram, which is available at the early software development phase. Method: We use different modeling techniques to build prediction models for investigating the accuracy of six types of metrics for estimating SLOC. The techniques used include linear models, non-linear models, rule/tree-based models, and instance-based models. The investigated metrics are class diagram metrics, predictive object points, the object-oriented project size metric, fast&&serious class points, objective class points, and object-oriented function points. Results: Based on 100 open-source Java systems, we find that the prediction model built using the object-oriented project size metric and ordinary least squares regression with a logarithmic transformation achieves the highest accuracy (mean MMRE = 0.19 and mean Pred(25) = 0.74). Conclusion: The object-oriented project size metric and ordinary least squares regression with a logarithmic transformation should be used to build a simple, accurate, and comprehensible SLOC estimation model.
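The winning model family, ordinary least squares on log-transformed data, together with the MMRE and Pred(25) accuracy measures reported above, can be sketched in a few lines; the variable names are illustrative.

```python
import numpy as np

def fit_log_ols(size_metric, sloc):
    """OLS on log-transformed data: ln(SLOC) = b0 + b1 * ln(metric)."""
    X = np.log(np.asarray(size_metric, float))
    y = np.log(np.asarray(sloc, float))
    b1, b0 = np.polyfit(X, y, 1)   # polyfit returns highest degree first
    return b0, b1

def evaluate(b0, b1, size_metric, sloc):
    """Accuracy measures used in the paper: MMRE and Pred(25)."""
    sloc = np.asarray(sloc, float)
    pred = np.exp(b0 + b1 * np.log(np.asarray(size_metric, float)))
    mre = np.abs(pred - sloc) / sloc
    mmre = mre.mean()                 # mean magnitude of relative error
    pred25 = (mre <= 0.25).mean()     # fraction of estimates within 25% of actual
    return mmre, pred25
```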

13.
In recent years, several design notations have been proposed to model domain-specific applications or reference architectures. In particular, Conallen has proposed the UML Web Application Extension (WAE), a UML extension to model Web applications. The aim of our empirical investigation is to test whether the Conallen notation supports comprehension and maintenance activities with significant benefits, and whether such benefits depend on developer ability and experience. This paper reports and discusses the results of a series of four experiments performed in different locations and with subjects possessing different experience (namely, undergraduate students, graduate students, and research associates) and different ability levels. The experiments compare the performance of subjects in comprehension tasks where the source code is complemented either by standard UML diagrams or by diagrams stereotyped using the Conallen notation. Results indicate that although no significant overall benefit is associated with the use of stereotyped diagrams, the availability of stereotypes reduces the gap between subjects with low skill or experience and highly skilled or experienced subjects. The results suggest that organizations employing developers with low experience can achieve a significant performance improvement by adopting stereotyped UML diagrams for Web applications.

14.
Context: Defect prediction research mostly focuses on optimizing the performance of models constructed for isolated projects (i.e. within project (WP)) through retrospective analyses. On the other hand, recent studies try to utilize data across projects (i.e. cross project (CP)) for building defect prediction models for new projects. There are no cases where the combination of within- and cross-project (i.e. mixed) data is used together. Objective: Our goal is to investigate the merits of using mixed project data for binary defect prediction. Specifically, we want to check whether it is feasible, in terms of defect detection performance, to use data from other projects (i) when there is an existing within-project history and (ii) when within-project data are limited. Method: We use data from 73 versions of 41 publicly available projects. We simulate the two cases mentioned above and compare the performance of naive Bayes classifiers trained on within-project data vs. mixed project data. Results: For the first case, we find that the performance of mixed project predictors significantly improves over full within-project predictors (p-value < 0.001), although the effect size is small (Hedges' g = 0.25). For the second case, we find that mixed project predictors are comparable to full within-project predictors while using only 10% of the available within-project data (p-value = 0.002, g = 0.17). Conclusion: We conclude that the extra effort associated with collecting data from other projects is not worthwhile in terms of practical performance improvement when there is already an established within-project defect predictor using the full project history. However, when project history is limited, e.g. in early phases of development, mixed project predictions are justifiable, as they perform as well as full within-project models.
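A minimal sketch of the WP-vs-mixed comparison using scikit-learn's Gaussian naive Bayes; the evaluation metric here (balanced accuracy) is a stand-in, since the abstract does not name the exact performance measure used.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import balanced_accuracy_score

def compare_wp_vs_mixed(X_wp, y_wp, X_cp, y_cp, X_test, y_test):
    """Train one naive Bayes on within-project (WP) data only and one on
    mixed (WP + CP) data, then score both on held-out data from the
    target project."""
    wp = GaussianNB().fit(X_wp, y_wp)
    mixed = GaussianNB().fit(np.vstack([X_wp, X_cp]),
                             np.concatenate([y_wp, y_cp]))
    return (balanced_accuracy_score(y_test, wp.predict(X_test)),
            balanced_accuracy_score(y_test, mixed.predict(X_test)))
```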

15.
《Parallel Computing》2013,39(10):615-637
A key point for the efficient use of large grid systems is the discovery of resources, a task that becomes more complicated as the size of the system grows. Large amounts of information on the available resources must be stored and kept up to date across the system so that users can query it to find resources meeting specific requirements (e.g. a given operating system or available memory). Thus, three tasks must be performed: (1) information on resources must be gathered and processed, (2) the processed information has to be disseminated over the system, and (3) upon users' requests, the system must be able to discover resources meeting some requirements using the processed information. This paper presents a new technique for the discovery of resources in grids which can be used for multi-attribute queries (e.g. {OS = Linux & memory = 4 GB}) and range queries (e.g. {50 GB < disk-space < 100 GB}). The technique relies on content summarisation to perform the first task, and addresses the main drawback of proposals from the literature that use summarisation, namely scalability, by means of Peer-to-Peer (P2P) techniques, specifically Routing Indices (RIs), for the second and third tasks. Another contribution of this work is a performance evaluation conducted by means of simulations of the EU DataGRID Testbed, which shows the usefulness of this approach compared to other proposals from the literature. More specifically, the technique presented in this paper improves scalability and yields good performance. In addition, the parameters involved in summary creation have been tuned and the most suitable values for the presented test case have been found.
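A toy version of the summary-based routing decision: each node summarises its resources per attribute (value sets for categorical attributes, (min, max) bounds for numeric ones), and a query is forwarded to a neighbour only if that neighbour's summary could satisfy it. Real Routing Indices aggregate such summaries per neighbour link; that aggregation is omitted here.

```python
def make_summary(resources):
    """Per-attribute summary of a node's resources: the set of categorical
    values seen, and (min, max) bounds for numeric attributes."""
    summary = {}
    for res in resources:
        for attr, val in res.items():
            if isinstance(val, (int, float)):
                lo, hi = summary.get(attr, (val, val))
                summary[attr] = (min(lo, val), max(hi, val))
            else:
                summary.setdefault(attr, set()).add(val)
    return summary

def may_match(summary, attr, lo=None, hi=None, equals=None):
    """Routing decision: forward the query only if the summary could
    satisfy the constraint."""
    if attr not in summary:
        return False
    if equals is not None:                 # multi-attribute equality, e.g. OS = Linux
        return equals in summary[attr]
    s_lo, s_hi = summary[attr]             # range query, e.g. 50 < disk-space < 100
    return (hi is None or s_lo <= hi) and (lo is None or s_hi >= lo)
```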

16.
It is increasingly common to use languages and notations, mainly of a graphical nature, to assist in the design and specification of learning systems. Several proposals exist, although few of them support the modeling of collaborative tasks. In this paper, we identify the main features to be considered for modeling this kind of activity and propose the use of the CIAN notation for this purpose. We also empirically analyze the quality (in particular, the understandability) of that notation. To this end, three empirical studies were conducted, drawing on several sources of information: the subjective perception of the designers, their profiles and their performance on a set of understandability exercises, as well as the physical evidence provided by an eye-tracking device. The results indicate positive perceptions of the use of the CIAN notation for modeling collaborative learning activities.

17.
Objective: The purpose of this study was to assess associations between depression and problematic internet use (PIU) among female college students, and determine whether Internet use time moderates this relationship. Method: This cross-sectional survey included 265 female college students from four U.S. universities. Students completed the Patient Health Questionnaire-9 (PHQ-9), the Problematic and Risky Internet Use Screening Scale (PRIUSS) and self-reported daily Internet use. Analyses included multivariate analysis of variance and Poisson regression. Results: Participants reported mean age of 20.2 years (SD = 1.7) and were 84.9% Caucasian. The mean PHQ-9 score was 5.4 (SD = 4.6); the mean PRIUSS score was 16.4 (SD = 11.1). Participants' risk for PIU increased by 27% with each additional 30 min spent online using a computer (RR = 1.27, 95% CI: 1.14–1.42, p < .0001). Risk for PIU was significantly increased among those who met criteria for severe depression (RR = 8.16, 95% CI: 4.27–15.6, p < .0001). The PHQ-9 items describing trouble concentrating, psychomotor dysregulation and suicidal ideation were most strongly associated with PIU risk. Conclusions: The positive relationship between depression and PIU among female college students supports screening for both conditions, particularly among students reporting particular depression symptoms.
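The reported risk ratios are consistent with a Poisson regression with robust standard errors on a binary PIU outcome; a sketch with statsmodels follows, with the file and column names purely hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical columns: piu_risk (0/1), severe_depression (0/1),
# online_halfhours (computer Internet use in 30-min units).
df = pd.read_csv("piu_survey.csv")  # placeholder file name

X = sm.add_constant(df[["severe_depression", "online_halfhours"]])
fit = sm.GLM(df["piu_risk"], X,
             family=sm.families.Poisson()).fit(cov_type="HC0")  # robust SEs

print(np.exp(fit.params))      # exponentiated coefficients = risk ratios (RR)
print(np.exp(fit.conf_int()))  # 95% confidence intervals on the RR scale
```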

18.
Seven compounds with pyridine as the backbone, modified by a carbazole moiety, a bromine atom and a fluorine atom, were synthesized. Compounds 1, 2 and 3, with bromo substitution at the 2-position and carbazole modification at the 5-position of the pyridine, emit not only a sharp blue singlet fluorescence but also a broad-band excimer-based orange emission. The two colors coming from a single molecule can be used to fabricate a simplified white-light-emitting device. The electroluminescence based on 1 and 2 exhibits white-light emission with CIE coordinates of x = 0.25 and y = 0.30 for 1 and x = 0.33 and y = 0.37 for 2 at high current densities, very close to pure white emission. In addition, the bromo substitution on the pyridine is concluded to be essential for generating the molecular interaction and thus the excimer emission.

19.
Purpose: To compare the diagnostic performance of artificial neural networks (ANNs) and multivariable logistic regression (LR) analyses for differentiating between malignant and benign lung nodules on computed tomography (CT) scans. Methods: This study evaluated 135 malignant and 65 benign nodules. For each nodule, morphologic features (size, margins, contour, internal characteristics) on CT images and the patient's age, sex and history of bloody sputum were recorded. Based on 200 bootstrap samples generated from the initial dataset, 200 pairs of ANN and LR models were built and tested. The area under the receiver operating characteristic (ROC) curve, the Hosmer–Lemeshow statistic and the overall accuracy rate were used for the performance comparison. Results: ANNs had higher discriminative performance than LR models (area under the ROC curve: 0.955 ± 0.015 (mean ± standard error) vs. 0.929 ± 0.017, p < 0.05). The overall accuracy rate for ANNs (90.0 ± 2.0%) was greater than that for LR models (86.9 ± 1.6%, p < 0.05). The Hosmer–Lemeshow statistic was 8.76 ± 6.59 for the ANNs vs. 6.62 ± 4.03 for the LR models (p > 0.05). Conclusions: When used to differentiate between malignant and benign lung nodules on CT scans based on both objective and subjective features, ANNs outperformed LR models in both discrimination and clinical usefulness, but not in calibration.
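A rough re-creation of the bootstrap comparison with scikit-learn, using an MLP as the ANN and out-of-bag cases for testing; the network architecture and the out-of-bag scoring are assumptions, as the abstract does not specify them.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score
from sklearn.utils import resample

def bootstrap_auc(X, y, n_boot=200, seed=0):
    """Fit an ANN and an LR model on each bootstrap sample and compute the
    ROC AUC of both on the out-of-bag cases."""
    rng = np.random.RandomState(seed)
    aucs, n = [], len(y)
    for _ in range(n_boot):
        idx = resample(np.arange(n), random_state=rng)   # sample with replacement
        oob = np.setdiff1d(np.arange(n), idx)            # out-of-bag test cases
        if len(np.unique(y[oob])) < 2:
            continue                                     # AUC needs both classes
        ann = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                            random_state=0).fit(X[idx], y[idx])
        lr = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
        aucs.append((roc_auc_score(y[oob], ann.predict_proba(X[oob])[:, 1]),
                     roc_auc_score(y[oob], lr.predict_proba(X[oob])[:, 1])))
    return np.array(aucs)  # column 0 = ANN AUCs, column 1 = LR AUCs
```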

20.
Purpose. To develop an automated classifier based on an adaptive neuro-fuzzy inference system (ANFIS) to differentiate between normal and glaucomatous eyes from the quantitative assessment of summary data reports of Stratus optical coherence tomography (OCT) in a Taiwan Chinese population. Methods. This observational, non-interventional, cross-sectional, case–control study included one randomly selected eye from each of the 341 study participants (135 patients with glaucoma and 206 healthy controls). Measurements of glaucoma variables (retinal nerve fiber layer thickness and optic nerve head topography) were obtained with Stratus OCT. Decision making was performed in two stages: features were first selected using an orthogonal array, and the selected variables were then fed to the ANFIS, which was trained with the back-propagation gradient descent method in combination with the least squares method. With the Stratus OCT parameters used as input, receiver operating characteristic (ROC) curves were generated by ANFIS to classify eyes as either glaucomatous or normal. Results. The mean deviation was −0.67 ± 0.62 dB in the normal group and −5.87 ± 6.48 dB in the glaucoma group (P < 0.0001). The inferior quadrant thickness was the best individual parameter for differentiating between normal and glaucomatous eyes (ROC area, 0.887). With the ANFIS technique, the ROC area increased to 0.925. Conclusions. With Stratus OCT parameters used as input, the results from ANFIS showed promise for discriminating between glaucomatous and normal eyes. ANFIS may be preferable because its output includes if–then rules and membership functions, which enhance the readability of the output.
