Similar documents
20 similar documents found (search time: 46 ms)
1.
An intermittently-used two-unit parallel system is studied using correlated alternating renewal processes (Subramanian, Sarma, and Natarajan 1983, J. Math. Phys. Sci., 17, 157). System measures such as the mean time to the first disappointment and the expected number of disappointments in the time interval (0, t) are obtained.
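As a rough illustration of the quantity being estimated, the sketch below runs a Monte Carlo simulation of a two-unit parallel system with intermittent demand and reports the mean time to the first disappointment (a demand arriving while both units are down). It assumes independent exponential up/down times and Poisson demand purely for simplicity; the paper's correlated alternating renewal processes are not reproduced, and all parameter values are invented.

```python
# Hedged illustration only: independent exponential up/down cycles and
# Poisson demand stand in for the paper's correlated renewal processes.
import random

def time_to_first_disappointment(mttf=50.0, mttr=10.0, demand_rate=0.05,
                                 horizon=1e5, rng=random):
    """Simulate one history; return the time of the first demand that finds
    both units failed (or `horizon` if none occurs)."""
    t = 0.0
    state = [True, True]                              # True = unit is up
    next_event = [rng.expovariate(1.0 / mttf) for _ in range(2)]
    next_demand = rng.expovariate(demand_rate)
    while t < horizon:
        t = min(next_event + [next_demand])
        if t == next_demand:
            if not any(state):                        # demand while both down
                return t
            next_demand = t + rng.expovariate(demand_rate)
        else:
            i = next_event.index(t)
            state[i] = not state[i]                   # toggle up <-> down
            mean = mttf if state[i] else mttr         # time to next failure/repair
            next_event[i] = t + rng.expovariate(1.0 / mean)
    return horizon

runs = [time_to_first_disappointment() for _ in range(1000)]
print("estimated mean time to first disappointment:", sum(runs) / len(runs))
```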

2.
This article examines visitor interactions with and through a physical/digital installation designed for an open-air museum that displays historic buildings and ways of life from the past. The installation was designed following the “Assembly” design scheme proposed by Fraser et al. (2003) and centred on five principles for the design of interactive experiences. We discuss how the Assembly framework was adapted and applied to our work on the installation, called Reminisce, and we then present qualitative data gathered through shadowing and naturalistic observation of small groups of visitors using Reminisce during their exploration of the museum. Through these data excerpts, we illustrate how interaction occurred among visitors and with the assembly. We reflect on the guiding principles of the adapted Assembly framework and on their usefulness for the design of place-specific interactional opportunities in heritage settings. Results from the empirical study show that the adapted Assembly principles provide HCI (human-computer interaction) researchers and designers with ways to flexibly support collocated interactions at heritage sites, across artifacts and locations, in ways that both complement and enrich the physical setting of the visit and its character.

3.
《Ergonomics》2012,55(9):1175-1180
Abstract

Recent research by Scholcover and Gillan (2018, “Using Temporal Sensitivity to Predict Performance under Latency in Teleoperation,” Human Factors, 60(1), 80–91) has shown experimentally that system transmission delay has a linear effect on the time taken to perform a complex tracking task with a simple teleoperated robot. This note shows that, for the case of moving a robot through a straight path, this relationship is predicted. The result is a simple modification of Drury's law that takes the system delay into account. This work extends Drury's model for performance under intermittent illumination to the effects of the fixed delays in task performance that occur with teleoperated robots. In all cases, there was empirical evidence for the predicted linear relationship.

Practitioner summary: When there is a delay in system response for robotic teleoperation between a control input and the system output, movement time (MT) is increased, and the increased times are linearly related to the system delay. This is true for zero-order and first-order control and for delays occurring before and after the control action. A minimal fitting sketch of this linear relationship follows.
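As a minimal illustration of the claim above, the sketch below fits a straight line to hypothetical movement-time observations taken at several delay settings; the data values are invented placeholders, not measurements from the note.

```python
# Hedged sketch: least-squares fit of MT against system delay on placeholder data.
import numpy as np

delay_s = np.array([0.0, 0.25, 0.5, 0.75, 1.0, 1.5])   # system delay (s)
mt_s    = np.array([4.1, 4.9, 5.8, 6.7, 7.5, 9.2])      # observed MT (s), invented

slope, intercept = np.polyfit(delay_s, mt_s, 1)          # MT = intercept + slope * delay
r = np.corrcoef(delay_s, mt_s)[0, 1]
print(f"MT ~ {intercept:.2f} s + {slope:.2f} s per second of delay, r = {r:.3f}")
```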

4.
5.
We have developed an advanced version of our yield estimation method (Ferencz et al. 2004, “Crop yield estimation by satellite remote sensing,” International Journal of Remote Sensing, 25, 4113–4149) that is able to provide reliable forecasts for corn and wheat several weeks before the harvest. The forecasting method is based on data from the Advanced Very High Resolution Radiometer (AVHRR) instruments of the National Oceanic and Atmospheric Administration's (NOAA) Polar Orbiting Environmental Satellites (POES). The method was applied to Hungary between 1996 and 2000. The forecast yield values are all within 5% of the actual yield data produced by classic (non-satellite-based) methods and provided by the Hungarian Statistical Office, with the exception of 1997, where the absolute error is about 8%.
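A minimal sketch of the general idea behind satellite-based yield forecasting (not the Ferencz et al. method itself): regress reported yield on a season-integrated NDVI feature and report the fit error. All numbers below are invented placeholders.

```python
# Hedged sketch: yield regressed on an integrated-NDVI feature, placeholder data.
import numpy as np

ndvi_sum = np.array([12.1, 13.4, 11.8, 14.2, 12.9, 13.7])   # season-integrated NDVI
yield_t  = np.array([ 4.0,  4.6,  3.8,  5.1,  4.3,  4.8])   # reported yield (t/ha)

b1, b0 = np.polyfit(ndvi_sum, yield_t, 1)                    # yield = b0 + b1 * NDVI
pred = b0 + b1 * ndvi_sum
rel_err = np.abs(pred - yield_t) / yield_t * 100
print(f"yield ~ {b0:.2f} + {b1:.2f}*NDVI, mean abs. error = {rel_err.mean():.1f}%")
```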

6.
Brad Nicholson 《Ergonomics》2014,57(9):1353-1365
Situational awareness is recognised as an important factor in the performance of individuals and teams in dynamic decision-making (DDM) environments (Salmon et al. 2014). The present study was designed to investigate whether scores on the WOMBAT Situational Awareness and Stress Tolerance Test (Roscoe and North 1980, “Prediction of Pilot Performance,” in Aviation Psychology, 123–127, Ames: The Iowa State University Press) would predict the transfer of DDM performance from training under different levels of cognitive load to a novel situation. Participants practised a simulated firefighting task under either low or high cognitive load and then performed a (transfer) test in an alternative firefighting environment under an intermediate level of cognitive load. WOMBAT test scores were a better predictor of DDM performance than scores on the Raven's Matrices. Participants with high WOMBAT scores performed better regardless of their training condition. Participants with recent gaming experience who practised under low cognitive load showed better practice-phase performance but worse transfer performance than those who practised under high cognitive load.

Practitioner Summary: The relationship between task experience, situational awareness ability, cognitive load and the transfer of dynamic decision-making (DDM) performance was investigated. Results showed that the WOMBAT test predicted transfer of DDM performance regardless of task cognitive load. The effects of cognitive load on performance varied according to previous task-relevant experience.

7.
The Indian Remote Sensing Satellite (IRS-P4) multi-frequency scanning microwave radiometer (MSMR) provides geophysical parameters such as sea surface temperature (SST), sea surface wind speed (SSWS), integrated water vapour (IWV) and cloud liquid water (CLW). The retrieval procedure for these parameters, given by Gohil et al. (2000, “Geophysical parameter retrieval over global oceans from IRS-P4 (MSMR),” in Preprints, Fifth Pacific Ocean Remote Sensing Conference, 5–8 December 2000, Goa, India, pp. 207–211), was summarized by Sharma et al. (2002, “Identification of large scale atmospheric and oceanic features from IRS-P4 multifrequency scanning microwave radiometer: preliminary results,” Journal of Atmospheric and Oceanic Technology, 19, 1127–1134) and Jena (2007, “Studies on the retrieval, validation and applications of geophysical parameters from IRS-P4 (MSMR) data,” PhD thesis, Berhampur University, Orissa). Demonstrating the self-consistency of these parameters is of primary scientific importance. This article deals with the validation of MSMR geophysical parameters, namely SST and SSWS, against in situ (buoy) observations over the north Indian Ocean during 2000. Results show that the MSMR-derived SST and SSWS can be utilized for several applications because of their reasonable accuracy and coverage, even under cloudy conditions.
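A minimal sketch of the kind of match-up validation described above: comparing satellite-derived SST against collocated buoy SST using bias, root-mean-square error and correlation. The arrays are illustrative placeholders, not the actual MSMR/buoy match-up data.

```python
# Hedged sketch: standard validation statistics on placeholder match-up pairs.
import numpy as np

buoy_sst = np.array([28.1, 28.4, 27.9, 29.0, 28.6, 27.5])   # in situ SST (deg C)
msmr_sst = np.array([28.4, 28.1, 28.3, 29.3, 28.2, 27.9])   # satellite SST (deg C)

diff = msmr_sst - buoy_sst
bias = diff.mean()
rmse = np.sqrt((diff ** 2).mean())
corr = np.corrcoef(buoy_sst, msmr_sst)[0, 1]
print(f"bias = {bias:+.2f} C, RMSE = {rmse:.2f} C, r = {corr:.2f}")
```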

8.
ABSTRACT

Numeration has been the subject of substantial research, notably in Quebec, during the 1980s to 2000s. These studies revealed its complexity as well as the difficulties raised by both its teaching and its learning. The present study extends a line of didactic research on these difficulties. More specifically, it investigates the knowledge of place-value numeration held by Quebec third-grade students and compares it with the findings of a landmark study conducted by Bednarz and Dufour-Janvier, whose results were published in several papers (1982, 1984a, 1984b, 1988). Our results show that, although 30 years have passed since Bednarz and Dufour-Janvier's research, little change is observed in students' mathematical behaviour in numeration. In closing, a few hypotheses, relating both to teaching constraints and to the specific difficulties of appropriating numeration, are put forward to explain these results.

9.
《Ergonomics》2012,55(6):519-530
‘Shrinking targets’ are targets whose size diminishes with time. The task studied is a modification of Fitts' (1954) paradigm, with the difference that, as soon as the movement is started, the target size reduces at a constant rate until the target finally vanishes. Very little research has been reported on this problem apart from Johnson and Hart (1987) and Hancock and Caird (1993). Two experiments are reported, aimed at determining the parameters that affect the movement time and the probability of capturing a target for different movement amplitudes, target widths and shrink rates. A multiplicative model is required to describe the movement-time data; it depends on Fitts' Index of Difficulty, the shrink rate and the product of these two variables. An alternative model describes the critical movement time, for a specified probability of target capture, in a modified form of Fitts' Law.

Statement of Relevance: Modifications of Fitts' Law have been developed for many different movement tasks. Shrinking targets occur in circumstances such as gunnery and computer games, where a target is moving away from the person. An expression is developed for the critical time to capture the target in terms of a modified form of Fitts' Law. A minimal fitting sketch of the multiplicative movement-time model follows.
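The sketch below fits the kind of multiplicative movement-time model described in this entry, with terms for Fitts' Index of Difficulty (ID), the shrink rate (S) and their product, using ordinary least squares on invented placeholder data.

```python
# Hedged sketch: MT modelled on ID, shrink rate S and their interaction.
import numpy as np

ID = np.array([2.0, 3.0, 4.0, 2.0, 3.0, 4.0, 2.0, 3.0, 4.0])            # bits
S  = np.array([0.0, 0.0, 0.0, 0.5, 0.5, 0.5, 1.0, 1.0, 1.0])            # shrink rate
MT = np.array([0.45, 0.60, 0.78, 0.50, 0.70, 0.95, 0.55, 0.82, 1.15])   # seconds, invented

X = np.column_stack([np.ones_like(ID), ID, S, ID * S])   # design matrix with interaction
b, *_ = np.linalg.lstsq(X, MT, rcond=None)
print("MT ~ %.3f + %.3f*ID + %.3f*S + %.3f*ID*S" % tuple(b))
```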

10.
Inter-annual variation of the monthly river water level (RWL) at the Ladário Hydrological Station was investigated in order to study flood conditions in the Upper Paraguay River Basin. The correlations of RWL with precipitation (PCP) and with the Normalized Difference Vegetation Index (NDVI) were analysed. A combined use of PCP and NDVI data is proposed to predict RWL at the Ladário Hydrological Station, which monitors the water collected from the upper part of the Upper Paraguay River Basin watershed. The inundation-area estimation as a function of RWL proposed by Hamilton et al. (1996, “Inundation patterns in the Pantanal wetland of South America determined from passive microwave remote sensing”) was applied to predict the Pantanal inundation area using both recorded and predicted RWL data for 1981–2000.

Our technique demonstrated that, by applying the RWL prediction model and the inundation-area estimation model proposed by Hamilton et al. (1996), the extent of the Pantanal inundation area could be predicted one month in advance with reasonable success. Therefore, the statistical approach presented here may provide a useful tool for predicting RWL and hence for preventing flood damage during high-RWL periods, as well as for controlling river transportation traffic in order to prevent riverbank erosion during low-RWL periods. For future studies, an adequate hydrological simulation model based on a high-accuracy digital elevation model and a rainfall forecasting system, such as a radar system, are needed to fulfil real-time flood prediction and mitigation tasks. A minimal sketch of the two-step prediction idea follows.
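The sketch below illustrates the two-step approach under stated assumptions: a multiple linear regression predicting next month's RWL from PCP and NDVI, followed by a placeholder stage-area function standing in for the Hamilton et al. (1996) relationship, which is not reproduced here. All data and coefficients are invented.

```python
# Hedged sketch: RWL regression on PCP and NDVI, then a placeholder stage-area curve.
import numpy as np

pcp      = np.array([120.0, 180.0, 90.0, 200.0, 150.0, 60.0])   # monthly PCP (mm)
ndvi     = np.array([0.45, 0.52, 0.40, 0.55, 0.50, 0.38])        # basin-mean NDVI
rwl_next = np.array([2.1, 3.0, 1.6, 3.4, 2.6, 1.2])              # next-month RWL (m)

X = np.column_stack([np.ones_like(pcp), pcp, ndvi])
b, *_ = np.linalg.lstsq(X, rwl_next, rcond=None)
rwl_pred = X @ b

def inundation_area_km2(rwl_m, a=5000.0, c=9000.0):
    """Placeholder stage-area curve: inundated area grows with water level."""
    return a + c * max(rwl_m, 0.0)

for r in rwl_pred:
    print(f"predicted RWL {r:.2f} m -> area ~ {inundation_area_km2(r):,.0f} km2")
```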

11.
This paper extends the results of Zhang et al. (1997, “Performance analysis of discrete periodically time varying controllers,” Automatica, 33, 619–634) for linear periodically time-varying control to general linear time-varying (LTV) control. It is shown that linear time-invariant (LTI) control provides strictly better control performance than linear strictly time-varying control when l2 disturbance rejection of LTI plants is considered. The analysis is carried out in the frequency domain. This approach provides not only new results on disturbance rejection of LTV control but also some new insight into the properties of general LTV systems.

12.
In industrial automation, the management of abnormal situations is becoming more important every day. The ability to detect, isolate, and handle abnormal situations in industrial installations could save the large amounts of money that are normally spent on repairs and/or wasted because of unjustified stoppages of processing plants. In this work, an agent-based Abnormal Situations Management System (ASMS) for an artificially gas-lifted well is developed. It is part of the multi-agent-based industrial automation architecture (SADIA) proposed in Bravo, Aguilar, and Rivas (2004). The agent is based on the intelligent distributed control system based on agents (IDCSBA) reference model proposed in Aguilar, Cerrada, Mousalli, Rivas, and Hidrobo (2005). The MASINA methodology (Aguilar, Hidrobo, and Cerrada 2007) is used for analysis, design, and implementation.

13.
Abstract

GOST-R 34.11-94 is a Russian standard cryptographic hash function that was introduced in 1994 by the Russian Federal Agency for the purposes of information processing, information security, and digital signatures. Mendel et al. (2008, “Cryptanalysis of the GOST hash function,” Advances in Cryptology – CRYPTO 2008, vol. 5157, 162–178) and Courtois and Mourouzis (2011, “Black-box collision attacks on the compression function of the GOST hash function,” Proceedings of the International Conference on Security and Cryptography (SECRYPT), 325–332) found attacks on the compression function of the GOST-R structure that were essentially weaknesses of the underlying GOST block cipher (GOST 28147-89). Hence, in 2012 it was updated to GOST-R 34.11-2012, which replaced the older standard in all its applications from January 2013. GOST-R 34.11-2012 is based on a modified Merkle-Damgård construction. Here we present a modified version of GOST-R 34.11-2012, the Modified GOST-R (MGR) hash. The design of the MGR hash is based on a wide-pipe construction, which is also a modified Merkle-Damgård construction. MGR is much more secure, as well as three times faster, than GOST-R 34.11-2012. Advanced Encryption Standard (AES)-like block ciphers have been used in designing the compression function of MGR because AES is one of the most efficient and secure block ciphers and has been evaluated for more than 14 years. A detailed statistical analysis, together with a few other attacks on MGR, is included in this paper.
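A minimal sketch of a wide-pipe Merkle-Damgård construction, the design pattern the MGR hash is described as following: the internal chaining state (512 bits here) is wider than the final output (256 bits), and the last step truncates. The compression function below is a SHA-256-based stand-in for illustration only; MGR's AES-like compression function is not reproduced.

```python
# Hedged sketch of a wide-pipe Merkle-Damgard hash; compression is a placeholder.
import hashlib

BLOCK = 64    # message block size (bytes)
STATE = 64    # internal wide state (bytes) = 512 bits
OUT   = 32    # final output (bytes) = 256 bits

def compress(state: bytes, block: bytes) -> bytes:
    """Placeholder compression: (512-bit state, 512-bit block) -> 512-bit state."""
    h1 = hashlib.sha256(b"\x00" + state + block).digest()
    h2 = hashlib.sha256(b"\x01" + state + block).digest()
    return h1 + h2

def widepipe_hash(msg: bytes) -> bytes:
    # Merkle-Damgard strengthening: pad with 0x80, zeros, then the 64-bit length.
    length = (8 * len(msg)).to_bytes(8, "big")
    padded = msg + b"\x80"
    padded += b"\x00" * ((-len(padded) - 8) % BLOCK) + length
    state = b"\x00" * STATE                     # fixed IV for the sketch
    for i in range(0, len(padded), BLOCK):
        state = compress(state, padded[i:i + BLOCK])
    return state[:OUT]                          # truncate wide state to output size

print(widepipe_hash(b"The quick brown fox").hex())
```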

14.
The Rayleigh scattering radiance at the top of the atmosphere (TOA) depends on the surface atmospheric pressure. In processing Coastal Zone Color Scanner (CZCS) imagery, Gordon et al. (1988, “Exact Rayleigh scattering calculations for use with the Nimbus-7 Coastal Zone Color Scanner,” Applied Optics, 27, 862–871) developed a simple formula to account for the change in Rayleigh radiance with variation in surface atmospheric pressure. For atmospheric pressure changes within ±3%, the accuracy of the Gordon et al. (1988) formula in computing the Rayleigh radiance is usually within 0.4%, 0.3%, 0.15% and 0.05% for the wavelengths 412, 443, 555 and 865 nm, respectively. This can result in up to ~3% uncertainty in the derived water-leaving radiance at the blue wavelengths for very clear atmospheres. To improve performance, a refinement of the Gordon et al. (1988) formula is developed based on radiative transfer simulations. The refined scheme can produce the Rayleigh radiance with an uncertainty within 0.1% (often within 0.05%) in the blue, while the uncertainty is within 0.05% for the green to near-infrared wavelengths. The refined algorithm has been implemented in the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) data processing system. Results from the SeaWiFS data show improved ocean colour products in the southern oceans, where consistently low atmospheric pressures are usually observed. This could also significantly improve the performance of the Rayleigh radiance computations over high-altitude lakes. In addition, with the refined algorithm, the same Rayleigh radiance tables can be used for various ocean colour satellite sensors that have slightly different spectral band characterizations.
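A minimal sketch of the physical dependence exploited by both the Gordon et al. (1988) formula and its refinement: the Rayleigh optical thickness, and to first order the Rayleigh radiance, scales linearly with surface pressure. The optical-thickness approximation used below is a standard Hansen-and-Travis-style fit quoted from memory and is an assumption for illustration; the article's refined correction comes from full radiative transfer simulations and is not reproduced.

```python
# Hedged sketch: Rayleigh optical thickness at standard pressure and its
# linear scaling with the actual surface pressure.
def rayleigh_tau(wavelength_um: float) -> float:
    """Approximate Rayleigh optical thickness at standard pressure (assumed fit)."""
    w = wavelength_um
    return 0.008569 * w**-4 * (1.0 + 0.0113 * w**-2 + 0.00013 * w**-4)

def tau_at_pressure(wavelength_um: float, pressure_hpa: float,
                    p0_hpa: float = 1013.25) -> float:
    """Scale the standard-pressure optical thickness to the actual pressure."""
    return rayleigh_tau(wavelength_um) * pressure_hpa / p0_hpa

for band_nm in (412, 443, 555, 865):
    t0 = rayleigh_tau(band_nm / 1000.0)
    t_low = tau_at_pressure(band_nm / 1000.0, 980.0)   # low-pressure example
    print(f"{band_nm} nm: tau(P0) = {t0:.4f}, tau(980 hPa) = {t_low:.4f} "
          f"({100 * (t_low / t0 - 1):+.1f}%)")
```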

15.
This article uses principal component analysis (PCA) to determine the spatial pattern of total electron content (TEC) anomalies in the ionosphere associated with China's Wenchuan Earthquake of 12 May 2008 UTC (Mw = 7.9). PCA is applied to global ionospheric maps (GIMs), with transforms conducted for the period from 08:00 to 10:00 UT on 9 May 2008. The GIMs are subdivided into 100 smaller maps (36° longitude by 18° latitude). The smaller maps (71 × 71 pixels) form transform matrices of corresponding dimensions (2 × 1) through image processing. The transform allows extreme principal eigenvalues to be assigned to the seismo-ionospheric signature described by Zhao et al. (2008, “Is an unusual large enhancement of ionospheric electron density linked with the 2008 great Wenchuan earthquake?,” Journal of Geophysical Research, 113, A11304, doi:10.1029/2008JA013613). Results show that the range of the TEC anomaly declines with height; however, the anomaly becomes more localized and intense at higher altitudes (250–300 km), giving the anomaly the spatial pattern of a downward-facing trumpet.
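A generic sketch of PCA applied to sub-maps of a TEC field: each sub-map is flattened into a row of a data matrix, the matrix is centred, and the scores on the leading principal component indicate where the variance (a candidate anomaly) concentrates. The data are random placeholders with one injected anomaly; this is not the article's exact 2 × 1 transform.

```python
# Hedged sketch: SVD-based PCA over flattened TEC sub-maps with fake data.
import numpy as np

rng = np.random.default_rng(0)
n_submaps, h, w = 100, 71, 71
submaps = rng.normal(10.0, 1.0, size=(n_submaps, h, w))   # placeholder TEC values
submaps[42] += 5.0                                          # injected anomaly

X = submaps.reshape(n_submaps, -1)          # one flattened sub-map per row
Xc = X - X.mean(axis=0)                     # centre each pixel column
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores_pc1 = U[:, 0] * S[0]                 # each sub-map's score on PC1

print("sub-map with the most extreme PC1 score:", int(np.abs(scores_pc1).argmax()))
```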

16.
ABSTRACT

Image-hiding methods embed a secret image into a host image. The resulting stego-image should not attract interceptors, who would not detect the differences between the host image and the stego-image. To exploit the great developments in image compression and to improve the quality of the stego-image, this paper proposes a new method to embed the secret image into the host image. Basically, the secret image is compressed and then embedded into the host image. The embedding method is based on the Optimal Pixel Adjustment Process (OPAP) and a genetic algorithm. The paper addresses the important issues in building such systems. The experimental results showed that the proposed method can improve the quality by 60% to 80% compared with simple Least Significant Bit (LSB) replacement methods. In addition, the mean square error of the stego-image is much lower than with other methods (Chan & Cheng, 2004; Chang, Hsiao, & Chan, 2003; Thien & Lin, 2003; Tseng, Chan, Ho, & Chu, 2008; Wang, Lin, & Lin, 2001). The proposed technique also improves capacity: a secret image of size 450 × 450 can be embedded inside a host image of size 512 × 512.
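A minimal sketch of two of the building blocks named above, k-bit LSB substitution followed by the Optimal Pixel Adjustment Process (OPAP), which shifts the higher bits by one unit of 2^k when that reduces the embedding error without altering the embedded bits. The compression and genetic-algorithm stages of the proposed method are not reproduced.

```python
# Hedged sketch: k-bit LSB embedding into an 8-bit pixel, then OPAP adjustment.
def embed_opap(pixel: int, secret_bits: int, k: int = 2) -> int:
    """Embed `secret_bits` (0 .. 2^k - 1) into the k LSBs of an 8-bit pixel."""
    stego = (pixel & ~((1 << k) - 1)) | secret_bits    # plain k-bit LSB substitution
    d = stego - pixel
    # OPAP: move the higher bits by 2^k if that reduces the absolute error
    if d > (1 << (k - 1)) and stego - (1 << k) >= 0:
        stego -= 1 << k
    elif d < -(1 << (k - 1)) and stego + (1 << k) <= 255:
        stego += 1 << k
    return stego

for pixel in (100, 101, 102, 103):
    s = embed_opap(pixel, secret_bits=0b11, k=2)
    print(f"cover {pixel:3d} -> stego {s:3d} (LSBs = {s & 0b11:02b}, error = {s - pixel:+d})")
```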

17.
18.
Leap et al. (2016) reduced the time complexity of the Bauer-Millward (2007) ciphertext-only attack on the Hill cipher from 𝒪(L^n) to 𝒪(L^(n−1)), where L is the length of the alphabet and n is the block size. This article presents an attack that reduces the complexity to 𝒪(L^(n−1−s)), 0 ≤ s ≤ n − 1. The practical limitation on the size of s is the memory available on the computer being used for the attack. Specifically, the computer must be able to hold L^s integer arrays of length N, where N is the number of blocks of ciphertext. The key idea is not to iterate over potential rows of the decryption matrix, but to iterate over randomly chosen characters in the plaintext. This attack also admits a straightforward parallel implementation on multiple processors to further decrease the run time of the attack.
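For context, the sketch below illustrates the row-by-row idea behind the baseline Bauer-Millward attack that this line of work improves upon: each candidate row of the decryption matrix maps every ciphertext block to a single plaintext letter, so rows can be scored independently against English letter frequencies. The toy "ciphertext" is a placeholder number stream rather than a genuine Hill encryption, and the 𝒪(L^(n−1−s)) improvement described in the article is not implemented.

```python
# Hedged sketch: score every candidate decryption-matrix row (n = 2, L = 26)
# by chi-squared distance of its output letter stream to English frequencies.
from itertools import product

L, n = 26, 2
ENG_FREQ = dict(zip("abcdefghijklmnopqrstuvwxyz",
    [8.2, 1.5, 2.8, 4.3, 12.7, 2.2, 2.0, 6.1, 7.0, 0.15, 0.77, 4.0, 2.4,
     6.7, 7.5, 1.9, 0.095, 6.0, 6.3, 9.1, 2.8, 0.98, 2.4, 0.15, 2.0, 0.074]))

def chi2_score(counts, total):
    """Chi-squared distance between observed counts and English letter frequencies."""
    return sum((counts.get(c, 0) - total * f / 100) ** 2 / (total * f / 100)
               for c, f in ENG_FREQ.items())

def best_rows(cipher_numbers, keep=5):
    """Score every candidate row of the decryption matrix independently."""
    blocks = [cipher_numbers[i:i + n] for i in range(0, len(cipher_numbers), n)]
    scored = []
    for row in product(range(L), repeat=n):
        letters = [chr(sum(r * c for r, c in zip(row, b)) % L + ord("a"))
                   for b in blocks]
        counts = {}
        for ch in letters:
            counts[ch] = counts.get(ch, 0) + 1
        scored.append((chi2_score(counts, len(letters)), row))
    return sorted(scored)[:keep]

# placeholder number stream standing in for Hill-encrypted ciphertext
ciphertext = [7, 4, 11, 11, 14, 22, 14, 17, 11, 3, 18, 19, 4, 22, 8, 13]
for score, row in best_rows(ciphertext):
    print(f"candidate row {row}: chi^2 = {score:.1f}")
```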

19.
Data classification tasks often concern objects described by tens or even hundreds of features. Classification of such high-dimensional data is a difficult computational problem. Feature selection techniques help reduce the number of computations and improve classification accuracy.

In Michalak and Kwasnicka (2006a, “Correlation-based feature selection strategy in classification problems,” Applied Mathematics and Computer Science, 16(4), 503–511; 2006b, “Correlation-based feature selection strategy in neural classification,” Proceedings of the Sixth International Conference on Intelligent Systems Design and Applications (ISDA'06), 741–746, IEEE Computer Society) we proposed a feature selection strategy that selects features individually or pairwise based on the assessed level of dependence between features. For numerical features, this level of dependence can be expressed numerically using linear correlation coefficients. In this paper, the feature selection problem is addressed for a mixture of nominal and numerical features. The feature similarity measure used in this case is based on the probabilistic dependence between features. This similarity function is used in an iterative feature selection procedure that we propose for selecting features prior to classification. Experiments show that using the probabilistic-dependence similarity function along with the presented feature selection procedure can improve computation speed while preserving classification accuracy in the case of mixed nominal and numerical features.
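A minimal sketch in the spirit of the correlation-based strategy for purely numerical features: compute pairwise absolute Pearson correlations and greedily drop one feature from each highly correlated pair. The probabilistic-dependence measure proposed in the paper for mixed nominal/numerical features is not reproduced; the data and the threshold are placeholders.

```python
# Hedged sketch: greedy correlation-based feature selection on placeholder data.
import numpy as np

def select_features(X: np.ndarray, threshold: float = 0.9) -> list:
    """Return indices of features kept after dropping near-duplicate features."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    kept = []
    for j in range(X.shape[1]):
        if all(corr[j, k] < threshold for k in kept):
            kept.append(j)
    return kept

rng = np.random.default_rng(1)
base = rng.normal(size=(200, 3))
X = np.column_stack([base[:, 0],
                     base[:, 0] * 2 + rng.normal(scale=0.01, size=200),  # near copy of feature 0
                     base[:, 1],
                     base[:, 2]])
print("kept feature indices:", select_features(X))   # expected: [0, 2, 3]
```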

20.
The concept of a context honeypot for privacy violation, based on relational databases, was introduced by S.K. Gupta, Damor, Goyal, A. Gupta, and Sabharwal (2008, “Context honeypot: A framework for anticipatory privacy violation,” Proceedings of the 1st ICETET, 813–818, doi:10.1109/ICETET.2008.26). Its aim is to confirm or reject the suspicion cast on a user through external stimuli. Its various characteristics, such as luring, opaqueness and confirmation of suspicion, have not yet been explored. Here, we focus on one of its important characteristics, opaqueness; that is, the honeypot should remain invisible to attackers. This paper discusses ways to quantify the effectiveness of a context honeypot system in upholding its opaqueness property against a suspected attacker. We conducted an experiment by building a context honeypot system with known suspected attackers and then quantified its effectiveness through the proposed methods. The results obtained validate the proposed methods as an effective tool for quantifying the effectiveness of the context honeypot in maintaining its opaqueness property.

