Similar Literature
20 similar documents found (search time: 15 ms).
1.
Content-based pornographic image detection, in which the region of interest (ROI) plays an important role, is an effective way to filter pornography. Traditionally, skin-color regions are extracted as the ROI. However, skin-color regions are usually larger than the subareas containing pornographic content, and this approach has difficulty distinguishing human skin from other skin-colored objects. In this paper, a novel salient-region extraction approach for pornographic image detection is presented. First, a novel saliency map model is constructed. It is then integrated with a skin-color model and a face detection model to capture the ROI in pornographic images. Next, an ROI-based codebook algorithm is proposed to enhance the representative power of visual words. Taking both speed and accuracy into account, we fuse speeded-up robust features (SURF) with color moments (CM). Experimental results show that our ROI extraction method achieves an average precision of 91.33%, higher than that of the skin-color model alone. Moreover, comparison with state-of-the-art pornographic image detection methods shows that our approach remarkably improves performance.
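Of the two fused descriptors, the color moments (CM) are the simpler to illustrate. Below is a minimal sketch of a 9-D color-moment descriptor (mean, standard deviation, and skewness per channel); the function name and the exact skewness formulation are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def color_moments(image):
    """Compute the first three color moments (mean, standard deviation,
    skewness) per channel -- a compact 9-D color descriptor."""
    pixels = image.reshape(-1, image.shape[-1]).astype(float)
    mean = pixels.mean(axis=0)
    std = pixels.std(axis=0)
    # Signed cube root of the third central moment as a skewness measure
    third = ((pixels - mean) ** 3).mean(axis=0)
    skew = np.sign(third) * np.abs(third) ** (1.0 / 3.0)
    return np.concatenate([mean, std, skew])
```

In a system like the one described, such a vector would be concatenated with SURF-based visual-word histograms computed over the extracted ROI.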

2.
Pornographic image/video recognition plays a vital role in network information surveillance and management. In this paper, its key techniques, such as skin detection, key frame extraction, and classifier design, are studied in the compressed domain. First, a compressed-domain skin detection method based on data mining is proposed, which achieves both higher detection accuracy and higher speed. Then, a cascade scheme for pornographic image recognition based on a selective decision-tree ensemble is proposed to improve both the speed and the accuracy of recognition. Finally, a compressed-domain key frame extraction solution oriented to pornographic video and an approach to pornographic video recognition are discussed.

3.
Malware propagated via the World Wide Web is one of the most dangerous tools in the realm of cyber-attacks. Its methodologies are effective, relatively easy to use, and constantly developing in unexpected ways. As a result, rapidly detecting malware-propagation websites among a myriad of webpages is a difficult task. In this paper, we present LoGos, an automated high-interaction dynamic analyzer optimized for a browser-based Windows virtual machine environment. LoGos utilizes Internet Explorer injection and API hooks, and scrutinizes malicious behaviors such as new network connections, unused open ports, registry modifications, and file creation. Based on the obtained results, LoGos can determine the maliciousness level. The model is very lightweight and is thus approximately 10 to 18 times faster than systems proposed in previous work, while providing detection rates equal to those of state-of-the-art tools. LoGos is a closed tool that can detect an extensive array of malicious webpages. We prove the efficiency and effectiveness of the tool by analyzing almost 0.36 M domains and 3.2 M webpages on a daily basis.

4.
Search engines play an irreplaceable role in organizing and accessing web information; querying a search engine is the most common way for Internet users to retrieve it. Sensitive data about a user's intentions or behavior can be inferred from the query phrases, the returned results pages, and the webpages visited subsequently. To protect the contents of communications from eavesdropping, some search engines adopt HTTPS by default to provide bidirectional encryption. However, this only encrypts the channel between the user and the search engine: most webpages indexed in the results pages are still served over HTTP, and their contents can be observed by attackers once the user clicks those links. Imitating attackers, we propose a novel approach for attacking secure search by correlating encrypted search traffic with unencrypted webpages. We show that a simple weighted TF–DF mechanism is sufficient for selecting guessing-phrase candidates. Imitating search engine users, by querying these candidates and enumerating the webpages indexed in the results pages, we can identify the exact query phrases and reconstruct the user's web-surfing trail through DNS-based URL comparison and flow-feature, statistics-based network traffic analysis. In an experiment with 28 search phrases, we achieved a 67.86% hit rate at the first guess and a 96.43% hit rate within three guesses. Our empirical research shows that HTTPS traffic can be correlated and de-anonymized through HTTP traffic, and that the secure search offered by search engines is not truly secure unless HTTPS is enabled by default everywhere.
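The weighted TF–DF candidate selection can be sketched roughly as follows. The weighting scheme (a linear combination of normalized term frequency and document frequency) and the weights are assumptions for illustration; the paper's exact mechanism may differ:

```python
from collections import Counter

def tf_df_candidates(documents, top_k=5, w_tf=0.5, w_df=0.5):
    """Rank guessing-phrase candidates by a weighted combination of
    term frequency (TF) and document frequency (DF).
    The weights w_tf/w_df are illustrative, not the paper's values."""
    tf = Counter()
    df = Counter()
    for doc in documents:
        terms = doc.split()
        tf.update(terms)          # count every occurrence
        df.update(set(terms))     # count each document at most once
    n_docs = len(documents)
    total = sum(tf.values())
    scores = {t: w_tf * tf[t] / total + w_df * df[t] / n_docs for t in tf}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```

Terms that are both frequent overall and spread across many captured pages score highest, which matches the intuition of preferring phrases an attacker is likely to hit by replaying them against the search engine.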

5.
To improve the accuracy of webpage text classification and overcome the susceptibility of traditional text classification algorithms to false and erroneous information in webpages, a webpage classification algorithm based on link information is proposed. By improving the K-nearest-neighbor method, the algorithm classifies a webpage using the link information between the page and its parent pages: the parent-link information of the page to be classified is represented as a space vector, the K pages in the training set whose link-information vectors are most similar to it are found, and the page's category is computed from them. Experimental comparison with traditional text classification algorithms verifies the effectiveness of the method.

6.
Small-screen mobile terminals have difficulty accessing existing Web resources designed for large-screen devices. This paper presents an adaptive transformation method based on webpage semantic features to solve this problem. According to their text-density and link-density features, webpages are divided into two types: index pages and content pages. Our method uses an index-based webpage transformation algorithm and a content-based webpage transformation algorithm. Experimental results demonstrate that the adaptive transformation method does not depend on specific software or webpage templates and is capable of enhancing Web content adaptation on small-screen terminals.
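The index/content split by link density can be sketched as follows; the 0.5 threshold and the function name are illustrative assumptions, not the paper's calibrated values:

```python
def page_type(text_len, link_text_len, threshold=0.5):
    """Classify a webpage as 'index' or 'content' by its link density
    (anchor-text length over total text length). Pages dominated by
    anchor text behave like navigation hubs; the rest carry content."""
    if text_len == 0:
        return "index"  # no body text at all: treat as navigation
    link_density = link_text_len / text_len
    return "index" if link_density >= threshold else "content"
```

The chosen type would then select which of the two transformation algorithms is applied.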

7.
With the development of multimedia technology and the growing popularity of the Internet, effectively controlling the spread of pornographic images online has become an increasingly urgent task. Building on a Bayesian-decision skin detection algorithm (BES), this paper proposes and describes the principle and design considerations of a pornographic-image analysis algorithm. The algorithm builds a skin-color model from lookup tables over the Y-Cb and Y-Cr subspaces and uses it to detect images, which is of significant research value for effectively identifying and filtering pornographic images and building a clean Internet.
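A Bayesian skin-probability lookup table over one 2-D subspace (e.g. Y-Cb) can be sketched as follows. The bin count, smoothing prior, and 0.5 decision threshold are illustrative assumptions; the paper's BES algorithm may differ in detail:

```python
import numpy as np

def build_skin_lut(samples, labels, bins=32):
    """Build a Bayesian skin-probability lookup table over one 2-D
    chroma subspace from labeled pixel samples.
    samples: (N, 2) int array of channel values in [0, 255];
    labels:  (N,) boolean array, True for skin pixels."""
    idx = (samples * bins // 256).astype(int)
    skin = np.zeros((bins, bins))
    total = np.zeros((bins, bins))
    for (i, j), is_skin in zip(idx, labels):
        total[i, j] += 1
        skin[i, j] += is_skin
    # P(skin | cell), with add-one smoothing to avoid division by zero
    return (skin + 1) / (total + 2)

def classify_pixel(lut, y, cb, bins=32):
    """A pixel is skin if its cell's posterior exceeds 0.5."""
    return lut[y * bins // 256, cb * bins // 256] > 0.5
```

A full detector would combine the verdicts of the Y-Cb and Y-Cr tables before any region-level analysis.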

8.
To address the problems of applying traditional association analysis to webpage text, a webpage-text association analysis method based on named entities and entity relations is proposed. The method uses named entities and entity relations as features in place of traditional high-frequency words. First, a correction strategy based on vector-similarity comparison is used to extract named entities from webpage text; then the Maxfpminer algorithm is analyzed and improved, and the improved Maxfpminer algorithm is applied to association analysis of webpage text. Experimental results show that the knowledge patterns obtained are superior to those of traditional methods in both validity and readability.

9.
王影 《现代电信科技》2010,(12):40-44,49
With the development of networks and multimedia technology, pornographic images are spreading ever more widely on the Internet. To filter sensitive images effectively, this paper proposes a method for computing the human-body area ratio based on Otsu adaptive threshold segmentation. The detection system combines a human skin-color model and a face recognition model with image-feature recognition techniques such as the area-ratio recognition algorithm to detect sensitive images on the network. Experimental results show that the method achieves high accuracy and fast real-time online detection, and has good practicality and application value.
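Otsu's adaptive threshold, the core of the area-ratio computation, can be sketched from its standard definition (choosing the threshold that maximizes between-class variance). Applying it directly to raw intensity, as below, is a simplification for illustration; the paper thresholds within a skin-color and face-model pipeline:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: pick the threshold maximizing the between-class
    variance of the grayscale histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue  # one class empty: variance undefined
        mu0 = (levels[:t] * prob[:t]).sum() / w0
        mu1 = (levels[t:] * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

def area_ratio(gray):
    """Fraction of pixels at or above the Otsu threshold -- a stand-in
    for the paper's human-body area ratio over a skin-probability map."""
    return float((gray >= otsu_threshold(gray)).mean())
```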

10.
A webpage trojan detection model based on statistics and code-feature analysis
Detecting webpage trojans with traditional signature matching is difficult. This paper therefore proposes a webpage trojan detection model based on statistics and code-feature analysis. It combines internal and external features and performs a comprehensive statistical analysis to decide whether a webpage under examination contains a trojan. Experiments show that the method can effectively detect webpage trojans, improving detection efficiency and precision, and has some ability to detect unknown and polymorphic webpage trojans.

11.
Research on monitoring technology for sensitive images on the Internet
This paper proposes a solution for monitoring and detecting sensitive images on the Internet. The scheme takes example-image matching as its basic recognition strategy and uses the K nearest neighbors of a test image as the classification basis. It innovatively organizes training examples into groups by multi-pattern features and incorporates descriptions of local visual elements into the matching. The system performs excellently on various test images, especially images of people. Experimental results show that the system effectively balances adaptability to the diversity of sensitive images with recognition efficiency, and markedly improves detection performance (especially the false-detection rate) compared with traditional strategies.

12.
To counter harmful information that evades detection by being delivered through mirror websites, an identification method for malicious mirror websites in high-speed network traffic is proposed. First, fragmented data is extracted from the traffic and the webpage source code is restored. A standardization module then improves accuracy. The source code is divided into blocks, the hash of each block is computed with the simhash algorithm to obtain the simhash value of the page source, and the similarity between page sources is measured by Hamming distance. A page snapshot is then taken, SIFT feature points are extracted, and a perceptual hash value is obtained through clustering analysis and mapping; page similarity is finally computed from the perceptual hash values. Experiments on real traffic show an accuracy of 93.42%, a recall of 90.20%, an F value of 0.92, and a processing delay of 20 μs. The method can effectively detect malicious mirror websites in a high-speed network traffic environment.
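The simhash-plus-Hamming-distance step can be sketched as follows; the 64-bit fingerprint size and the MD5 token hash are illustrative choices, not necessarily the paper's:

```python
import hashlib

def simhash(tokens, bits=64):
    """64-bit simhash of a token sequence: each token's hash votes
    +1/-1 per bit position; the fingerprint takes the sign of each sum,
    so near-identical token sets yield near-identical fingerprints."""
    votes = [0] * bits
    for tok in tokens:
        h = int(hashlib.md5(tok.encode()).hexdigest(), 16) & ((1 << bits) - 1)
        for i in range(bits):
            votes[i] += 1 if (h >> i) & 1 else -1
    fingerprint = 0
    for i in range(bits):
        if votes[i] > 0:
            fingerprint |= 1 << i
    return fingerprint

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")
```

Two page sources whose fingerprints lie within a small Hamming radius would be flagged as candidate mirrors before the SIFT/perceptual-hash confirmation step.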

13.
Extended access control lists (ACLs) are used to filter packets for network security. However, in current network frameworks, ACL rules are not transferred simultaneously with devices that move across network segments. The Internet Engineering Task Force proposed the Locator/Identifier Separation Protocol (LISP), which enables routers (xTRs) to configure ACL rules for blocking immobile endpoint identifiers (EIDs). However, when an EID moves from its original xTR to a new xTR, the ACL rules at the original xTR cannot be transferred with the EID. The new xTR thus lacks the corresponding ACL rules to effectively block the EID, resulting in security risks. The highlights of this study are as follows. First, a method is proposed for dynamically transferring ACL rules in LISP environments and frameworks. Second, the map-register and map-notify protocols are combined to encapsulate and transfer the ACL rules, obviating an additional transfer process. Third, the experimental results verify that the proposed method achieves synchronized security protection in a LISP environment involving cross-segment EID movements.

14.
To improve the accuracy of termination analysis for ECA rule sets, an extended Petri net (EPN) model that can describe ECA rule sets is established, and on this basis a termination-decision algorithm for ECA rule sets is proposed. The algorithm makes full use of the rich information about ECA rule characteristics contained in the EPN and comprehensively analyzes the influence of those characteristics on the termination of the rule set. Theoretical analysis and experimental results show that the proposed algorithm achieves higher accuracy and lower time complexity.

15.
The failure frequency of a system with s-independent components can be obtained from the system availability (or unavailability) expression and the failure and repair rates of the components. Although Grouped Variable Inversion is an efficient technique for finding system availability, there is no convenient method for converting the availability expression obtained by this technique into an expression for system-failure frequency. This paper presents generic rules for finding the system-failure frequency, particularly when the availability or unavailability expression of a system is obtained using this technique. The rules are straightforward and produce appreciably shorter expressions for system-failure frequency. Examples illustrate the simplicity and efficiency of the proposed rules.
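For intuition, the standard textbook relation behind such rules can be sketched for the simplest case, a series system of repairable components: each component's steady-state availability is A = μ/(λ + μ), and the system-failure frequency is ν = A_sys · Σλᵢ. This sketch covers only the series case, not the general Grouped-Variable-Inversion expressions the paper addresses:

```python
def component_availability(lam, mu):
    """Steady-state availability of a repairable component with
    failure rate lam and repair rate mu: A = mu / (lam + mu)."""
    return mu / (lam + mu)

def failure_frequency_series(rates):
    """Failure frequency of a series system of s-independent
    repairable components: nu = A_sys * sum(lam_i), since the system
    fails whenever any component fails while all are up.
    rates: list of (lam, mu) pairs."""
    a_sys = 1.0
    for lam, mu in rates:
        a_sys *= component_availability(lam, mu)
    return a_sys * sum(lam for lam, _ in rates)
```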

16.
仲兆满  李存华  刘宗田  管燕 《电子学报》2014,42(12):2352-2358
To address the low efficiency of multi-topic information collection, this paper investigates the differences between the search results returned for topic rules by built-in (site) search engines and by general-purpose search engines, proposes splitting topic rules into atomic rules, and analyzes three relations between atomic rules: equivalence, interchangeability, and containment. Based on these relations, different atomic-rule allocation strategies are designed for built-in search and general search, which both improves the precision of topic information collection and reduces the number of search queries. To address the low precision of results retrieved directly with atomic rules, a sentence-group-based method for filtering by topic-information relevance is proposed. With 138 topic rules (8,223 atomic rules after splitting), 14 built-in search engines, and 4 general search engines, experiments compared the total number of items collected per unit time and the number of relevant items collected. The results show that the proposed method performs well on both measures.

17.
To address the severe challenges of redundancy and conflict detection in access-control policies and the efficiency of policy evaluation in complex network environments, an attribute-based lightweight reconfigurable access-control policy is proposed. Taking attribute-based access control as an example, a policy is divided into multiple disjoint atomic access-control rules according to the operation type and the subject, object, and environment attributes in the policy. Complex access-control policies are constructed from atomic rules combined through an algebraic expression of AND and OR logical relations. Methods are proposed for detecting redundancy and conflicts among atomic access-control rules, and for decomposing a complex access-control policy into equivalent atomic rules and an algebraic expression; redundancy and conflict detection for complex policies is then performed through detection on the equivalent atomic rules and expressions. The efficiency of the equivalent transformation of access-control policies is evaluated in terms of time and space complexity. The results show that the reconstruction method greatly reduces the number, size, and complexity of access-control policies and improves the efficiency of redundancy and conflict detection as well as of access-control evaluation.
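Composing a complex policy from atomic rules via AND/OR algebra can be sketched as follows; the attribute names and the predicate representation are illustrative assumptions, not the paper's formalism:

```python
def atomic_rule(attr, value):
    """An atomic rule: a predicate on a single request attribute."""
    return lambda req: req.get(attr) == value

def AND(*rules):
    """All sub-rules must permit the request."""
    return lambda req: all(r(req) for r in rules)

def OR(*rules):
    """Any sub-rule may permit the request."""
    return lambda req: any(r(req) for r in rules)

# A complex policy as an algebraic expression over atomic rules:
# admins may read or write.
policy = AND(atomic_rule("subject_role", "admin"),
             OR(atomic_rule("operation", "read"),
                atomic_rule("operation", "write")))
```

Because each atomic rule touches one attribute, pairwise redundancy and conflict checks reduce to comparisons between small predicates rather than between whole policies.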

18.
To overcome the blindness and randomness of fuzzy-rule extraction, a T-S fuzzy modeling method based on a new adaptive fuzzy C-means clustering (AFCM) algorithm is proposed. Subtractive clustering is first used to determine an upper bound on the number of clusters and the initial cluster centers; an improved fuzzy C-means (FCM) algorithm then further optimizes the cluster centers; finally, the number of rules and the cluster centers are determined adaptively through a cluster-validity criterion. The improved FCM algorithm also overcomes...

19.
An automated methodology based on association rules is presented for the detection of ischemic beats in long-duration electrocardiographic (ECG) recordings. The proposed approach consists of three stages. 1) Preprocessing: noise is removed and all necessary ECG features are extracted. 2) Discretization: the continuous-valued features are transformed into categorical ones. 3) Classification: an association-rule extraction algorithm is utilized and a rule-based classification model is created. In the proposed methodology, ECG features extracted from the ST segment and the T-wave, as well as the patient's age, are used as inputs; the output is the classification of the beat as ischemic or not. Various algorithms were tested both for discretization and for classification using association rules. To evaluate the methodology, a cardiac-beat dataset was constructed from several recordings of the European Society of Cardiology ST-T database. The obtained sensitivity (Se) and specificity (Sp) were 87% and 93%, respectively. The proposed methodology combines high accuracy with the ability to provide interpretation for its decisions, since it is based on a set of association rules.
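The final rule-based classification stage can be sketched as a first-match rule list over discretized features; the feature names, labels, and rules below are invented for illustration and are not taken from the paper:

```python
# Hypothetical association rules over discretized ECG features:
# each rule is (antecedent over categorical features, class label).
rules = [
    ({"st_dev": "high", "t_wave": "inverted"}, "ischemic"),
    ({"st_dev": "low"}, "normal"),
]

def classify_beat(features, rules, default="normal"):
    """Apply rules in order; the first rule whose antecedent fully
    matches the beat's discretized features fires. The fired rule
    itself serves as the interpretation of the decision."""
    for antecedent, label in rules:
        if all(features.get(k) == v for k, v in antecedent.items()):
            return label
    return default
```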

20.
An azimuth-frequency domain (AFD) geometric channel model is proposed, aimed at establishing the fundamental propagation characteristics of an ultrawideband (UWB) time-invariant channel. This modeling approach envisages the spatial pattern of the scatterer distribution in a hypothetical azimuth-frequency space. One main advantage of the approach is the availability of analytical expressions relating signal properties in the AFD to channel properties. A further virtue is that it exploits the geometric distribution of scatterers for different spectral components from a physical wave-propagation viewpoint. The workhorse is the wideband semi-geometrically based statistical model together with three proposed heuristic rules, which extend the rules presented in previous work and provide the underlying connection between the canonical model and the physical channel it represents. Important channel properties such as the power azimuthal spectrum (PAS) and power delay spectrum (PDS) are calculated using this model and compared with published data in the existing literature.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号