1.
A Hidden-Process Detection Method Based on Executable Path Analysis (cited 1 time: 0 self-citations, 1 by others)
Han Fang, 《计算机与数字工程》 (Computer & Digital Engineering) 2009, 37(1):115-117
This paper studies the principles of kernel-mode process hiding and techniques for detecting hidden processes. On this basis, it proposes a hidden-process detection technique for the Windows kernel based on Executable Path Analysis (EPA). By counting the instructions executed by certain critical system functions, EPA determines whether those functions have run extra code, and hence whether the system has been modified by a Windows rootkit. This method can detect process hiding by malicious programs that current mainstream security tools fail to find.
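The EPA idea described above can be sketched as a toy model, with all names and data invented here: a critical routine is represented as a list of instructions, a rootkit detour prepends its own code, and the detector flags the routine when the observed instruction count exceeds the recorded clean baseline. A real EPA tool single-steps native code (e.g. via the x86 trap flag) rather than counting list entries.

```python
# Hypothetical "system routine" modeled as a list of instructions.
CLEAN_ROUTINE = ["push ebp", "mov ebp,esp", "call worker", "ret"]

def install_hook(routine, hook_code):
    # A detour runs the rootkit's filter code before the original body.
    return hook_code + routine

def epa_is_hooked(observed_count, baseline_count):
    """EPA verdict: executing more instructions than the clean baseline."""
    return observed_count > baseline_count

# Baseline is recorded on a known-clean system; the rootkit hooks later.
baseline = len(CLEAN_ROUTINE)
hooked_routine = install_hook(CLEAN_ROUTINE, ["pushad", "call filter", "popad"])
```

The key design point is that the baseline must come from a trusted measurement; EPA then needs no knowledge of the specific rootkit, only of how long the clean path is.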
2.
This paper examines existing Windows rootkit process-hiding techniques and proposes a cross-view detection technique: process lists obtained from a high-level view and a low-level view of the operating system are compared to reveal processes hidden by a rootkit, where the low-level list is built by searching memory for kernel objects. Experiments show the technique detects hidden processes effectively.
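The cross-view comparison reduces to a set difference; a minimal sketch with made-up sample PIDs (a real tool would obtain the high-level view from user-mode APIs and the low-level view from a kernel-object scan):

```python
def find_hidden(high_level_pids, low_level_pids):
    """PIDs visible to the kernel-object scan but absent from the API view."""
    return sorted(set(low_level_pids) - set(high_level_pids))

api_view  = [4, 128, 452, 900]        # what Task-Manager-style APIs report
scan_view = [4, 128, 452, 666, 900]   # what the low-level memory scan recovers
```

Any PID in `scan_view` but not `api_view` (here 666) is reported as hidden; the reverse difference would indicate a stale or transient process rather than hiding.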
4.
Hidden-Process Detection Based on Memory Scanning (cited 1 time: 1 self-citation, 1 by others)
Since malicious code commonly uses rootkit techniques to hide its own process, this paper proposes a hidden-process detection method based on memory scanning. The method scans high kernel virtual memory, identifies the types of Windows kernel objects found there, and derives a trustworthy list of system processes, thereby detecting hidden processes. The method can also scan for other types of Windows kernel objects, giving it a degree of extensibility.
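A minimal sketch of signature-based memory scanning, with an invented object layout: the scanner walks a byte buffer looking for a hypothetical process-object tag and reads a fixed-offset name field. A real scanner matches EPROCESS pool tags and validates many more fields before accepting a hit.

```python
TAG = b"Proc"               # hypothetical tag marking a process object
NAME_OFF, NAME_LEN = 4, 8   # assumed offsets within the fake object

def scan_for_processes(mem):
    """Return the names of all process-like objects found in `mem`."""
    names = []
    i = mem.find(TAG)
    while i != -1:
        raw = mem[i + NAME_OFF : i + NAME_OFF + NAME_LEN]
        names.append(raw.rstrip(b"\x00").decode("ascii", "replace"))
        i = mem.find(TAG, i + 1)
    return names

# Synthetic "memory image" containing two objects, one of them a rootkit's.
image = (b"\x00" * 16 + TAG + b"smss\x00\x00\x00\x00" + b"\xff" * 8
         + TAG + b"rootkit\x00" + b"\x00" * 4)
```

Because the scan enumerates objects directly from memory rather than walking OS-maintained lists, an unlinked (hidden) process still shows up.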
7.
This paper surveys existing hidden-process detection techniques on Windows and their countermeasures, proposes a detection technique based on memory searching, and improves its performance. The technique traverses the system address space using intrinsic features of process structures to rebuild a complete process list, against which hidden processes are detected. Experiments show the technique achieves good reliability, detection efficiency, and completeness.
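The difference from tag matching is that a brute-force search tests every aligned offset against intrinsic invariants of the process structure, so a rootkit cannot evade it by merely unlinking list entries. A sketch under invented invariants (a type byte and a size byte at fixed offsets):

```python
ALIGN = 8   # process objects assumed to start on 8-byte boundaries

def looks_like_process(mem, off):
    # Hypothetical invariants: type field 0x03 and size field 0x20.
    return mem[off] == 0x03 and mem[off + 1] == 0x20

def search_processes(mem):
    """Return offsets of every candidate satisfying the invariants."""
    return [off for off in range(0, len(mem) - 1, ALIGN)
            if looks_like_process(mem, off)]

buf = bytearray(64)
buf[16], buf[17] = 0x03, 0x20   # a valid-looking object at offset 16
buf[24] = 0x03                  # near miss: size field is wrong
```

The more invariants checked per candidate, the lower the false-positive rate; the abstract's performance improvement corresponds to pruning candidates early.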
8.
This paper studies techniques for hiding kernel-intrusion code on systems whose kernels do not support loadable modules. It compares intrusion methods for kernels with and without loadable-module support, and describes the eight main steps of such an intrusion: resolving the address of the system call table, resolving the address of the kmalloc function, writing a function to allocate kernel-space memory, writing the intrusion code, assembly-level processing, extracting the code segment and relocation information, allocating kernel-space memory, and writing the code into the allocated memory. Building on the principles of intrusion-code hiding, it presents a detailed design and implementation of hiding file information, process information, and network connections.
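The first two steps above amount to symbol resolution without module support. One common approach (a sketch, not the paper's specific method) is to parse a kallsyms-style symbol dump; the sample text and addresses below are invented:

```python
def resolve_symbol(kallsyms_text, name):
    """Return the address of `name` from /proc/kallsyms-style lines."""
    for line in kallsyms_text.splitlines():
        parts = line.split()          # "address type symbol"
        if len(parts) >= 3 and parts[2] == name:
            return int(parts[0], 16)
    return None

sample = """ffffffff81123450 T kmalloc
ffffffff81801400 R sys_call_table
ffffffff81123460 T kfree"""
```

With the resolved addresses in hand, the remaining steps (allocate kernel memory, relocate, and copy code in) operate through a raw kernel-memory interface rather than the module loader.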
9.
Memory forensics is an important branch of computer forensics, and recovering process and thread information from a memory image is one of its key challenges. This paper studies how to obtain processes and threads on Microsoft's Windows 8 platform. Using reverse engineering, it analyzes the kernel data structures related to processes and threads under Windows 8 and extracts their characteristic signatures; based on these signatures, it proposes an algorithm that recovers the current process and thread information from a physical-memory image file. Experimental results and analysis show the algorithm successfully extracts both hidden and non-hidden processes, together with each process's threads, providing a reliable data basis for memory forensic analysis.
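Carving processes and their threads out of a raw image can be sketched as follows, with entirely invented record layouts (a process record is tag `b"PR"` + u16 PID + 8-byte name; a thread record is tag `b"TH"` + u16 owner PID + u16 TID); real Windows 8 analysis walks EPROCESS/ETHREAD structures located the same way:

```python
import struct

def carve(image):
    """Recover {pid: name} and [(owner_pid, tid)] from a raw byte image."""
    procs, threads = {}, []
    for i in range(len(image) - 1):
        if image[i:i+2] == b"PR" and i + 12 <= len(image):
            pid, = struct.unpack_from("<H", image, i + 2)
            name = image[i+4:i+12].rstrip(b"\x00").decode("ascii", "replace")
            procs[pid] = name
        elif image[i:i+2] == b"TH" and i + 6 <= len(image):
            pid, tid = struct.unpack_from("<HH", image, i + 2)
            threads.append((pid, tid))
    return procs, threads

# Synthetic image: one process (pid 100, "svc") owning one thread (tid 7).
img = (b"PR" + struct.pack("<H", 100) + b"svc\x00\x00\x00\x00\x00"
       + b"TH" + struct.pack("<HH", 100, 7))
```

Linking threads back to processes by owner PID is what lets the analyst attribute activity even when the owning process was hidden from live-system APIs.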
10.
A rootkit is a set of programs or code that maintains a persistent, reliable presence on a computer system. To remain undetectable, a rootkit must hide its processes: rootkit process hiding runs covertly in the system background while stealing user information. This paper analyzes the principles of rootkit process hiding on Windows and studies both user-mode and kernel-mode hiding techniques. Targeting their characteristics, it develops a hidden-process detection platform based on a three-way cross mapping of handle tables. System tests show the platform detects hidden processes created by most current mainstream rootkit techniques and performs well in practice.
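A sketch of the three-way cross-mapping idea, with invented sample data: views from the user-mode API, the kernel process list, and the per-process handle tables are reconciled, treating the handle-table-derived view as the hardest to falsify; a PID missing from any higher-level view is flagged along with where it is hidden.

```python
def cross_map(api_view, kernel_list_view, handle_table_view):
    """Report {pid: [views it is hidden from]} for handle-table PIDs."""
    ground_truth = set(handle_table_view)
    report = {}
    for pid in sorted(ground_truth):
        missing = [name for name, view in
                   (("api", api_view), ("kernel-list", kernel_list_view))
                   if pid not in set(view)]
        if missing:
            report[pid] = missing
    return report
```

Reporting *which* view a process is hidden from also hints at the hiding technique: absent only from the API suggests user-mode hooking, absent from the kernel list suggests DKOM-style unlinking.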
11.
To address Linux's limited real-time support in memory management, this paper designs a scheme that improves real-time behavior in three ways: establishing memory mappings to reduce switches between user mode and kernel mode, locking memory to avoid paging, and replacing the original memory-management algorithm to eliminate nondeterminism in memory operations. The improved algorithm is based on partitioned management and best fit, with O(1) time complexity. Experimental results show the scheme improves the time performance of Linux memory management, especially under memory pressure, with gains of up to 49.5%, meeting real-time requirements.
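The partitioned, best-fit, O(1) allocation idea can be sketched as fixed-size partitions per size class, each with its own free list, so allocation and release are constant-time pops and pushes with no searching; the size classes and counts below are arbitrary demo values, not the paper's configuration.

```python
class PartitionAllocator:
    def __init__(self, classes):
        # classes: {block_size: block_count}; free lists hold block ids.
        self.free = {size: list(range(n)) for size, n in classes.items()}
        self.sizes = sorted(classes)

    def alloc(self, size):
        # Best fit among fixed classes: the smallest class that fits.
        # The loop is over a small constant number of classes, so the
        # per-request work is bounded; the pop itself is O(1).
        for s in self.sizes:
            if s >= size and self.free[s]:
                return (s, self.free[s].pop())
        return None                      # no partition large enough / free

    def free_block(self, block):
        size, bid = block
        self.free[size].append(bid)      # O(1) release

pool = PartitionAllocator({64: 2, 256: 2})
```

The determinism comes from pre-dividing (and, on a real system, pre-locking) the pool: no request can trigger a page fault or a variable-length search at run time.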
13.
Smartphone security has drawn wide attention. Trojans, a highly covert and deceptive class of attack, continue to spread on these platforms, and despite the attention they receive there are no effective defenses. Among the various vulnerability classes, privilege-escalation flaws pose a severe threat to Android: once an attacker obtains a kernel memory address, the attacker can disable the kernel's memory write protection, gain the ability to write malicious instructions into kernel memory, and use this to download trojans. To counter this vulnerability, the paper first analyzes the SEAndroid mechanism, then proposes a defense module that combines kernel hardening with packet filtering, and validates the module's effectiveness experimentally.
14.
Operating system designers attempt to keep high CPU utilization by maintaining an optimal multiprogramming level (MPL). Although running more processes makes it less likely to leave the CPU idle, too many processes adversely incur serious memory competition, and even introduce thrashing, which eventually lowers CPU utilization. A common practice to address the problem is to lower the MPL with the aid of process swapping out/in operations. This approach is expensive and is only used when the system begins serious thrashing. The objective of our study is to provide highly responsive and cost-effective thrashing protection by adaptively conducting priority page replacement in a timely manner. We have designed a dynamic system Thrashing Protection Facility (TPF) in the system kernel. Once TPF detects system thrashing, one of the active processes will be identified for protection. The identified process will have a short period of privilege in which it does not contribute its least recently used (LRU) pages for removal so that the process can quickly establish its working set, improving the CPU utilization. With the support of TPF, thrashing can be eliminated in its early stage by adaptive page replacement, so that process swapping will be avoided or delayed until it is truly necessary. We have implemented TPF in a current and representative Linux kernel running on an Intel Pentium machine. Compared with the original Linux page replacement, we show that TPF consistently and significantly reduces page faults and the execution time of each individual job in several groups of interacting SPEC CPU2000 programs. We also show that TPF introduces little additional overhead to program executions, and its implementation in Linux (or Unix) systems is straightforward. Copyright © 2002 John Wiley & Sons, Ltd.
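The protection-window mechanism can be sketched as a toy global LRU list shared by all processes, where replacement skips pages belonging to the currently protected process; the class name, frame count, and access pattern are invented for illustration, not taken from the TPF implementation.

```python
from collections import deque

class TPFMemory:
    def __init__(self, frames):
        self.frames = frames
        self.lru = deque()        # (pid, page); leftmost = least recent
        self.protected = None     # process inside its protection window

    def touch(self, pid, page):
        if (pid, page) in self.lru:
            self.lru.remove((pid, page))       # re-reference: move to MRU end
        elif len(self.lru) >= self.frames:
            self.evict()
        self.lru.append((pid, page))

    def evict(self):
        # Evict the least-recently-used page of an *unprotected* process.
        for item in list(self.lru):
            if item[0] != self.protected:
                self.lru.remove(item)
                return item
        return self.lru.popleft()  # nothing unprotected: plain LRU fallback
```

Because the privilege window is short, the exemption only lasts long enough for the protected process to rebuild its working set, after which normal LRU replacement resumes.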
15.
Kun WANG, Song WU, Shengbang LI, Zhuo HUANG, Hao FAN, Chen YU, Hai JIN 《Frontiers of Computer Science》2024,18(2):182102
Container-based virtualization is becoming increasingly popular in cloud computing due to its efficiency and flexibility. Resource isolation is a fundamental property of containers. Existing works have indicated that weak resource isolation can cause significant performance degradation for containerized applications, and have enhanced resource isolation accordingly. However, current studies have rarely discussed the isolation problems of page cache, which is a key resource for containers. Containers leverage memory cgroup to control page cache usage. Unfortunately, the existing policy introduces two major problems in a container-based environment. First, containers can utilize more memory than limited by their cgroup, effectively breaking memory isolation. Second, the OS kernel has to evict page cache to make space for newly-arrived memory requests, slowing down containerized applications. This paper performs an empirical study of these problems and demonstrates the performance impacts on containerized applications. Then we propose pCache (precise control of page cache) to address the problems by dividing page cache into private and shared parts and controlling both kinds of page cache separately and precisely. To do so, pCache leverages two new technologies: fair account (f-account) and evict on demand (EoD). F-account splits the shared page cache charging based on per-container share to prevent containers from using memory for free, enhancing memory isolation. And EoD reduces unnecessary page cache evictions to avoid the performance impacts. The evaluation results demonstrate that our system can effectively enhance memory isolation for containers and achieve substantial performance improvement over the original page cache management policy.
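The f-account charging rule can be sketched in a few lines: a private page is charged fully to its container, while a page shared by n containers is charged 1/n to each, so total charges always equal total pages and no container uses shared cache for free. The data and the per-page granularity are simplified for illustration.

```python
def f_account(page_users):
    """page_users: {page_id: [containers using it]} -> per-container charge."""
    charge = {}
    for users in page_users.values():
        share = 1.0 / len(users)       # split the page across its n users
        for c in users:
            charge[c] = charge.get(c, 0.0) + share
    return charge

# p1 is private to c1, p2 is shared by c1 and c2, p3 is private to c2.
usage = {"p1": ["c1"], "p2": ["c1", "c2"], "p3": ["c2"]}
```

A useful invariant to check in any such scheme is conservation: the charges summed over all containers equal the number of cached pages.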
17.
Most computer systems use a global page replacement policy based on the LRU principle to approximately select a Least Recently Used page for replacement in the entire user memory space. During execution interactions, a memory page can be marked as LRU even when its program is conducting page faults. We define the LRU pages under such a condition as false LRU pages, because these LRU pages are not produced by program memory reference delays, which is inconsistent with the LRU principle. False LRU pages can significantly increase page faults, and even cause system thrashing. This poses a more serious risk in large parallel systems with distributed memories because of the coordination among processes running on individual nodes. In that case, process thrashing in a single node or a small number of nodes could severely affect other nodes running coordinating processes, and even crash the whole system. In this paper, we focus on how to improve the page replacement algorithm running on one node.
After a careful study of the memory usage and thrashing behaviors in multiprogramming systems using LRU replacement, we propose an LRU replacement alternative, called token-ordered LRU, to eliminate or reduce the unnecessary page faults by effectively ordering and scheduling memory space allocations. Compared with traditional thrashing protection mechanisms such as load control, our policy allows more processes to keep running to support synchronous distributed process computing. We have implemented the token-ordered LRU algorithm in a Linux kernel to show its effectiveness.
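A toy model of the token idea: a process resolving page faults may hold a system-wide token, and the token holder's pages are skipped when a replacement victim is chosen, preventing the "false LRU" pages created while it was blocked on faults from being evicted. The class and data are invented for illustration, not the kernel implementation.

```python
class TokenLRU:
    def __init__(self):
        self.token_holder = None
        self.pages = []            # (pid, page); front = least recent

    def add(self, pid, page):
        self.pages.append((pid, page))

    def pick_victim(self):
        """Choose the LRU page of any process except the token holder."""
        for pid, page in self.pages:          # scan in LRU order
            if pid != self.token_holder:      # token holder is exempt
                self.pages.remove((pid, page))
                return (pid, page)
        return None                           # only token-holder pages left
```

Since only one process holds the token at a time, the exemption orders memory grants among competing processes instead of letting them steal pages from each other symmetrically.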
18.
DASH is a distributed operating system kernel. Message-passing (MP) is used for local communication, and the MP system uses virtual memory (VM) remapping instead of software memory copying for moving large amounts of data between virtual address spaces. Remapping eliminates a potential communication bottleneck and may increase the feasibility of moving services such as file services to the user level. Previous systems that have used VM remapping for message transfer, however, have suffered from high per-operation delay, limiting the use of the technique. The DASH design reduces this delay by restricting the generality of remapping: a fixed part of every space is reserved for remapping, and a page's virtual address does not change when it is moved between spaces. We measured the performance of the DASH kernel for Sun 3/50 workstations, on which memory can be copied at 3.9 MB/s. Using remapping, DASH can move large messages between user spaces at a rate of 39 MB/s if they are not referenced and 24.8 MB/s if each page is referenced. Furthermore, the per-operation delay is low, so VM remapping is beneficial even for messages containing only one page. To further understand the performance of the DASH MP system, we broke an MP operation into short code segments and timed them with microsecond precision. The results show the relative costs of data movement and the other components of MP operations, and allow us to evaluate several specific design decisions.
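The restricted-remapping design can be sketched by modeling each address space as a table from virtual page numbers to pages: every space reserves the same fixed window for messages, so a send transfers page references between tables while each page keeps its virtual address, and no bytes are copied. The window base and message contents are invented for the demo.

```python
REMAP_BASE = 0x1000   # hypothetical fixed remap window, same in every space

def send(sender_space, receiver_space, vpages):
    """Move message pages by remapping: O(pages) reference moves, no copies."""
    for vp in vpages:
        receiver_space[vp] = sender_space.pop(vp)   # page keeps its address

# Two "address spaces" holding a two-page message in the remap window.
client = {REMAP_BASE + 0: bytearray(b"request"), REMAP_BASE + 1: bytearray(8)}
server = {}
```

Fixing the window and the virtual addresses is what removes the per-operation cost of general remapping: no address translation entries need to be invented or searched for at transfer time.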
19.
Although nonuniform memory access architecture provides better scalability for multicore systems, cores accessing memory on remote nodes take longer than those accessing local nodes. Remote memory access, accompanied by contention for the internode interconnect, degrades performance. Properly mapping threads to cores and data to the nodes that access it can substantially improve performance and energy efficiency. However, an operating system kernel's load-balancing activity may migrate threads across nodes, which disrupts the thread mapping. In addition, subsequent data-mapping behavior pays the cost of page migration to reduce remote memory access. Once unsuitable threads are migrated, system performance suffers. This paper focuses on improving the kernel's internode load balancing on nonuniform memory access systems. We develop a memory-aware kernel mechanism and policies to reduce remote memory access incurred by internode thread migration. The Linux kernel's load-balancing mechanism is modified to incorporate selection policies in internode thread migration, and the kernel is modified to track the amount of memory used by each thread on each node. With this information, well-designed policies can then choose suitable threads for internode migration. The purpose is to avoid migrating a thread that might incur relatively more remote memory access and page migration. The experimental results show that with our mechanism and the proposed selection policies, system performance is substantially increased compared with the unmodified Linux kernel, which does not consider memory usage and always migrates the first-fit thread in the runqueue that can be migrated to the target CPU.
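One plausible selection policy consistent with the description above (a sketch with invented per-thread page counts, not the paper's exact policy): given how many pages each runnable thread has on each node, prefer migrating the thread with the least memory on its current node, since moving it creates the least new remote access and page-migration work.

```python
def pick_thread(candidates, mem_on_node, src_node):
    """candidates: runnable thread ids; mem_on_node: {tid: {node: pages}}.
    Return the thread whose migration leaves the fewest pages behind."""
    return min(candidates,
               key=lambda tid: mem_on_node[tid].get(src_node, 0))

# Hypothetical page counts per thread per NUMA node.
mem = {"t1": {0: 400, 1: 10},    # heavy on node 0: bad candidate
       "t2": {0: 5, 1: 300},     # barely uses node 0: good candidate
       "t3": {0: 50}}
```

Contrast this with the default first-fit behavior the abstract criticizes, which would migrate whichever movable thread appears first in the runqueue regardless of its memory footprint.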