Similar Literature
20 similar documents found (search time: 31 ms)
1.
A big hole in a big wall. The most interesting vulnerability is certainly the one in the world's most widely deployed commercial firewall, Check Point Firewall-1, which can be exploited to gain control of the firewall itself.

2.
Two serious flaws in popular client firewall software were uncovered in February.

3.
4.
5.

Zero-day attacks and unpatched flaws

A vulnerability discovered in Internet Explorer at the beginning of December 2003 was an early Xmas present to scammers, who could exploit it to lure unsuspecting users into disclosing credit card information via an attack known as "phishing". It didn't take long for the Web bandits to try out their present, and it has yet to be taken away from them…

6.
More and more vulnerabilities are being exploited, and more rapidly than they used to be. During the last two months we have seen exploitation of the ICQ hole in certain ISS products and of the LSASS and PCT vulnerabilities in Microsoft Windows. All were exploited within a very short time after information about the vulnerabilities was published.

7.
The big question     
We asked nine creatives the following question: "Which piece of print design has most inspired your work?"

8.
Nicolle  Lindsay 《ITNOW》1998,40(6):24-26

9.
10.
11.
12.
Lewis  T. 《Computer》1996,29(3):12-14
If a technology (or idea) does not achieve mainstream status quickly enough, it dies. Video on demand (interactive TV), the information superhighway (ISDN), and massively parallel supercomputing may be examples. These ideas are okay, but they could die for lack of legs. At present, consumers are simply shunning them, illustrating the power of Information Age mainstreaming. A corollary to this law is that a technology (or idea) thrives, even if it is a bad technology or idea, as long as it quickly achieves mainstream status. Microsoft Windows, Java, C++ and others illustrate the overwhelming power of mainstreaming. It's positive feedback. Simply put, the rich get richer, especially when they hold a monopoly. In the Information Age, the definition of wealth includes domination of standards as well as having cash in the bank. The problem with software is that software companies don't get paid unless they reap a profit within the time limit set by the mainstreaming law. Commercial software companies have to hit the big time, or else

13.
The big bang–big crunch (BBBC) algorithm is a fairly novel gradient-free optimisation algorithm, based on theories of the evolution of the universe, namely the big bang and big crunch. The big challenge in BBBC is that it is easily trapped in local optima. In this paper, chaotic-based strategies are incorporated into BBBC to tackle this challenge. Five chaotic-based BBBC strategies with three different chaotic map functions are investigated, and the best one is selected as the proposed chaotic strategy for BBBC. The results of applying the proposed chaotic BBBC to various unimodal and multimodal benchmark functions show that chaotic-based BBBC yields quality solutions: it significantly outperforms conventional BBBC, cuckoo search optimisation, and the gravitational search algorithm.
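For illustration, the sketch below shows one generic way to chaos-enhance a BBBC iteration in Python, using a logistic map as the chaotic source. The abstract does not specify the paper's five strategies or three map functions, so the update rule, function names, and parameters here are assumptions, not the authors' exact method.

```python
import numpy as np

def logistic_map(x, r=4.0):
    # One step of the logistic map, a commonly used chaotic map on (0, 1).
    return r * x * (1.0 - x)

def chaotic_bbbc(f, lo, hi, pop_size=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    chaos = rng.uniform(0.1, 0.9, size=(pop_size, dim))  # avoid the map's fixed points
    best_x, best_f = None, np.inf
    for k in range(1, iters + 1):
        fit = np.array([f(x) for x in pop])
        i = int(np.argmin(fit))
        if fit[i] < best_f:
            best_x, best_f = pop[i].copy(), float(fit[i])
        # Big crunch: fitness-weighted centre of mass (minimisation).
        w = 1.0 / (fit - fit.min() + 1e-12)
        centre = (w[:, None] * pop).sum(axis=0) / w.sum()
        # Big bang: scatter new candidates around the centre; the chaotic
        # sequence replaces the usual Gaussian noise, and the 1/k factor
        # shrinks the search radius so the search gradually contracts.
        chaos = logistic_map(chaos)
        pop = np.clip(centre + (hi - lo) * (2.0 * chaos - 1.0) / k, lo, hi)
    return best_x, best_f

# Example: minimise the 5-dimensional sphere function.
x, fx = chaotic_bbbc(lambda v: float((v ** 2).sum()), [-5.0] * 5, [5.0] * 5)
print(x, fx)
```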

14.
Big data has been considered a breakthrough technological development in recent years. Nevertheless, we still have limited understanding of how organizations translate its potential into actual social and economic value. We conduct an in-depth systematic review of the IS literature on the topic and identify six debates central to how organizations realize value from big data, at different levels of analysis. Based on this review, we identify two socio-technical features of big data that influence value realization: portability and interconnectivity. We argue that, in practice, organizations need to continuously realign work practices, organizational models, and stakeholder interests in order to reap the benefits of big data. We synthesize the findings by means of an integrated model.

15.
At present, the health care industry is developing vigorously and has become one of the most widely developed industries in the world. Medical centers and service centers in various regions have begun to shift from a medical model to a health care model; field-programmable gate arrays (FPGAs) offer great advantages in this respect, and patient-centered nursing is one of the guiding principles. With the vigorous development of machine learning, its range of application keeps widening, and its use in medicine is now common: people use machine learning to process big data in the medical field. To manage patient data better and realize patient-centered care, large volumes of health data must be analyzed. Traditional management tools cannot support the analysis of modern data, so advanced big data processing technology and up-to-date tools should be used to meet current medical needs. In the proposed system, signal-processing-based big data evaluation is carried out on an FPGA, and three machine-learning-based stages are executed. The first stage, preprocessing, eliminates noise from the images and discards irrelevant data. The second stage performs feature selection using a decision-tree technique. The final stage, classification, applies a machine-learning technique to analyze the accuracy level on the big data. The FPGA-based machine-learning technique is used to achieve better results for the proposed system.
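As a software-side illustration of the three stages the abstract names (preprocessing, decision-tree-based feature selection, and classification), here is a minimal sketch in Python with scikit-learn. The synthetic dataset is a stand-in assumption and the FPGA acceleration layer is out of scope; nothing here reproduces the paper's actual system.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for preprocessed medical records.
X, y = make_classification(n_samples=2000, n_features=40, n_informative=8,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipe = Pipeline([
    # Stage 1: preprocessing (normalisation as a simple noise-reduction proxy).
    ("preprocess", StandardScaler()),
    # Stage 2: feature selection driven by decision-tree importances.
    ("select", SelectFromModel(DecisionTreeClassifier(random_state=0))),
    # Stage 3: classification, evaluated by accuracy.
    ("classify", RandomForestClassifier(random_state=0)),
])
pipe.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, pipe.predict(X_test)))
```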

16.
Trends in big data analytics
One of the major applications of future generation parallel and distributed systems is in big-data analytics. Data repositories for such applications currently exceed exabytes and are rapidly increasing in size. Beyond their sheer magnitude, these datasets and associated applications' considerations pose significant challenges for method and software development. Datasets are often distributed, and their size and privacy considerations warrant distributed techniques. Data often resides on platforms with widely varying computational and network capabilities. Considerations of fault-tolerance, security, and access control are critical in many applications (Dean and Ghemawat, 2004; Apache Hadoop). Analysis tasks often have hard deadlines, and data quality is a major concern in yet other applications. For most emerging applications, data-driven models and methods, capable of operating at scale, are as yet unknown. Even when known methods can be scaled, validation of results is a major issue. Characteristics of hardware platforms and the software stack fundamentally impact data analytics. In this article, we provide an overview of the state of the art and focus on emerging trends to highlight the hardware, software, and application landscape of big-data analytics.

17.
18.
《Computer》2002,35(6):25-25
Researchers from Hewlett-Packard and the University of California, Los Angeles, have received a patent for a technology they claim will turn out nanochips that could be used to build very small, very powerful computers. The patent was for an electrochemical process that would let designers divide a nanochip into different sets of subcircuitry, each of which could function independently and conduct a different set of calculations. This would let the chip handle multiple functions as efficiently as traditional integrated circuits.

19.
Lewis  T. 《Computer》1995,28(6):6-7
Someone once told the author that toy manufacturers had greater influence over computer design than computer scientists, and this is true: the really big bucks are in consumer software products (about $90 billion per year), not information processing ($30-40 billion), client/server software (a few billion), scientific computing (maybe $2 billion), or multimedia (tens of billions). The author discusses the reasons for this and makes some predictions about the future of the computing industry.

20.
So far, data anonymization approaches based on k-anonymity and l-diversity have contributed much to protecting privacy against record and attribute linkage attacks. However, the existing solutions are not efficient when applied to multimedia big data anonymization. This paper analyzes the problem in detail in terms of processing time, memory space, and usability, and presents two schemes to overcome the inefficiency. The first reduces processing time and space by minimizing temporary buffer usage during the anonymization process. The second constructs an early taxonomy during database design; the idea behind this approach is that database designers should take preliminary actions for anonymization at an early stage of database design to alleviate the burden placed on data publishers. To evaluate the effectiveness and feasibility of these schemes, application tools based on the proposed approaches were implemented and experiments were conducted.
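For background, the k-anonymity baseline that such schemes build on can be sketched briefly in Python with pandas: group records by their (already generalised) quasi-identifiers and suppress any group smaller than k. The column names and toy records below are illustrative assumptions; the paper's buffer-minimisation and early-taxonomy schemes are not shown.

```python
import pandas as pd

def enforce_k_anonymity(df, quasi_ids, k):
    """Drop records whose quasi-identifier combination occurs fewer than k times."""
    group_sizes = df.groupby(quasi_ids)[quasi_ids[0]].transform("size")
    return df[group_sizes >= k]

# Toy records whose quasi-identifiers are already generalised
# (ages binned into ranges, ZIP codes truncated).
records = pd.DataFrame({
    "age_range":  ["20-29", "20-29", "20-29", "30-39"],
    "zip_prefix": ["100**", "100**", "100**", "101**"],
    "diagnosis":  ["flu", "cold", "flu", "asthma"],
})

# With k = 2, the unique ("30-39", "101**") record is suppressed.
print(enforce_k_anonymity(records, ["age_range", "zip_prefix"], k=2))
```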
