Similar Documents
Found 19 similar documents (search time: 171 ms)
1.
We propose a new Web information extraction system; this paper explains the outline of the system and the algorithm used to extract information. A typical Web page consists of multiple elements with different functions, such as the main content, navigation panels, copyright and privacy notices, and advertisements, yet visitors usually need only a small part of each page. A system for extracting pieces of Web pages is therefore needed; ours enables users to extract Web blocks simply by marking clipping areas with the mouse. Web blocks are clickable image maps, generated by imaging the page and detecting hyperlink areas on the client side. The distinguishing feature of our system is that Web blocks preserve the layouts and hyperlinks of the original Web pages. Users can access and manage their Web blocks via Evernote, a cloud storage service, and HTML snippets for Web blocks let users easily reuse Web content on their own sites.

2.
Web Page Classification Based on Ontology and EM Methods   Cited: 1 (self-citations: 1, others: 1)
Work on extracting semantic information from substantive Web pages and using it in search engines can lead to intelligent retrieval and other personalized services. This paper focuses on Web page classification: with an ontology as the base, TFIDF term weights and the Rocchio algorithm are combined with EM to improve classifier accuracy. It is shown that this EM procedure improves accuracy by exploiting unlabeled pages when labeled samples are limited.
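The TFIDF-plus-Rocchio-plus-EM combination described in this abstract can be sketched roughly as follows. This is an illustrative assumption of how the pieces fit together, not the paper's implementation: TF-IDF vectors, Rocchio (centroid) prototypes per class, and one EM-style pass that pseudo-labels unlabeled documents and recomputes the centroids. All data and helper names are made up.

```python
# Hedged sketch: Rocchio centroids over TF-IDF vectors plus one EM-style
# self-training pass on unlabeled documents. Toy data, illustrative only.
import math
from collections import Counter, defaultdict

def tfidf(docs):
    """TF-IDF vectors (as sparse dicts) for a list of token lists."""
    df = Counter(t for d in docs for t in set(d))
    n = len(docs)
    return [{t: c / len(d) * math.log((1 + n) / (1 + df[t]))
             for t, c in Counter(d).items()} for d in docs]

def centroid(vectors):
    """Average a list of sparse vectors (the Rocchio class prototype)."""
    acc = defaultdict(float)
    for v in vectors:
        for t, w in v.items():
            acc[t] += w / len(vectors)
    return acc

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [d.split() for d in [
    "travel hotel flight deals", "football goal match striker",
    "hotel booking cheap travel", "match goal league score"]]
labels = [0, 1, 0, 1]                     # 0 = travel, 1 = sports
unlabeled = [d.split() for d in ["cheap hotel flight", "striker scores a goal"]]

vecs = tfidf(docs + unlabeled)
lv, uv = vecs[:4], vecs[4:]
cents = {c: centroid([v for v, y in zip(lv, labels) if y == c]) for c in (0, 1)}

# EM-style step: pseudo-label unlabeled docs by nearest centroid, recompute.
pseudo = [max(cents, key=lambda c: cosine(v, cents[c])) for v in uv]
all_v, all_y = lv + uv, labels + pseudo
cents = {c: centroid([v for v, y in zip(all_v, all_y) if y == c]) for c in (0, 1)}

def classify(tokens):
    v = tfidf(docs + unlabeled + [tokens])[-1]
    return max(cents, key=lambda c: cosine(v, cents[c]))

print(classify("goal match".split()))     # expected: sports class (1)
```

With more unlabeled data the E-step/M-step pair would iterate until the pseudo-labels stabilize; the single pass above is the minimal version of that loop.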

3.
There are two important problems in online retail: 1) the conflict between customers' differing interests in different commodities and the commodity classification structure of the Web site; and 2) many customers simultaneously buy both beer and diapers, items classified in different classes and levels of the Web site, which is the classic problem in data mining. Both problems force the majority of customers to traverse an excessive number of Web pages. To solve them, we mine Web page data, server data, and marketing data to build an adaptive model. In this model, frequently purchased commodities and their associated commodity sets, discovered by association rule mining, are placed on suitable Web pages according to the placing method and the backing-off method, so that navigation pages become navigation-content pages. The Web site thus adapts itself according to users' access and purchase information. In online retail, designers need to understand latent users' interests in order to convert latent users into purchasing users. In this paper, we give an approach to discovering Internet marketing intelligence through OLAP in order to help designers improve their service.
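The association-rule step behind the beer-and-diaper example above can be illustrated with a minimal frequent-pair miner. This is a generic Apriori-style sketch with made-up transactions and a made-up support threshold, not the paper's algorithm:

```python
# Illustrative sketch: mine frequently co-purchased item pairs that meet a
# minimum support threshold (the "beer and diaper" pattern).
from itertools import combinations
from collections import Counter

transactions = [
    {"beer", "diaper", "chips"},
    {"beer", "diaper"},
    {"milk", "bread"},
    {"beer", "diaper", "milk"},
]
min_support = 0.5                          # fraction of all transactions

pair_counts = Counter(
    pair for t in transactions for pair in combinations(sorted(t), 2))
n = len(transactions)
frequent = {pair: c / n for pair, c in pair_counts.items() if c / n >= min_support}
print(frequent)                            # {('beer', 'diaper'): 0.75}
```

A full Apriori run would extend frequent pairs to larger itemsets and derive rules with confidence values; the adaptive model in the abstract would then place each frequent itemset on a shared navigation-content page.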

4.
Ji Rong Li. 通讯和计算机 (Communications and Computer), 2013, (5): 720-723
Optimal fuzzy-valued feature subset selection views imprecise feature values as fuzzy sets, so the information they contain is not lost as it is with traditional methods. The performance of classification depends directly on the quality of the training corpus. In practical applications, noisy examples are unavoidable in the training corpus and thus degrade the classification approach. This paper presents an algorithm for eliminating class noise based on analysis of the representative class information of the examples, which is acquired by mining the classification ambiguity of feature values. The proposed algorithm is applied to fuzzy decision tree induction. Experimental results show that the algorithm effectively reduces the influence of noisy examples and raises classification accuracy on data sets with a high noise ratio.

5.
With the rapid increase of memory consumption by applications running in cloud data centers, we need more efficient memory management in virtualized environments. Exploiting huge pages becomes more critical for a virtual machine's performance when it runs programs with large working set sizes. Such programs are more sensitive to memory allocation, which requires us to quickly adjust the virtual machine's memory to accommodate memory phase changes. It would be much more efficient if we could adjust virtual machines' memory at the granularity of huge pages. However, existing virtual machine memory reallocation techniques, such as ballooning, do not support huge pages. In addition, in order to drive effective memory reallocation, we need to predict the actual memory demand of a virtual machine. We find that traditional memory demand estimation methods designed for regular pages cannot simply be ported to a system adopting huge pages. How to adjust the memory of virtual machines in a timely and effective way according to periodic changes in memory demand is another challenge. This paper proposes a dynamic huge-page-based memory balancing system (HPMBS) for efficient memory management in a virtualized environment. We first rebuild the ballooning mechanism in order to dispatch memory at the granularity of huge pages. We then design and implement a huge page working set size estimation mechanism that can accurately estimate a virtual machine's memory demand in huge pages environments. Combining these two mechanisms, we finally use an algorithm based on dynamic programming to achieve dynamic memory balancing. Experiments show that our system saves memory and improves overall system performance with low overhead.

6.
A site-based proxy cache   Cited: 4 (self-citations: 0, others: 4)
In traditional proxy caches, every visited page from any Web server is cached independently, ignoring the connections between pages, and users still have to frequently visit indexing pages just to reach useful informative ones, which wastes caching space and causes unnecessary Web traffic. To solve this problem, this paper introduces a site graph model to describe the WWW and builds a site-based replacement strategy on it. The concept of "access frequency" is developed to evaluate whether a Web page is worth keeping in the caching space. On the basis of a user's access history, auxiliary navigation information is provided to help the user reach target pages more quickly. Performance tests show that the proposed proxy cache system achieves a higher hit ratio than traditional ones and reduces users' access latency effectively.
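The "access frequency" criterion above can be sketched as a frequency-aware eviction policy. This toy cache (class name, capacity, and keys are all illustrative assumptions, and it ignores the site-graph part of the paper) evicts the entry with the lowest access count when full:

```python
# Toy sketch of frequency-aware cache replacement: evict the least-frequently
# accessed entry, in the spirit of the "access frequency" idea.
class FreqCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store, self.freq = {}, {}

    def get(self, key):
        if key in self.store:
            self.freq[key] += 1            # each hit raises the frequency
            return self.store[key]
        return None

    def put(self, key, value):
        if key not in self.store and len(self.store) >= self.capacity:
            victim = min(self.freq, key=self.freq.get)   # lowest frequency
            del self.store[victim], self.freq[victim]
        self.store[key] = value
        self.freq[key] = self.freq.get(key, 0) + 1

cache = FreqCache(2)
cache.put("/index.html", "A"); cache.get("/index.html")
cache.put("/news.html", "B")
cache.put("/sports.html", "C")             # evicts /news.html (frequency 1)
print(sorted(cache.store))                 # ['/index.html', '/sports.html']
```

A site-based strategy would additionally weight pages by their role in the site graph (e.g. keeping informative leaf pages over pure index pages); the eviction skeleton stays the same.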

7.
The topology of topic-specific websites hides rich information for data mining. A new topic-specific association rule mining algorithm is proposed to further research in this area. The key idea is to analyze the frequent hyperlink relations between pages of different topics. In a topic-specific area, if pages of one topic are frequently hyperlinked by pages of another topic, we consider the two topics relevant. Likewise, if pages of two different topics are frequently hyperlinked together by pages of a third topic, we consider those two topics relevant. Initial experiments show that the algorithm performs well in guiding a topic-specific crawling agent and can be applied to further discovery and mining on topic-specific websites.

8.
Data extraction from the web based on pre-defined schema   Cited: 8 (self-citations: 1, others: 7)
With the development of the Internet, the World Wide Web has become an invaluable information source for most organizations. However, most documents available on the Web are in HTML, which was originally designed for document formatting with little consideration of content. Effectively extracting data from such documents remains a non-trivial task. In this paper, we present a schema-guided approach to extracting data from HTML pages. Under this approach, the user defines a schema specifying what is to be extracted and provides sample mappings between the schema and the HTML page. The system induces the mapping rules and generates a wrapper that takes the HTML page as input and produces the required data as XML conforming to the user-defined schema. A prototype system implementing the approach has been developed. Preliminary experiments indicate that the proposed semi-automatic approach is not only easy to use but also able to produce a wrapper that extracts the required data from input pages with high accuracy.

9.
Web hyperlink structure analysis algorithms play a significant role in improving the precision of Web information retrieval. Current link algorithms use an iteration function to compute Web resource weights; their major drawback is that every Web document receives a fixed rank that is independent of Web queries. This paper proposes an improved algorithm that ranks the quality and relevance of a page dynamically according to the user's query. Experiments show that it improves on current link analysis algorithms.
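The query-dependent idea above can be sketched as a blend of a static, link-derived authority score with a per-query relevance score. The page data, the `alpha` mixing weight, and the word-overlap relevance measure are all assumptions for illustration, not the paper's formulas:

```python
# Illustrative sketch of query-sensitive ranking: mix a precomputed link-based
# authority score with a relevance score computed fresh for each query.
pages = {
    "p1": {"authority": 0.9, "text": "java programming tutorial"},
    "p2": {"authority": 0.4, "text": "python web scraping guide"},
    "p3": {"authority": 0.6, "text": "python tutorial for beginners"},
}

def rank(query, alpha=0.5):
    q = set(query.split())
    def score(p):
        words = set(pages[p]["text"].split())
        relevance = len(q & words) / len(q) if q else 0.0   # query overlap
        return alpha * pages[p]["authority"] + (1 - alpha) * relevance
    return sorted(pages, key=score, reverse=True)

print(rank("python tutorial"))             # ['p3', 'p1', 'p2']
```

With a fixed rank, p1 (highest authority) would always win; mixing in relevance lets the moderately authoritative but highly relevant p3 rise to the top for this query.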

10.
Algorithms for numeric data classification have been applied to text classification. Usually the vector space model is used to represent text collections, but characteristics of this representation, such as sparsity and high dimensionality, sometimes impair the quality of general-purpose classifiers. Networks can also represent text collections, avoiding high sparsity and allowing relationships among the different objects that compose a text collection to be modeled. Such network-based representations can improve the quality of classification results. One of the simplest ways to represent a textual collection by a network is a bipartite heterogeneous network, composed of objects representing the documents connected to objects representing the terms. Bipartite heterogeneous networks do not require computing similarities or relations among the objects and can model any type of text collection. Given these advantages, in this article we present a text classifier that builds its classification model using the structure of a bipartite heterogeneous network. The algorithm, referred to as IMBHN (Inductive Model Based on Bipartite Heterogeneous Network), induces a classification model by assigning weights to the term objects for each class of the text collection. An empirical evaluation using a large number of text collections from different domains shows that the proposed IMBHN algorithm produces significantly better results than the k-NN, C4.5, SVM, and Naive Bayes algorithms.
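The core of the idea above, term objects carrying a learned weight per class, can be sketched with a small error-driven weight-induction loop. This is inspired by, but not identical to, IMBHN; the toy documents, learning rate, and update rule are assumptions:

```python
# Rough sketch of inducing term-to-class weights on a bipartite document-term
# structure via error-driven updates. Toy data; not the published algorithm.
docs = [({"goal", "match"}, "sports"), ({"hotel", "flight"}, "travel"),
        ({"match", "league"}, "sports"), ({"flight", "deals"}, "travel")]
classes = ["sports", "travel"]
terms = sorted({t for d, _ in docs for t in d})
w = {t: {c: 0.0 for c in classes} for t in terms}   # term-class weight table

eta = 0.5
for _ in range(20):                        # iterate until weights settle
    for d, y in docs:
        for c in classes:
            pred = sum(w[t][c] for t in d) / len(d)
            err = (1.0 if c == y else 0.0) - pred   # target 1 for true class
            for t in d:
                w[t][c] += eta * err / len(d)

def classify(d):
    return max(classes, key=lambda c: sum(w[t][c] for t in d))

print(classify({"goal", "league"}))        # expected: sports
```

The weight table plays the role of the network's term-class edges: classification of a new document is just a sum over the weights of the terms it touches, with no document-document similarity computation.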

11.
Information services play a key role in grid systems, handling resource discovery and management. Employing existing information service architectures suffers from poor scalability, long search response times, and large traffic overhead. In this paper, we propose a service club mechanism, called S-Club, for efficient service discovery. In S-Club, an overlay based on the existing Grid Information Service (GIS) mesh network of CROWN is built, so that GISs are organized into service clubs. Each club serves a certain type of service, while each GIS may join one or more clubs. S-Club is adopted in our CROWN Grid, and its performance is evaluated by comprehensive simulations. The results show that the S-Club scheme significantly improves search performance and outperforms existing approaches. Chunming Hu is a research staff member in the Institute of Advanced Computing Technology at the School of Computer Science and Engineering, Beihang University, Beijing, China. He received his B.E. and M.E. from the Department of Computer Science and Engineering at Beihang University, and his Ph.D. from the School of Computer Science and Engineering of Beihang University, Beijing, China, in 2005. His research interests include peer-to-peer and grid computing, distributed systems, and software architectures. Yanmin Zhu is a Ph.D. candidate in the Department of Computer Science, Hong Kong University of Science and Technology. He received his B.S. degree in computer science from Xi’an Jiaotong University, Xi’an, China, in 2002. His research interests include grid computing, peer-to-peer networking, pervasive computing and sensor networks. He is a member of the IEEE and the IEEE Computer Society. Jinpeng Huai is a Professor and Vice President of Beihang University. He serves on the Steering Committee for Advanced Computing Technology Subject, the National High-Tech Program (863), as Chief Scientist.
He is a member of the Consulting Committee of the Central Government’s Information Office, and Chairman of the Expert Committee in both the National e-Government Engineering Taskforce and the National e-Government Standard office. Dr. Huai and his colleagues are leading the key projects in e-Science of the National Science Foundation of China (NSFC) and Sino-UK. He has authored over 100 papers. His research interests include middleware, peer-to-peer (P2P), grid computing, trustworthiness and security. Yunhao Liu received his B.S. degree in Automation Department from Tsinghua University, China, in 1995, and an M.A. degree in Beijing Foreign Studies University, China, in 1997, and an M.S. and a Ph.D. degree in computer science and engineering at Michigan State University in 2003 and 2004, respectively. He is now an assistant professor in the Department of Computer Science and Engineering at Hong Kong University of Science and Technology. His research interests include peer-to-peer computing, pervasive computing, distributed systems, network security, grid computing, and high-speed networking. He is a senior member of the IEEE Computer Society. Lionel M. Ni is chair professor and head of the Computer Science and Engineering Department at Hong Kong University of Science and Technology. Lionel M. Ni received the Ph.D. degree in electrical and computer engineering from Purdue University, West Lafayette, Indiana, in 1980. He was a professor of computer science and engineering at Michigan State University from 1981 to 2003, where he received the Distinguished Faculty Award in 1994. His research interests include parallel architectures, distributed systems, high-speed networks, and pervasive computing. A fellow of the IEEE and the IEEE Computer Society, he has chaired many professional conferences and has received a number of awards for authoring outstanding papers.  相似文献   

12.
A range query finds the aggregated values over all selected cells of an online analytical processing (OLAP) data cube where the selection is specified by the ranges of contiguous values for each dimension. An important issue in reality is how to preserve the confidential information in individual data cells while still providing an accurate estimation of the original aggregated values for range queries. In this paper, we propose an effective solution, called the zero-sum method, to this problem. We derive theoretical formulas to analyse the performance of our method. Empirical experiments are also carried out by using analytical processing benchmark (APB) dataset from the OLAP Council. Various parameters, such as the privacy factor and the accuracy factor, have been considered and tested in the experiments. Finally, our experimental results show that there is a trade-off between privacy preservation and range query accuracy, and the zero-sum method has fulfilled three design goals: security, accuracy, and accessibility. Sam Y. Sung is an Associate Professor in the Department of Computer Science, School of Computing, National University of Singapore. He received a B.Sc. from the National Taiwan University in 1973, the M.Sc. and Ph.D. in computer science from the University of Minnesota in 1977 and 1983, respectively. He was with the University of Oklahoma and University of Memphis in the United States before joining the National University of Singapore. His research interests include information retrieval, data mining, pictorial databases and mobile computing. He has published more than 80 papers in various conferences and journals, including IEEE Transaction on Software Engineering, IEEE Transaction on Knowledge & Data Engineering, etc. Yao Liu received the B.E. degree in computer science and technology from Peking University in 1996 and the MS. degree from the Software Institute of the Chinese Science Academy in 1999. Currently, she is a Ph.D. 
candidate in the Department of Computer Science at the National University of Singapore. Her research interests include data warehousing, database security, data mining and high-speed networking. Hui Xiong received the B.E. degree in Automation from the University of Science and Technology of China, Hefei, China, in 1995, the M.S. degree in Computer Science from the National University of Singapore, Singapore, in 2000, and the Ph.D. degree in Computer Science from the University of Minnesota, Minneapolis, MN, USA, in 2005. He is currently an Assistant Professor of Computer Information Systems in the Management Science & Information Systems Department at Rutgers University, NJ, USA. His research interests include data mining, databases, and statistical computing with applications in bioinformatics, database security, and self-managing systems. He is a member of the IEEE Computer Society and the ACM. Peter A. Ng is currently the Chairperson and Professor of Computer Science at the University of Texas—Pan American. He received his Ph.D. from the University of Texas–Austin in 1974. Previously, he had served as the Vice President at the Fudan International Institute for Information Science and Technology, Shanghai, China, from 1999 to 2002, and the Executive Director for the Global e-Learning Project at the University of Nebraska at Omaha, 2000–2003. He was appointed as an Advisory Professor of Computer Science at Fudan University, Shanghai, China in 1999. His recent research focuses on document and information-based processing, retrieval and management. He has published many journal and conference articles in this area. He had served as the Editor-in-Chief for the Journal on Systems Integration (1991–2001) and as Advisory Editor for the Data and Knowledge Engineering Journal since 1989.  相似文献   
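The zero-sum idea in item 12 above can be illustrated in a few lines: each cell is perturbed, but the perturbations are forced to sum to zero, so any aggregate over the full range is preserved exactly while individual cells are obscured. The noise range and data are assumptions for illustration, not the paper's construction:

```python
# Hedged sketch of the zero-sum principle: cell-level noise whose sum is zero,
# so the range aggregate is unchanged. Illustrative, not the paper's algorithm.
import random
random.seed(42)

cells = [120.0, 80.0, 95.0, 150.0, 60.0]
noise = [random.uniform(-20, 20) for _ in cells]
noise = [n - sum(noise) / len(noise) for n in noise]   # center to zero sum

perturbed = [c + n for c, n in zip(cells, noise)]
print(round(sum(perturbed), 6) == round(sum(cells), 6))   # True: sum preserved
```

A real OLAP cube would apply this per dimension range (rows, columns, slices) rather than one flat list, which is where the trade-off between privacy and partial-range query accuracy arises.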

13.
Web image indexing by using associated texts   Cited: 1 (self-citations: 0, others: 1)
To index Web images, the whole associated text is partitioned into a sequence of text blocks; the local relevance of a term to the corresponding image is then calculated from both its local occurrence in a block and the distance of the block from the image. The overall relevance of a term is thus the sum of all its local weight values multiplied by the corresponding distance factors of the text blocks. In the present approach, the associated text of a Web image is first partitioned into three parts: a page-oriented text (TM), a link-oriented text (LT), and a caption-oriented text (BT). Because of its large size and semantic divergence, the caption-oriented text is further partitioned into finer blocks based on the tree structure of the tag elements within the BT text. During processing, all heading nodes are pulled up in order to correlate with their semantic scopes, and a collapse algorithm removes empty blocks. In our system, the relevance factors of the text blocks are determined by a greedy two-way-merging algorithm. Zhiguo Gong is an Associate Professor in the Department of Computer and Information Science, Faculty of Science and Technology, University of Macau, Macao, China. He received his BS, MS, and PhD from Hebei Normal University, Peking University, and the Chinese Academy of Sciences in 1983, 1988, and 1998, respectively. His research interests include Distributed Databases, Multimedia Databases, Digital Libraries, Web Information Retrieval, and Web Mining. Leong Hou U is currently a Master Candidate in the Department of Computer and Information Science, Faculty of Science and Technology, University of Macau, Macao, China. He received his BS from National Chi Nan University, Taiwan in 2003. His research interests include Web Information Retrieval and Web Mining.
Chan Wa Cheang is currently a Master Candidate in the Department of Computer and Information Science, Faculty of Science and Technology, University of Macau, Macao, China. He received his BS from the National Taiwan University, Taiwan in 2003. His research interests include Web Information Retrieval and Web Mining.  相似文献   
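The weighting scheme in item 13, a term's overall relevance as the sum of block-local weights times block-distance factors, can be sketched numerically. The exponential decay model, its constant, and the block data are assumptions; the paper's actual distance factors come from its two-way-merging algorithm:

```python
# Illustrative sketch: overall term relevance = sum over text blocks of
# (local term frequency) x (distance-decay factor of that block).
import math

# (distance of the block from the image, term frequency of "sunset" in block)
blocks = [(0, 3), (1, 1), (2, 0), (3, 2)]

def distance_factor(d, decay=0.5):
    return math.exp(-decay * d)            # assumed decay model

relevance = sum(tf * distance_factor(d) for d, tf in blocks)
print(round(relevance, 3))
```

Occurrences in the block adjacent to the image (distance 0) dominate the score, while the same frequency three blocks away contributes much less, which is exactly the locality intuition the abstract describes.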

14.
A Web information visualization method based on document set-wise processing is proposed to find topic streams in a sequence of document sets. Although the hugeness and dynamic nature of the Web burden users, they also bring opportunities for business and research if users can notice trends or movements of the real world from the Web. This paper focuses on sequences of document sets found on the Web, such as sets of online news articles. The proposed method employs the immune network model, in which the memory-cell property is used to find topical relations among document sets. Several types of memory cell models are proposed and evaluated; the experimental results show that the proposed method with memory cells finds more topic streams than the method without them. Yasufumi Takama, D.Eng.: He received his B.S., M.S. and Dr.Eng. degrees from the University of Tokyo in 1994, 1996, and 1999, respectively. From 1999 to 2002 he was with Tokyo Institute of Technology, Japan. Since 2002, he has been Associate Professor of Department of Electronic Systems and Engineering, Tokyo Metropolitan Institute of Technology, Tokyo, Japan. He has also been participating in JST (Japan Science and Technology Corporation) since October 2000. His current research interests include artificial intelligence, Web information retrieval and visualization systems, and artificial immune systems. He is a member of JSAI (Japanese Society of Artificial Intelligence), IPSJ (Information Processing Society of Japan), and SOFT (Japan Society for Fuzzy Theory and Systems). Kaoru Hirota, D.Eng.: He received his B.E., M.E. and Dr.Eng. degrees in electronics from Tokyo Institute of Technology, Tokyo, Japan, in 1974, 1976, and 1979, respectively. From 1979 to 1982 and from 1982 to 1995 he was with the Sagami Institute of Technology and Hosei University, respectively.
Since 1995, he has been with the Interdisciplinary Graduate School of Science and Technology, Tokyo Institute of Technology, Yokohama, Japan. He is now a department head professor of Department of Computational Intelligence and Systems Science. Dr.Hirota is a member of IFSA (International Fuzzy Systems Association (Vice President 1991–1993), Treasurer 1997–2001), IEEE (Associate Editors of IEEE Transactions on Fuzzy Systems (1993–1995) and IEEE Transactions on Industrial Electronics (1996–2000)) and SOFT (Japan Society for Fuzzy Theory and Systems (Vice President 1995–1997, President 2001–2003)), and he is an editor in chief of Int. J. of Advanced Computational Intelligence.  相似文献   

15.
In this paper, we present a new method for fuzzy risk analysis based on the ranking of generalized trapezoidal fuzzy numbers. The proposed method considers the centroid points and the standard deviations of generalized trapezoidal fuzzy numbers for ranking generalized trapezoidal fuzzy numbers. We also use an example to compare the ranking results of the proposed method with the existing centroid-index ranking methods. The proposed ranking method can overcome the drawbacks of the existing centroid-index ranking methods. Based on the proposed ranking method, we also present an algorithm to deal with fuzzy risk analysis problems. The proposed fuzzy risk analysis algorithm can overcome the drawbacks of the one we presented in [7]. Shi-Jay Chen was born in 1972, in Taipei, Taiwan, Republic of China. He received the B.S. degree in information management from the Kaohsiung Polytechnic Institute, Kaohsiung, Taiwan, and the M.S. degree in information management from the Chaoyang University of Technology, Taichung, Taiwan, in 1997 and 1999, respectively. He received the Ph.D. degree at the Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan, in October 2004. His research interests include fuzzy systems, multicriteria fuzzy decisionmaking, and artificial intelligence. Shyi-Ming Chen was born on January 16, 1960, in Taipei, Taiwan, Republic of China. He received the Ph.D. degree in Electrical Engineering from National Taiwan University, Taipei, Taiwan, in June 1991. From August 1987 to July 1989 and from August 1990 to July 1991, he was with the Department of Electronic Engineering, Fu-Jen University, Taipei, Taiwan. From August 1991 to July 1996, he was an Associate Professor in the Department of Computer and Information Science, National Chiao Tung University, Hsinchu, Taiwan. 
From August 1996 to July 1998, he was a Professor in the Department of Computer and Information Science, National Chiao Tung University, Hsinchu, Taiwan. From August 1998 to July 2001, he was a Professor in the Department of Electronic Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan. Since August 2001, he has been a Professor in the Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan. He was a Visiting Scholar in the Department of Electrical Engineering and Computer Science, University of California, Berkeley, in 1999. He was a Visiting Scholar in the Institute of Information Science, Academia Sinica, Republic of China, in 2003. He has published more than 250 papers in referred journals, conference proceedings and book chapters. His research interests include fuzzy systems, information retrieval, knowledge-based systems, artificial intelligence, neural networks, data mining, and genetic algorithms. Dr. Chen has received several honors and awards, including the 1994 Outstanding Paper Award o f the Journal of Information and Education, the 1995 Outstanding Paper Award of the Computer Society of the Republic of China, the 1995 and 1996 Acer Dragon Thesis Awards for Outstanding M.S. Thesis Supervision, the 1995 Xerox Foundation Award for Outstanding M.S. Thesis Supervision, the 1996 Chinese Institute of Electrical Engineering Award for Outstanding M.S. 
Thesis Supervision, the 1997 National Science Council Award, Republic of China, for Outstanding Undergraduate Student's Project Supervision, the 1997 Outstanding Youth Electrical Engineer Award of the Chinese Institute of Electrical Engineering, Republic of China, the Best Paper Award of the 1999 National Computer Symposium, Republic of China, the 1999 Outstanding Paper Award of the Computer Society of the Republic of China, the 2001 Institute of Information and Computing Machinery Thesis Award for Outstanding M.S. Thesis Supervision, the 2001 Outstanding Talented Person Award, Republic of China, for the contributions in Information Technology, the 2002 Institute of information and Computing Machinery Thesis Award for Outstanding M.S. Thesis Supervision, the Outstanding Electrical Engineering Professor Award granted by the Chinese Institute of Electrical Engineering (CIEE), Republic of China, the 2002 Chinese Fuzzy Systems Association Best Thesis Award for Outstanding M.S. Thesis Supervision, the 2003 Outstanding Paper Award of the Technological and Vocational Education Society, Republic of China, the 2003 Acer Dragon Thesis Award for Outstanding Ph.D. Dissertation Supervision, the 2005 “Operations Research Society of Taiwan” Award for Outstanding M.S. Thesis Supervision, the 2005 Acer Dragon Thesis Award for Outstanding Ph.D. Dissertation Supervision, the 2005 Taiwan Fuzzy Systems Association Award for Outstanding Ph.D. Dissertation Supervision, and the 2006 “Operations Research Society of Taiwan” Award for Outstanding M.S. Thesis Supervision. Dr. Chen is currently the President of the Taiwanese Association for Artificial Intelligence (TAAI). He is a Senior Member of the IEEE, a member of the ACM, the International Fuzzy Systems Association (IFSA), and the Phi Tau Phi Scholastic Honor Society. He was an administrative committee member of the Chinese Fuzzy Systems Association (CFSA) from 1998 to 2004. 
He is an Associate Editor of the IEEE Transactions on Systems, Man, and Cybernetics - Part C, an Associate Editor of the IEEE Computational Intelligence Magazine, an Associate Editor of the Journal of Intelligent & Fuzzy Systems, an Editorial Board Member of the International Journal of Applied Intelligence, an Editor of the New Mathematics and Natural Computation Journal, an Associate Editor of the International Journal of Fuzzy Systems, an Editorial Board Member of the International Journal of Information and Communication Technology, an Editorial Board Member of the WSEAS Transactions on Systems, an Editor of the Journal of Advanced Computational Intelligence and Intelligent Informatics, an Associate Editor of the WSEAS Transactions on Computers, an Editorial Board Member of the International Journal of Computational Intelligence and Applications, an Editorial Board Member of the Advances in Fuzzy Sets and Systems Journal, an Editor of the International Journal of Soft Computing, an Editor of the Asian Journal of Information Technology, an Editorial Board Member of the International Journal of Intelligence Systems Technologies and Applications, an Editor of the Asian Journal of Information Management, an Associate Editor of the International Journal of Innovative Computing, Information and Control, and an Editorial Board Member of the International Journal of Computer Applications in Technology. He was an Editor of the Journal of the Chinese Grey System Association from 1998 to 2003. He is listed in International Who's Who of Professionals, Marquis Who's Who in the World, and Marquis Who's Who in Science and Engineering.  相似文献   
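The centroid-based ranking of generalized trapezoidal fuzzy numbers in item 15 rests on computing a centroid point for each fuzzy number (a, b, c, d; height w). A rough numeric sketch follows; the membership function shape is standard, but the sampling approach is an illustrative assumption rather than the paper's closed-form centroid formula:

```python
# Rough numeric sketch: centroid x-coordinate of a generalized trapezoidal
# fuzzy number (a, b, c, d; height w) by midpoint sampling. Illustrative only.
def membership(x, a, b, c, d, w=1.0):
    if a <= x < b:  return w * (x - a) / (b - a) if b > a else w
    if b <= x <= c: return w
    if c < x <= d:  return w * (d - x) / (d - c) if d > c else w
    return 0.0

def centroid_x(a, b, c, d, w=1.0, steps=100000):
    num = den = 0.0
    for i in range(steps):
        x = a + (d - a) * (i + 0.5) / steps
        mu = membership(x, a, b, c, d, w)
        num += x * mu
        den += mu
    return num / den

print(round(centroid_x(1, 2, 3, 4), 3))    # symmetric trapezoid -> 2.5
```

For ranking, each fuzzy number's centroid (and, per the paper, a standard-deviation term) is computed and the numbers are ordered by the resulting index; a symmetric trapezoid's centroid sits at its axis of symmetry, as the example shows.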

16.
In this paper, a novel technique adopted in HarkMan is introduced. HarkMan is a keyword spotter designed to automatically spot the given words of a vocabulary-independent task in unconstrained Chinese telephone speech. The speaking manner and the number of keywords are not limited. This paper focuses on the novel techniques addressing acoustic modeling, the keyword-spotting network, search strategies, robustness, and rejection. The underlying technologies used in HarkMan are useful not only for keyword spotting but also for continuous speech recognition. The system has achieved a figure-of-merit value over 90%.

17.
Classification is an important technique in data mining. The decision trees built by most existing classification algorithms commonly over-branch, which leads to poor efficiency in the subsequent classification period. In this paper, we present a new value-oriented classification method that aims at building accurate, properly sized decision trees while reducing over-branching as much as possible, based on the concepts of frequent-pattern nodes and exceptive-child nodes. Experiments show that, using relevance analysis as pre-processing, our classification method, without loss of accuracy, eliminates over-branching in decision trees more effectively and efficiently than other algorithms do.
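The core step any such decision-tree builder repeats is choosing the attribute whose split most reduces label entropy. A generic information-gain sketch (toy weather data; not the paper's value-oriented criterion) makes the step concrete:

```python
# Toy sketch of entropy-based attribute selection, the core step of
# decision-tree induction. Data and attribute meanings are illustrative.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr):
    """Expected entropy reduction from splitting on attribute index `attr`."""
    by_value = {}
    for row, y in zip(rows, labels):
        by_value.setdefault(row[attr], []).append(y)
    remainder = sum(len(ys) / len(labels) * entropy(ys)
                    for ys in by_value.values())
    return entropy(labels) - remainder

rows = [("sunny", "hot"), ("sunny", "mild"), ("rain", "hot"), ("rain", "mild")]
labels = ["no", "no", "yes", "yes"]
best = max(range(2), key=lambda a: info_gain(rows, labels, a))
print(best)    # attribute 0 perfectly separates the labels
```

Over-branching arises when splits like attribute 1 here (zero gain, but perhaps slightly positive on noisy data) are still expanded; a value-oriented criterion prunes such low-value branches during construction.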

18.
The simple least-significant-bit (LSB) substitution technique is the easiest way to embed secret data in a host image. To avoid the image degradation caused by simple LSB substitution, Wang et al. proposed a method that uses a substitution table for image hiding. Later, Thien and Lin employed the modulus function to solve the same problem. In this paper, the proposed scheme combines the modulus function with an optimal substitution table to improve the quality of the stego-image. Experimental results show that our method achieves better stego-image quality than Thien and Lin's method. The text was submitted by the authors in English. Chin-Shiang Chan received his BS degree in Computer Science in 1999 from National Cheng Chi University, Taipei, Taiwan, and his MS degree in Computer Science and Information Engineering in 2001 from National Chung Cheng University, Chiayi, Taiwan. He is currently a PhD student in Computer Science and Information Engineering at National Chung Cheng University, Chiayi, Taiwan. His research fields are image hiding and image compression. Chin-Chen Chang received his BS degree in Applied Mathematics in 1977 and his MS degree in Computer and Decision Sciences in 1979, both from National Tsing Hua University, Hsinchu, Taiwan. He received his PhD in Computer Engineering in 1982 from National Chiao Tung University, Hsinchu, Taiwan. During the academic years 1980–1983, he was on the faculty of the Department of Computer Engineering at National Chiao Tung University. From 1983 to 1989, he was on the faculty of the Institute of Applied Mathematics, National Chung Hsing University, Taichung, Taiwan. From 1989 to 2004, he was a professor in the Institute of Computer Science and Information Engineering at National Chung Cheng University, Chiayi, Taiwan.
Since 2005, he has been a professor in the Department of Information Engineering and Computer Science at Feng Chia University, Taichung, Taiwan. Dr. Chang is a Fellow of the IEEE, a Fellow of the IEE, and a member of the Chinese Language Computer Society, the Chinese Institute of Engineers of the Republic of China, and the Phi Tau Phi Society of the Republic of China. His research interests include computer cryptography, data engineering, and image compression. Yu-Chen Hu received his PhD degree in Computer Science and Information Engineering from National Chung Cheng University, Chiayi, Taiwan, in 1999. Dr. Hu is currently an assistant professor in the Department of Computer Science and Information Engineering, Providence University, Sha-Lu, Taiwan. He is a member of SPIE and the IEEE, and a member of the Phi Tau Phi Society of the Republic of China. His research interests include image and data compression, information hiding, and image processing.  相似文献
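The two baseline ideas named in the abstract can be sketched in a few lines: plain k-bit LSB substitution overwrites the lowest bits outright, while modulus-function embedding (in the spirit of Thien and Lin's approach) nudges the pixel by the smallest amount that makes `pixel % m` equal the secret digit, keeping the stego pixel closer to the original on average. This is an illustrative sketch of the two techniques only, not the paper's optimal-substitution-table scheme; the function names are assumptions.

```python
def embed_lsb(pixel, bits, k=1):
    """Plain k-bit LSB substitution: overwrite the k lowest bits."""
    return (pixel & ~((1 << k) - 1)) | bits

def embed_mod(pixel, digit, m=4):
    """Modulus-function embedding: shift the pixel by the smallest
    amount so that pixel % m == digit (distortion at most m // 2)."""
    diff = (digit - pixel) % m
    if diff > m // 2:
        diff -= m                   # step down instead of up when closer
    # Clamping keeps the gray level valid; near 0 or 255 a real scheme
    # needs extra boundary handling to keep extraction correct.
    return min(255, max(0, pixel + diff))

def extract_mod(pixel, m=4):
    """Recover the embedded digit from a stego pixel."""
    return pixel % m
```

For example, embedding the digit 3 (base m = 4) into a pixel of value 100 changes it to 99, a distortion of 1, whereas 2-bit LSB substitution could move it by up to 3.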

19.
In this paper, we propose a method to hide a halftone secret image in two other camouflaged halftone images. In our method, we adjust the gray-level pixel values to fit the pixel values of the secret image and the two camouflaged images. We then use a halftoning technique to transform the secret image into a secret halftone image. After that, we generate the two camouflaged halftone images simultaneously from the two camouflaged images and the secret halftone image. By overlaying the two camouflaged halftone images, the secret halftone image can be revealed with the naked eye. The experimental results included in this paper show that our method is practical. The text was submitted by the authors in English. Wei-Liang Tai received his BS degree in Computer Science in 2002 from Tamkang University, Tamsui, Taiwan, and his MS degree in Computer Science and Information Engineering in 2004 from National Chung Cheng University, Chiayi, Taiwan. He is currently a PhD student in Computer Science and Information Engineering at National Chung Cheng University. His research fields are image hiding, digital watermarking, and image compression. Chi-Shiang Chan received his BS degree in Computer Science in 1999 from National Cheng Chi University, Taipei, Taiwan, and his MS degree in Computer Science and Information Engineering in 2001 from National Chung Cheng University, Chiayi, Taiwan. He is currently a PhD student in Computer Science and Information Engineering at National Chung Cheng University. His research fields are image hiding and image compression. Chin-Chen Chang received his BS degree in Applied Mathematics in 1977 and his MS degree in Computer and Decision Sciences in 1979, both from National Tsing Hua University, Hsinchu, Taiwan. He received his PhD in Computer Engineering in 1982 from National Chiao Tung University, Hsinchu, Taiwan.
During the academic years 1980–1983, he was on the faculty of the Department of Computer Engineering at National Chiao Tung University. From 1983 to 1989, he was on the faculty of the Institute of Applied Mathematics, National Chung Hsing University, Taichung, Taiwan. From 1989 to 2004, he was a professor in the Institute of Computer Science and Information Engineering at National Chung Cheng University, Chiayi, Taiwan. Since 2005, he has been a professor in the Department of Information Engineering and Computer Science at Feng Chia University, Taichung, Taiwan. Dr. Chang is a Fellow of the IEEE, a Fellow of the IEE, and a member of the Chinese Language Computer Society, the Chinese Institute of Engineers of the Republic of China, and the Phi Tau Phi Society of the Republic of China. His research interests include computer cryptography, data engineering, and image compression.  相似文献
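The reveal-by-overlay step described above is the core of visual secret sharing. A minimal (2, 2) scheme for a binary secret illustrates it: each secret pixel expands into a pair of subpixels on two shares; the shares carry identical pairs for a white pixel and complementary pairs for a black one, so stacking them (bitwise OR, since stacked black stays black) shows half-black pairs for white and fully black pairs for black. This is the classic Naor–Shamir-style construction for illustration, not the paper's camouflaged-halftone method; the names `make_shares` and `overlay` are assumptions.

```python
import random

# The two complementary subpixel patterns (1 = black, 0 = white).
PAIRS = [(0, 1), (1, 0)]

def make_shares(secret_bits):
    """Expand each secret bit into a 2-subpixel pair on two shares."""
    s1, s2 = [], []
    for bit in secret_bits:
        p = random.choice(PAIRS)           # random pattern hides the secret
        s1.extend(p)
        # identical pair for a white (0) pixel, complementary for black (1)
        s2.extend(p if bit == 0 else (1 - p[0], 1 - p[1]))
    return s1, s2

def overlay(s1, s2):
    """Simulate physically stacking transparencies: black wins (OR)."""
    return [a | b for a, b in zip(s1, s2)]
```

After overlaying, a black secret pixel yields a fully black pair (contrast 2/2) and a white pixel a half-black pair (1/2), which is exactly the brightness difference the eye picks up when the two transparencies are stacked.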
