Similar documents
20 similar documents found (search time: 15 ms)
1.
Ling J, Van Schaik P. Ergonomics, 2004, 47(8): 907-921
Navigation of web pages is of critical importance to the usability of web-based systems such as the World Wide Web and intranets. The primary means of navigation is through the use of hyperlinks. However, few studies have examined the impact of the presentation format of these links on visual search. The present study used a two-factor mixed measures design to investigate whether there was an effect of link format (plain text, underlined, bold, or bold and underlined) upon speed and accuracy of visual search and subjective measures in both the navigation and content areas of web pages. An effect of link format on speed of visual search for both hits and correct rejections was found. This effect was observed in the navigation and the content areas. Link format did not influence accuracy in either screen location. Participants showed highest preference for links that were in bold and underlined, regardless of screen area. These results are discussed in the context of visual search processes and design recommendations are given.

2.
Ergonomics, 2012, 55(8): 907-921
Navigation of web pages is of critical importance to the usability of web-based systems such as the World Wide Web and intranets. The primary means of navigation is through the use of hyperlinks. However, few studies have examined the impact of the presentation format of these links on visual search. The present study used a two-factor mixed measures design to investigate whether there was an effect of link format (plain text, underlined, bold, or bold and underlined) upon speed and accuracy of visual search and subjective measures in both the navigation and content areas of web pages. An effect of link format on speed of visual search for both hits and correct rejections was found. This effect was observed in the navigation and the content areas. Link format did not influence accuracy in either screen location. Participants showed highest preference for links that were in bold and underlined, regardless of screen area. These results are discussed in the context of visual search processes and design recommendations are given.

3.
The Web is a hypertext body of approximately 300 million pages that continues to grow at roughly a million pages per day. Page variation is more prodigious than the data's raw scale: taken as a whole, the set of Web pages lacks a unifying structure and shows far more authoring style and content variation than that seen in traditional text document collections. This level of complexity makes an “off-the-shelf” database management and information retrieval solution impossible. To date, index-based search engines for the Web have been the primary tool by which users search for information. Such engines can build giant indices that let you quickly retrieve the set of all Web pages containing a given word or string. Experienced users can make effective use of such engines for tasks that can be solved by searching for tightly constrained key words and phrases. These search engines are, however, unsuited for a wide range of equally important tasks. In particular, a topic of any breadth will typically contain several thousand or million relevant Web pages. How then, from this sea of pages, should a search engine select the correct ones: those of most value to the user? Clever is a search engine that analyzes hyperlinks to uncover two types of pages: authorities, which provide the best source of information on a given topic; and hubs, which provide collections of links to authorities. We outline the thinking that went into Clever's design, report briefly on a study that compared Clever's performance to that of Yahoo and AltaVista, and examine how our system is being extended and updated.
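The hub/authority idea behind Clever (Kleinberg's HITS algorithm) can be sketched in a few lines. The miniature link graph below is invented for illustration; a real system would run this on a query-focused subgraph with many refinements.

```python
# Toy link graph (invented): A and B are link collections, C and D are
# the pages they point to.
links = {
    "A": ["C", "D"],
    "B": ["C", "D"],
    "C": [],
    "D": [],
}
pages = list(links)
hub = {p: 1.0 for p in pages}
auth = {p: 1.0 for p in pages}

for _ in range(20):  # power iteration; scores stabilize quickly here
    # authority score: sum of hub scores of the pages linking in
    auth = {p: sum(hub[q] for q in pages if p in links[q]) for p in pages}
    # hub score: sum of authority scores of the pages linked to
    hub = {p: sum(auth[q] for q in links[p]) for p in pages}
    # normalize so the scores stay bounded
    na, nh = sum(auth.values()) or 1.0, sum(hub.values()) or 1.0
    auth = {p: v / na for p, v in auth.items()}
    hub = {p: v / nh for p, v in hub.items()}
```

After a few iterations the mutually reinforcing scores separate the link-collecting pages (high hub score) from the linked-to pages (high authority score).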

4.
With the unstoppable growth of the World Wide Web and the great success of Web search engines such as Google and AltaVista, users now turn to the Web whenever looking for information. However, many users are neophytes when it comes to computer science, yet they are often specialists of a certain domain. These users would like to add more semantics to guide their search through World Wide Web material, whereas currently most search features are based on raw lexical content. We show how the incoming links of a page can be used efficiently to classify the page in a concise manner. This enhances the browsing and querying of Web pages. We focus on the tools needed in order to manage the links and their semantics. We further process these links using a hierarchy of concepts, akin to an ontology, and a thesaurus. This work is demonstrated by a prototype system, called THESUS, that organizes thematic Web documents into semantic clusters. Our contributions are the following: 1) a model and language to exploit link semantics information, 2) the THESUS prototype system, 3) its innovative aspects and algorithms, more specifically, the novel similarity measure between Web documents applied to different clustering schemes (DB-Scan and COBWEB), and 4) a thorough experimental evaluation proving the value of our approach.

5.
Determining the titles of Web pages is an important element in characterizing and categorizing the vast number of Web pages. There are a few approaches to automatically determining the titles of Web pages. As an R&D project for the operator of Naver (Korea’s largest portal site), we developed a new method that makes use of anchor texts and analysis of links among Web pages. In this paper, we describe our method and show experimental results on its performance.

6.
Indexing the Web is becoming a laborious task for search engines as the Web grows exponentially in size and distribution. Presently, the most effective known approach to this problem is the use of focused crawlers. A focused crawler employs a dedicated algorithm to detect the pages on the Web that relate to its topic of interest. For this purpose we propose a custom method that uses specific HTML elements of a page to predict the topical focus of all the pages reachable through the unvisited links within the current page. These recognized on-topic pages are later sorted by their relevance to the crawler's main topic before actual download. In the Treasure-Crawler, we use a hierarchical structure called T-Graph as a guide to assign an appropriate priority score to each unvisited link; URLs are then downloaded in order of this priority. This paper presents the implementation, test results and performance evaluation of the Treasure-Crawler system. The Treasure-Crawler is evaluated in terms of standard information retrieval criteria such as recall and precision, with values for both close to 50%. These results confirm the significance of the proposed approach.

7.
The effects of divided attention were examined in younger adults (M = 23 years) and older adults (M = 64 years) who searched for traffic signs in digitized images of traffic scenes. Sign search was executed under single-task and dual-task conditions in scenes containing either small or large amounts of visual clutter. For both age groups, clutter and the secondary task had additive effects on search accuracy, speed, and oculomotor involvement. Compared with the younger adults, older adults were less accurate, especially with high-clutter scenes, were slower to decide that a target sign was not present, and exhibited a marginally greater divided-attention effect on reaction times. They exhibited longer fixations in the divided-attention condition, in which they also showed a disproportionate reduction in recognition memory for the content of the secondary task. Actual or potential applications of this research include methods for evaluating the distraction caused by conversations and the safety implications of conversation for visual search behavior.

8.
Search engines retrieve and rank Web pages which are not only relevant to a query but also important or popular for the users. This popularity has been studied by analysis of the links between Web resources. Link-based page ranking models such as PageRank and HITS assign a global weight to each page regardless of its location. This popularity measurement has proven successful in general search engines. However, unlike general search engines, location-based search engines should retrieve, and rank higher, the pages which are more popular locally. The best results for a location-based query are those which are not only relevant to the topic but also popular with or cited by local users. Current ranking models are often less effective for these queries since they are unable to estimate the local popularity. We offer a model for calculating the local popularity of Web resources using backlink locations. Our model automatically assigns correct locations to the links and content and uses them to calculate new geo-rank scores for each page. The experiments show more accurate geo-ranking of search engine results when this model is used for processing location-based queries.
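As a rough illustration of the abstract's premise (not the paper's actual model), a page's local popularity can be approximated from the locations attached to its backlinks. The pages, locations, and scoring rule below are all hypothetical.

```python
from collections import Counter

# Hypothetical backlink data: for each page, the locations of the pages
# linking to it (a real system would infer these from links and content).
backlinks = {
    "pizza-shop.example": ["berlin", "berlin", "berlin", "paris"],
    "pizza-wiki.example": ["berlin", "paris", "tokyo", "nyc"],
}

def local_popularity(page: str, query_location: str) -> float:
    """Fraction of a page's backlinks originating from the query location."""
    locs = backlinks.get(page, [])
    if not locs:
        return 0.0
    return Counter(locs)[query_location] / len(locs)

# For a query issued from Berlin, the locally cited shop page outranks
# the page whose backlinks are spread around the globe.
scores = {p: local_popularity(p, "berlin") for p in backlinks}
```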

9.
The goal of this study was to investigate the effects of Chinese character size, number of characters per line, and number of menu items on visual search task performance on a tablet computer for different age group users. Forty-two participants took part in the visual search experiment. Gender, age group (younger, middle-aged, and older), Chinese character size (15, 17, 19, 21, 23, and 25 point), number of characters per line (2–5 characters, 9–14 characters, and 15–18 characters), and number of menu items (5, 7, 9, 11, and 13 items) were the independent variables. The dependent variables were search time and error rate. Gender had a large effect size on search time: female participants had a faster search time than male participants. Significant differences and large effect sizes were found in search performance for different character sizes, numbers of characters per line, and numbers of menu items. Nine to fourteen characters per line achieved the fastest search time and lowest error rate. The fastest search time was for a 19-point character size in the younger group; 21-point character size in the middle-aged group; and 23-point character size in the older group. In addition, the fastest search time was found when showing 9 items in the younger group; 7 items in the middle-aged group; and 5 items in the older group. The optimum visual search performance was achieved with 19-point character size showing 9 items and 9–14 characters per line for the younger group; 21-point character size showing 7 items and 9–14 characters per line for the middle-aged group; and 23-point character size showing 5 items and 9–14 characters per line for the older group. The results provide useful information for tablet computer interface presentations for different age group users.

10.
韦莎  朱焱 《计算机应用》2016,36(3):735-739
To address the problem that links pointing from normal pages to spam pages degrade the detection performance of ranking algorithms such as Anti-TrustRank, a ranking algorithm is proposed in which topic similarity and link weight jointly regulate the propagation of distrust values over pages: Topic-Link Distrust Rank (TLDR). First, a Latent Dirichlet Allocation (LDA) model is applied to obtain the topic distribution of every page, and the topic similarity between each pair of linked pages is computed. Second, link weights are computed from the Web graph and combined with the topic similarities to obtain a topic-link weight matrix. Next, the topic-link weights are used to regulate the propagation of distrust, improving the Anti-TrustRank and weighted distrust value ranking (WATR) algorithms so that each page receives a more reasonable distrust value. Finally, all pages are ranked by distrust value, and spam pages are detected by applying a threshold. Experiments on the WEBSPAM-UK2007 dataset show that, compared with Anti-TrustRank and WATR, TLDR improves SpamFactor by 45% and 23.7% respectively, improves F1-measure (at a threshold of 600) by 3.4 and 0.5 percentage points, and raises the proportion of spam in the top three buckets by 15 and 10 percentage points. The TLDR algorithm, which combines topics with link weights, can therefore effectively improve spam page detection performance.
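A minimal sketch of the intuition described in the abstract, under toy assumptions: distrust flows backwards from known spam seeds, and each link's contribution is scaled by the topic similarity of the pages it connects. The graph, topic vectors, damping factor, and update rule below are invented stand-ins, not the paper's exact formulation.

```python
import math

# Toy topic distributions per page (in the paper these come from LDA).
topics = {
    "good1": [0.9, 0.1],
    "good2": [0.8, 0.2],
    "spam1": [0.1, 0.9],
}
out_links = {"good1": ["good2", "spam1"], "good2": ["spam1"], "spam1": []}
seed_distrust = {"spam1": 1.0}  # known spam seeds

def cos_sim(a, b):
    """Cosine similarity between two topic vectors."""
    return sum(x * y for x, y in zip(a, b)) / (math.hypot(*a) * math.hypot(*b))

# Propagate distrust backwards along links: a page inherits distrust from
# the pages it links to, scaled by topic similarity and a damping factor.
distrust = dict.fromkeys(topics, 0.0)
distrust.update(seed_distrust)
for _ in range(10):
    new = dict(seed_distrust)  # seeds keep full distrust
    for page, outs in out_links.items():
        if outs:
            score = sum(cos_sim(topics[page], topics[q]) * distrust[q]
                        for q in outs) / len(outs)
            new[page] = max(new.get(page, 0.0), 0.85 * score)
    distrust = new
```

The topic-similarity factor damps distrust carried over topically mismatched links, which is the intended correction to plain Anti-TrustRank.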

11.
刘徽  黄宽娜  余建桥 《计算机工程》2012,38(11):284-286
The Deep Web contains abundant, high-quality information resources, but because no static links point directly to Deep Web pages, most search engines cannot discover them; the pages can only be obtained by filling in and submitting query forms. To address this, a crawling strategy for a Deep Web crawler is proposed. The hierarchical output of a Web page classifier guides a link information extractor in selecting promising links; the crawl depth is limited to three levels, links are extracted starting from those closest to the query form, and only links within these three levels are followed, which reduces crawling time and improves the crawler's accuracy. Constraint conditions for the focused crawling algorithm are also designed. Experimental results show that this strategy can effectively download Deep Web pages and improve crawling efficiency.

12.
OBJECTIVE: The purpose of this study was to investigate the relationship between strategy use and search success on the World Wide Web (i.e., the Web) for experienced Web users. An additional goal was to extend understanding of how the age of the searcher may influence strategy use. BACKGROUND: Current investigations of information search and retrieval on the Web have provided an incomplete picture of Web strategy use because participants have not been given the opportunity to demonstrate their knowledge of Web strategies while also searching for information on the Web. METHODS: Using both behavioral and knowledge-engineering methods, we investigated searching behavior and system knowledge for 16 younger adults (M = 20.88 years of age) and 16 older adults (M = 67.88 years). RESULTS: Older adults were less successful than younger adults in finding correct answers to the search tasks. Knowledge engineering revealed that the age-related effect resulted from ineffective search strategies and amount of Web experience rather than age per se. Our analysis led to the development of a decision-action diagram representing search behavior for both age groups. CONCLUSION: Older adults had more difficulty than younger adults when searching for information on the Web. However, this difficulty was related to the selection of inefficient search strategies, which may have been attributable to a lack of knowledge about available Web search strategies. APPLICATION: Actual or potential applications of this research include training Web users to search more effectively and suggestions to improve the design of search engines.

13.
An Entropy-Based Link Analysis Method for Web Structure Mining
王勇  杨华千  李建福 《计算机工程与设计》2006,27(9):1622-1624,1688
In Web structure mining, the classical HITS (Hyperlink-Induced Topic Search) algorithm is widely used to find the authority and hub pages among those returned by a search engine. Besides valuable page content, however, websites contain many links unrelated to that content, such as advertisements and navigation links. Because of these links, applying HITS can give some advertising or otherwise irrelevant pages high authority and hub values. To solve this problem, the concept of Shannon entropy is introduced into the original HITS algorithm, and an entropy-based link analysis method is proposed for mining Web page structure. The core idea of the algorithm is to use information entropy to represent the knowledge implicit in anchor text.
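The abstract does not give the exact formulation, but its core ingredient, the Shannon entropy of anchor (link) text, is easy to illustrate. The example anchors below are invented; the idea is that low-entropy boilerplate anchors can be down-weighted before running HITS.

```python
import math
from collections import Counter

def anchor_entropy(anchor_text: str) -> float:
    """Shannon entropy (bits) of the word distribution in an anchor text."""
    words = anchor_text.lower().split()
    total = len(words)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(words).values())

# Invented example anchors: repetitive navigation text carries less
# information than a content-bearing anchor.
nav = anchor_entropy("home home home news")
content = anchor_entropy("entropy based link analysis for web mining")
```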

14.
The influence of age, subject matter knowledge, working memory, reading abilities, spatial abilities, and processing speed on Web navigation was assessed in a sample of 41 participants between the ages of 19 and 83 years. Each participant navigated a stand-alone tourism Web site to find answers to 12 questions. Performance was measured using time per trial, number of pages per trial, and number of revisited pages per trial. Age did not influence the number of total pages or repeat pages visited, which were predicted by domain knowledge, working memory, and processing speed. Age was associated with slower times per trial, and the effect remained significant after controlling for working memory, processing speed, and spatial abilities. Only with the addition of subject matter knowledge and World Wide Web experience was the age effect eliminated. Actual or potential applications of this research include redesigning Web sites to minimize memory demands and enhance visual segmentation. The data also suggest that age differences in Web navigation can be offset partially by taking advantage of older adults' prior experiences in the domain.

15.
Mapping the semantics of Web text and links
Search engines use content and links to search, rank, cluster, and classify Web pages. These information discovery applications use similarity measures derived from this data to estimate relatedness between pages. However, little research exists on the relationships between similarity measures or between such measures and semantic similarity. The author analyzes and visualizes similarity relationships in massive Web data sets to identify how to integrate content and link analysis for approximating relevance. He uses human-generated metadata from Web directories to estimate semantic similarity and semantic maps to visualize relationships between content and link cues and what these cues suggest about page meaning. Highly heterogeneous topical maps point to a critical dependence on search context.

16.
Topic islands formed by Web pages severely degrade the performance of the topic crawlers used in search engine systems, and discovering new topics by manually adding large numbers of initial seed links cannot guarantee comprehensive coverage of topic pages. Building on an analysis of traditional topic crawling strategies based on content analysis, link analysis, and context graphs, a topic crawling strategy based on dynamic tunneling is proposed. The strategy combines page topic relevance computation with URL link relevance prediction to determine the topical relatedness of pages lying between topic islands, and constructs a hierarchical topic judgment model to address the weak links between topic islands. At the same time, it effectively prevents the topic drift that occurs when a crawler collects too many off-topic pages, achieving dynamic tunnel control along a crawling direction that preserves the topic's semantics. In the experiments, a hierarchical structure of topic pages is used to detect page topic relevance and to extract keywords for the topic 'sports', and the collected topic pages are then indexed and queried. The results show that the dynamic-tunneling crawling strategy handles the topic island problem well and clearly improves the precision and recall of a 'sports' topic search engine.

17.
Most Web pages contain location information, which is usually neglected by traditional search engines. Queries combining location and textual terms are called spatial textual Web queries. Based on the fact that traditional search engines pay little attention to the location information in Web pages, in this paper we study a framework to utilize location information for Web search. The proposed framework consists of an offline stage to extract focused locations for crawled Web pages, as well as an online ranking stage to perform location-aware ranking for search results. The focused locations of a Web page refer to the most appropriate locations associated with the Web page. In the offline stage, we extract the focused locations and keywords from Web pages and map each keyword to specific focused locations, which forms a set of <keyword, location> pairs. In the second, online query processing stage, we extract keywords from the query and compute the ranking scores based on location relevance and the location-constrained scores for each query keyword. The experiments on various real datasets crawled from nj.gov, BBC and New York Times show that the performance of our algorithm on focused location extraction is superior to previous methods and the proposed ranking algorithm has the best performance with respect to different spatial textual queries.
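A toy sketch of the two-stage framework, assuming invented data: an offline index of <keyword, location> pairs, and an online score that mixes textual relevance with location relevance. The weight `alpha`, the index contents, and the flat text score are all hypothetical placeholders, not the paper's scoring functions.

```python
# Offline stage (stand-in): a toy index mapping <keyword, location> pairs
# to the pages focused on that keyword and location.
index = {
    ("museum", "london"): ["p1", "p3"],
    ("museum", "paris"): ["p2"],
}

def rank(keyword: str, location: str, alpha: float = 0.5):
    """Online stage: blend a (flat, stand-in) text score with location relevance."""
    results = []
    for (kw, loc), pages in index.items():
        if kw != keyword:
            continue
        loc_score = 1.0 if loc == location else 0.0
        for page in pages:
            text_score = 1.0  # placeholder for a real keyword-relevance score
            results.append((alpha * text_score + (1 - alpha) * loc_score, page))
    return [page for _, page in sorted(results, reverse=True)]
```

For a query issued about London, pages whose focused location matches rank above topically identical pages focused elsewhere.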

18.
Web crawlers are essential to many Web applications, such as Web search engines, Web archives, and Web directories, which maintain Web pages in their local repositories. In this paper, we study the problem of crawl scheduling that biases crawl ordering toward important pages. We propose a set of crawling algorithms for effective and efficient crawl ordering by prioritizing important pages, with the well-known PageRank as the importance metric. In order to score URLs, the proposed algorithms utilize various features, including partial link structure, inter-host links, page titles, and topic relevance. We conduct a large-scale experiment using publicly available data sets to examine the effect of each feature on crawl ordering and evaluate the performance of many algorithms. The experimental results verify the efficacy of our schemes. In particular, compared with the representative RankMass crawler, the FPR-title-host algorithm reduces computational overhead by a factor as great as three in running time while improving effectiveness by 5% in cumulative PageRank.
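Priority-driven crawl ordering of the kind evaluated here reduces, at its core, to a best-first frontier: score each discovered URL, always fetch the highest-scoring one next. The sketch below uses a heap, with invented URLs and scores standing in for the paper's PageRank-derived features.

```python
import heapq

frontier = []  # max-heap via negated scores

def enqueue(url: str, score: float) -> None:
    """Add a discovered URL with its importance score."""
    heapq.heappush(frontier, (-score, url))

def next_url() -> str:
    """Pop the most promising URL to fetch next."""
    return heapq.heappop(frontier)[1]

# Invented URLs and scores standing in for PageRank-derived priorities.
enqueue("http://example.org/popular", 0.9)
enqueue("http://example.org/obscure", 0.1)
enqueue("http://example.org/mid", 0.5)

order = [next_url() for _ in range(3)]
```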

19.
Visual layout has a strong impact on performance and is a critical factor in the design of graphical user interfaces (GUIs) and Web pages. Many design guidelines employed in Web page design were inherited from human performance literature and GUI design studies and practices. However, few studies have investigated the more specific patterns of performance with Web pages that may reflect some differences between Web page and GUI design. We investigated interactions among four visual layout factors in Web page design (quantity of links, alignment, grouping indications, and density) in two experiments: one with pages in Hebrew, entailing right-to-left reading, and the other with English pages, entailing left-to-right reading. Some performance patterns (measured by search times and eye movements) were similar between languages. Performance was particularly poor in pages with many links and variable densities, but it improved with the presence of uniform density. Alignment was not shown to be a performance-enhancing factor. The findings are discussed in terms of the similarities and differences in the impact of layout factors between GUIs and Web pages. Actual or potential applications of this research include specific guidelines for Web page design.

20.
Strategies for finding a link on a web page can rely either on expectations of prototypical locations or on memories of earlier visits to the page. What is the nature of these expectations, how are locations of web objects remembered, and how do the expectations and memories control search? These questions were investigated in an experiment where nine experienced users in the experimental group searched for links. To obtain information about expectations, users' eye movements were recorded. Memory for locations of web objects was tested immediately afterwards. In the control group, nine matched users had to guess the locations of web objects without seeing the page. Eye-movement data and the control group's guesses both indicated a robust expectation of links residing on the left side of the page. Only the location of task-relevant web objects could be recollected, indicating that deep processing is required for memories to become consciously accessible. A comparison between the experimental group and the control group revealed that what was represented in memory was not an individual link's location but the approximate locations of link panels. We argue that practice-related decreases in reaction time were caused by semantic priming. Roles for the different types of memory in link search are discussed.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号