Similar Articles
 20 similar articles found (search time: 625 ms)
1.
August 26, sunny. "The Hong Kong Yoyo Chinese search engine supports the GB and BIG5 encodings and is highly efficient," I read from the magazine in my hand while munching on a peach the cafeteria had handed out at lunch. "Let's try it and see how it differs from other search sites." I sat up in my chair and was about to switch on the machine when I noticed Xiao He still sitting glumly in front of his computer, so I asked: "Xiao He, what are you doing? It's midday; aren't you going to rest a while?" Xiao He pushed up his glasses: "Ah, it's about Xiao Li going abroad!" "Xiao Li left the day before yesterday. Did you forget to ask him to bring something nice back?" I said, grinning at him. "Not at all. Xiao Li is gone, but someone still has to do his work. Before he left, all we thought about was drinking together, and nobody asked him about his…  相似文献

2.
August 26, sunny. "The Hong Kong Yoyo Chinese search engine supports the GB and BIG5 encodings and is highly efficient," I read from the magazine in my hand while munching on a peach the cafeteria had handed out at lunch. "Let's try it and see how it differs from other search sites." I sat up in my chair and was about to switch on the machine when I noticed Xiao He still sitting glumly in front of his computer, so I asked: "Xiao He, what are you doing? It's midday; aren't you going to rest a while?" Xiao He pushed up his glasses: "Ah, it's about Xiao Li going abroad!" "Xiao…  相似文献

3.
Web search users complain of the inaccurate results produced by current search engines. Most of these inaccuracies stem from a failure to understand the user's search goal. This paper proposes a method to extract users' intentions and to build an intention map representing them. The proposed method builds intention vectors from pages clicked in previous search logs for a given query. The components of an intention vector are the weights of the keywords in a document. User intentions are extracted by clustering the intention vectors and extracting intention keywords from each cluster. The extracted intentions for a query are represented in an intention map. To analyze the efficiency of the intention map, we extracted user intentions from 2,600 search log entries of a current domestic commercial search engine. Experimental results with a search engine using the intention maps show statistically significant improvements in user satisfaction scores.  相似文献
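The pipeline this abstract describes, per-document keyword weights, clustering of intention vectors, and top keywords per cluster, can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the greedy leader clustering and the 0.3 similarity threshold are stand-ins for whatever clustering the authors used.

```python
from collections import Counter
import math

def intention_vector(page_text):
    """Weight each keyword by its relative frequency in a clicked page
    (a simple stand-in for the paper's per-document keyword weights)."""
    words = page_text.lower().split()
    return {w: c / len(words) for w, c in Counter(words).items()}

def cosine(u, v):
    dot = sum(u[k] * v.get(k, 0.0) for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def build_intention_map(clicked_pages, top_n=2, sim_threshold=0.3):
    """Cluster intention vectors (greedy leader clustering, as a stand-in
    for the paper's clustering step) and keep the top keywords of each
    cluster as one discovered 'intention'."""
    clusters = []  # list of (leader_vector, member_vectors)
    for v in (intention_vector(p) for p in clicked_pages):
        best = max(range(len(clusters)),
                   key=lambda i: cosine(v, clusters[i][0]), default=None)
        if best is not None and cosine(v, clusters[best][0]) >= sim_threshold:
            clusters[best][1].append(v)
        else:
            clusters.append((v, [v]))
    intention_map = []
    for _, members in clusters:
        merged = Counter()
        for m in members:
            merged.update(m)
        intention_map.append([w for w, _ in merged.most_common(top_n)])
    return intention_map
```

For an ambiguous query such as "jaguar", pages clicked for the car sense and pages clicked for the animal sense fall into separate clusters, each summarized by its own intention keywords.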

4.
While bricks-and-mortar-only retailers do not offer online purchasing, they often take advantage of multi-channel management strategies to reach consumers in a pre-purchase phase. We investigate whether paid search can increase the sales of brick-and-mortar retailers who promote their offers via an informational website. Although a sizeable one third of all retailers still trade without an online-shop, previous work has been silent about the effects of paid search for them. We make use of a randomized field experiment and an end-to-end tracking mechanism to investigate the cross-channel behavior of individual consumers. Our empirical results suggest that, whilst paid search increases the number of potential customers through enhancing the reach of marketing initiatives, store sales are not increased. We conclude that customers who search online to buy offline primarily use paid search as a navigational shortcut to the retailer’s website. Consequently, bricks-and-mortar-only retailers seeking to increase store purchases should approach paid search with caution.  相似文献   

5.
6.
International Journal of Computer Mathematics, 2012, 89(3-4): 237-246
We study the optimal encoding of search trees in lists and, in particular, in singly linked lists. We provide an O(n²) construction algorithm for weighted binary search trees, which can be generalized to an O(n² log m) algorithm for weighted m-ary search trees.  相似文献
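The O(n²) bound for weighted binary search trees matches the classic dynamic program with Knuth's root-monotonicity speedup; a hedged sketch of that underlying cost DP follows (the paper's list-encoding aspect is not reproduced here):

```python
def optimal_bst_cost(weights):
    """Minimum total weighted depth of a binary search tree over keys
    with access weights `weights` (root depth = 1). Knuth's restriction
    of candidate roots makes the whole DP O(n^2)."""
    n = len(weights)
    pref = [0] * (n + 1)                      # prefix sums for range weights
    for i, w in enumerate(weights):
        pref[i + 1] = pref[i] + w
    cost = [[0] * n for _ in range(n)]
    root = [[0] * n for _ in range(n)]
    for i in range(n):
        cost[i][i] = weights[i]
        root[i][i] = i
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            w = pref[j + 1] - pref[i]         # every key pays once per level
            best = float("inf")
            # Knuth: an optimal root lies between root[i][j-1] and root[i+1][j]
            for r in range(root[i][j - 1], root[i + 1][j] + 1):
                left = cost[i][r - 1] if r > i else 0
                right = cost[r + 1][j] if r < j else 0
                if left + right + w < best:
                    best = left + right + w
                    root[i][j] = r
            cost[i][j] = best
    return cost[0][n - 1]
```

For three equal-weight keys the optimum is the balanced tree: the root costs 1 and each child costs 2, for a total of 5.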

7.
8.
This paper questions the conclusion that menu search is random, not systematic. Three sources of evidence—search times per target as a function of target position, eye movement patterns during search, and the cumulative probability of locating a target as a function of time—cited in support of random search (Card, 1982, 1983) are re-examined and shown to be consistent with systematic, sequential search.  相似文献   

9.
Quantum computation, in particular Grover's algorithm, has aroused a great deal of interest since it allows a quadratic speed-up in search procedures. Classical search procedures for an N-element database require at most O(N) time. Grover's algorithm is able to find a solution with high probability in O(√N) time through an amplitude amplification scheme. In this work we draw elements from both classical and quantum computation to develop an alternative search proposal based on quantum entanglement detection schemes. In 2002, Horodecki and Ekert proposed an efficient method for direct detection of quantum entanglement. Our proposal combines quantum entanglement detection with entanglement-inducing operators. The quantum search algorithm relies on measuring a quantum superposition after applying a unitary evolution. We deviate from the standard method by focusing on fine-tuning a unitary operator in order to infer the solution with certainty. Our proposal sacrifices space for speed and depends on the mathematical properties of linear positive maps Λ which have not been operationally characterized. Whether such a Λ can be easily determined remains an open question.  相似文献
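Grover's amplitude amplification step, an oracle sign flip followed by inversion about the mean, can be simulated classically on an explicit amplitude vector. The sketch below is for intuition about the standard algorithm only; it is not the paper's entanglement-based proposal.

```python
import math

def grover_search(n_items, marked):
    """Classically simulate Grover's algorithm: ~(pi/4)*sqrt(N) rounds of
    oracle sign flip plus inversion about the mean, then read off the
    most probable index and the success probability."""
    amp = [1.0 / math.sqrt(n_items)] * n_items   # uniform superposition
    rounds = max(1, round(math.pi / 4 * math.sqrt(n_items)))
    for _ in range(rounds):
        amp[marked] = -amp[marked]               # oracle marks the solution
        mean = sum(amp) / n_items
        amp = [2 * mean - a for a in amp]        # diffusion operator
    probs = [a * a for a in amp]
    return max(range(n_items), key=probs.__getitem__), probs[marked]
```

For N = 16 this runs 3 rounds and concentrates over 90% of the measurement probability on the marked item, versus the 1/16 a single classical guess would give.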

10.

There are limited studies that are addressing the challenges of visually impaired (VI) users when viewing search results on a search engine interface by using a screen reader. This study investigates the effect of providing an overview of search results to VI users. We present a novel interactive search engine interface called InteractSE to support VI users during the results exploration stage in order to improve their interactive experience and web search efficiency. An overview of the search results is generated using an unsupervised machine learning approach to present the discovered concepts via a formal concept analysis that is domain-independent. These concepts are arranged in a multi-level tree following a hierarchical order and covering all retrieved documents that share maximal features. The InteractSE interface was evaluated by 16 legally blind users and compared with the Google search engine interface for complex search tasks. The evaluation results were obtained based on both quantitative (as task completion time) and qualitative (as participants’ feedback) measures. These results are promising and indicate that InteractSE enhances the search efficiency and consequently advances user experience. Our observations and analysis of the user interactions and feedback yielded design suggestions to support VI users when exploring and interacting with search results.

  相似文献   

11.
Developers commonly make use of a web search engine such as Google to locate online resources to improve their productivity. A better understanding of what developers search for could help us understand their behaviors and the problems that they meet during the software development process. Unfortunately, we have a limited understanding of what developers frequently search for and of the search tasks that they often find challenging. To address this gap, we collected search queries from 60 developers and surveyed 235 software engineers from more than 21 countries across five continents. In particular, we asked our survey participants to rate the frequency and difficulty of 34 search tasks grouped along the following seven dimensions: general search, debugging and bug fixing, programming, third-party code reuse, tools, database, and testing. We find that searching for explanations of unknown terminology, explanations of exceptions/error messages (e.g., HTTP 404), reusable code snippets, solutions to common programming bugs, and suitable third-party libraries/services are the most frequent search tasks that developers perform, while searching for solutions to performance bugs, solutions to multi-threading bugs, public datasets to test newly developed algorithms or systems, reusable code snippets, best industrial practices, database optimization solutions, solutions to security bugs, and solutions to software configuration bugs are the search tasks that developers find most difficult. Our study sheds light on why practitioners often perform some of these tasks and why they find some of them challenging. We also discuss the implications of our findings for future research in several areas, e.g., code search engines, domain-specific search engines, and automated generation and refinement of search queries.  相似文献

12.
1 Introduction  Automatically generating a limited number of high-quality inputs to reveal bugs, crashes, and hangs has become a central problem in software testing research [1]. The two most extensively studied approaches to automated test input generation are fuzzing [2] and dynamic symbolic execution [3,4] (DSE). Both fuzzing and DSE can effectively generate structural test inputs for non-trivial programs without human aid, and both are extensively studied in the existing literature [3-5]. However, they are also considerably different. To further understand in what sense they are similar or different, and to understand the strengths and limitations of both techniques, we raise the following two questions: (Q1) How do existing fuzzing and DSE techniques model the input space and manage the search procedure?  相似文献
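A mutational fuzzer of the kind contrasted with DSE above can be sketched in a few lines. This is a toy illustration: the `parse` target and its crash-on-0xFF condition are invented for the demo, and real fuzzers add coverage feedback, richer mutations, and crash triage.

```python
import random

def mutate(data, rng):
    """One random byte substitution, the basic move of a mutational fuzzer."""
    data = bytearray(data)
    if not data:
        return bytes([rng.randrange(256)])
    data[rng.randrange(len(data))] = rng.randrange(256)
    return bytes(data)

def fuzz(target, seeds, budget=5000, rng=None):
    """Mutate seed inputs, run the target on each candidate, and collect
    the inputs that make it raise (i.e., 'crash')."""
    rng = rng or random.Random(0)          # seeded for reproducibility
    crashes = []
    for _ in range(budget):
        candidate = mutate(rng.choice(seeds), rng)
        try:
            target(candidate)
        except Exception:
            crashes.append(candidate)
    return crashes

def parse(data):
    """Toy target, invented for the demo: 'crashes' on any 0xFF byte."""
    if 0xFF in data:
        raise ValueError("crash")
```

DSE would instead derive the crashing condition symbolically from the branch in `parse`, rather than stumbling on it by random mutation; that difference in how the input space is searched is exactly what question (Q1) asks about.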

13.
Keyword search enables inexperienced users to easily search an XML database with no specific knowledge of complex structured query languages or XML data schemas. Existing work has addressed the problem of selecting data nodes that match keywords and connecting them in a meaningful way, e.g., SLCA and ELCA. However, it is time-consuming and unnecessary to serve all the connected subtrees to users, because in general users are only interested in a subset of the relevant results. In this paper, we propose a new keyword search approach that utilizes statistics of the underlying XML data to decide the promising result types and then quickly retrieves the corresponding results with the help of those selected promising result types. To guarantee the quality of the selected promising result types, we measure the correlation between result types and a keyword query by analyzing the distribution of relevant keywords and their structures within the XML data to be searched. In addition, relevant result types can be computed efficiently without keyword query evaluation or any schema information. To directly return the top-k keyword search results that conform to the suggested promising result types, we design two new algorithms that adapt to the structural sensitivity of the keyword nodes over the keyword search results. Lastly, we implement all proposed approaches and present experimental results showing the effectiveness of our approach.  相似文献

14.
Klein (Journal of Business Research 41(3): 195–203, 1998) posited that the Web can transform experience goods into search goods (ES shifts). We examine her proposition in three ways. First, we critically assess the background of her proposition in light of the Web evolution in the past decade. Second, we conduct a comparison of past studies that measured the extent of search, experience, and credence (SEC) characteristics of goods. Third, we report the results of an exploratory survey on a set of commonly purchased products to benchmark possible ES shifts against the past studies. Their results indicate that SEC classification changes do not seem significant.  相似文献   

15.
Search engines play a critical role in the diffusion of online information because they determine what content is easily visible to Web users. Major search engines, such as Google, Microsoft Live Search, and Yahoo!, provide two distinct types of results, organic and paid, each of which uses different mechanisms for selecting and ranking relevant Web pages. Using a third-party trust assurance program from BBB (Better Business Bureau) Online we find that vendors represented by websites in organic and paid results have varying reliability ratings. These ratings, based on overall customer experiences, may range from satisfactory to unsatisfactory. We empirically examine how vendors' reliability ratings from BBB Online are associated with cues (such as type of search result, relative price of a product, and number of sites selling the product) that can be observed or derived from organic and paid search results. Further, we apply a data mining technique to predict the vendors' BBB reliability ratings using those cues and achieve good performance.  相似文献   

16.
This paper presents a parallel algorithm for fast word search to determine the set of biological words of an input DNA sequence. The algorithm is designed to scale well on state-of-the-art multiprocessor/multicore systems for large inputs and large maximum word sizes. The pattern exhibited by many sequential solutions to this problem is a repetitive execution over a large input DNA sequence, and the generation of large amounts of output data to store and retrieve the words determined by the algorithm. As we show, this pattern does not lend itself to straightforward standard parallelization techniques. The proposed algorithm aims to achieve three major goals to overcome the drawbacks of embarrassingly parallel solution techniques: (i) to impose a high degree of cache locality on a problem that, by nature, tends to exhibit nonlocal access patterns, (ii) to be lock free or largely reduce the need for data access locking, and (iii) to enable an even distribution of the overall processing load among multiple threads. We present an implementation and performance evaluation of the proposed algorithm on DNA sequences of various sizes for different organisms on a dual processor quad-core system with a total of 8 cores. We compare the performance of the parallel word search implementation with a sequential implementation and with an embarrassingly parallel implementation. The results show that the proposed algorithm far outperforms the embarrassingly parallel strategy and achieves speed-ups of up to 6.9 on our 8-core test system.  相似文献
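Two of the goals above, lock-free counting and even load distribution, can be illustrated with per-worker private counters that are merged once at the end. This is a toy Python sketch, not the paper's algorithm (its cache-locality machinery is not reproduced); word start positions are simply partitioned so each word is counted exactly once.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def count_words(seq, k, n_workers=4):
    """Count every length-k word of a DNA sequence. Start positions are
    partitioned evenly across workers, each worker fills a private
    Counter (no locking), and the counters are merged at the end."""
    last = len(seq) - k + 1                    # number of word positions
    chunk = max(1, (last + n_workers - 1) // n_workers)

    def count_chunk(start):
        local = Counter()                      # private: no lock needed
        for i in range(start, min(start + chunk, last)):
            local[seq[i:i + k]] += 1
        return local

    total = Counter()
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        for local in pool.map(count_chunk, range(0, last, chunk)):
            total.update(local)                # single-threaded merge
    return total
```

In "ACGTACGT" the word "ACGT" occurs at positions 0 and 4, and there are five length-4 words in total.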

17.
Product quantization is now considered an effective approach to the approximate nearest neighbor (ANN) search problem, and a collection of derivative algorithms has been developed. However, current techniques ignore the intrinsic high-order structures of the data, which usually contain information helpful for improving computational precision. In this paper, aiming at the complex structure of high-order data, we design an optimized technique, called optimized high-order product quantization (O-HOPQ), for ANN search. In O-HOPQ, we incorporate the high-order structures of the data into the design of a more effective subspace decomposition. As a result, spatially adjacent elements in the high-order data space are grouped into the same subspace. O-HOPQ then generates its spatially structured codebook by optimizing the quantization distortion. Starting from the structured codebook, globally optimal quantizers can be obtained effectively and efficiently. Experimental results show that appropriate use of the information latent in the complex structure of high-order data yields significant improvements in the performance of the product quantizers. Moreover, the high-order-structure-based approaches are effective in scenarios where the data have intrinsic complex structures.  相似文献
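Plain product quantization, the baseline that O-HOPQ builds on, splits each vector into subvectors and quantizes each against its own codebook; distances are then computed per subspace against the reconstruction. A minimal sketch with fixed (untrained) codebooks, omitting the k-means training and O-HOPQ's structured decomposition:

```python
def pq_encode(vec, codebooks):
    """Split `vec` into len(codebooks) equal subvectors and replace each
    by the index of its nearest centroid in that subspace's codebook."""
    m = len(codebooks)
    d = len(vec) // m
    code = []
    for s in range(m):
        sub = vec[s * d:(s + 1) * d]
        code.append(min(range(len(codebooks[s])),
                        key=lambda j: sum((a - b) ** 2
                                          for a, b in zip(sub, codebooks[s][j]))))
    return code

def pq_distance(query, code, codebooks):
    """Asymmetric distance: squared distance between the raw query and
    the reconstruction of an encoded database vector, summed per subspace."""
    m = len(codebooks)
    d = len(query) // m
    return sum(sum((a - b) ** 2
                   for a, b in zip(query[s * d:(s + 1) * d],
                                   codebooks[s][code[s]]))
               for s in range(m))
```

Each database vector is stored as just one centroid index per subspace, which is where the memory savings of product quantizers come from.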

18.
19.
The theory of learning styles states that people have different approaches to learning and studying [7,8]. Given a specific instruction method or environment, some people will learn more effectively than others due to their individual learning styles, and the resulting grade distribution will be bell-shaped, with the majority of learners in the middle of the curve. Several studies show that there is ‘No Significant Difference’ when technology is applied to instruction [6, 10, 12, 15, 20, 23, 25]: whether in a traditional classroom or in any of the technological environments, there is only one form of instruction, usually from one source, yielding the familiar bell-shaped grade distribution. This explains the ‘No Significant Difference’ results and indicates that another instruction method needs to be investigated. One approach to achieving ‘A Significant Difference’ is to provide several different instruction methods. This paper describes Arthur, a Web-based instruction system that provides adaptive instruction to achieve ‘A Significant Difference’.  相似文献

20.
Combining search space partition and abstraction for LTL model checking   (total citations: 2; self-citations: 0; citations by others: 2)
The state space explosion problem is still the key obstacle for applying model checking to systems of industrial size. Abstraction-based methods have been particularly successful in this regard. This paper presents an approach based on refinement of search space partition and abstraction which combines these two techniques for reducing the complexity of model checking. The refinement depends on the representation of each portion of search space. Especially, search space can be refined stepwise to get a better reduction. As reported in the case study, the integration of search space partition and abstraction improves the efficiency of verification with respect to the requirement of memory and obtains significant advantage over the use of each of them in isolation.  相似文献   


Copyright©北京勤云科技发展有限公司  京ICP备09084417号