Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
The official abstracts published together with many patent applications are freely available for search, e.g. using patent office websites such as Espacenet or search engines. Some service providers also offer, for a price, their own enhanced abstracts of patent applications. The authors propose a way of making an objective comparison between searching using the official abstracts and searching using the enhanced abstracts. The advantages offered by enhanced abstracts are explained, and include simple benefits such as saving time in finding the relevant prior art, and more intricate ones such as better suitability for ranking techniques.

2.
Maik. World Patent Information, 2009, 31(4): 278–284
In-depth analysis of non-patent literature prior art is a crucial step in checking the patentability of new inventions and the validity of competitors' patents, since under patent law relevant subject matter disclosed in non-patent literature is as important as any patent document. E-journal articles, as well as any scientific and technical information published on the web, are an important source of prior art that is very often insufficiently covered and indexed by commercial databases. This article reviews the search and display capabilities of the e-journal search sites of different publishers and hosts, as well as their value for full-text prior art analysis to enhance retrieval from commercial databases. Moreover, current developments and future prospects of chemical structure searching, both in e-journals and on the internet, are discussed.

3.
The European Bioinformatics Institute (EMBL-EBI) provides a free-access sequence search service that combines some of the most comprehensive, annotation-rich resources with a broad range of search and analysis tools. These resources contain patent abstracts, patent chemical compounds, patent sequences and patent equivalents extracted from the EPO, USPTO, JPO and KIPO, in addition to non-patent data. Patent protein and nucleotide sequence data are also available as annotation-enriched non-redundant datasets that clearly display priority dates. Search results are linked to a wide array of specialized databases and, for proteins, functional predictions that add in-depth annotation to help prioritize results when building a claim, without the need to search multiple databases.

4.
Web spam is a technique through which irrelevant pages obtain a higher rank than relevant pages in a search engine's results. Spam pages are generally insufficient and inappropriate results for the user. Many researchers are working in this area to detect spam pages, but no universally efficient technique has been developed so far that can detect all of them. This paper is an effort in that direction: we propose a combined approach of content-based and link-based techniques to identify spam pages. The content-based approach uses a term-density test and a Part-of-Speech (POS) ratio test, and in the link-based approach we explore collaborative detection using personalized page ranking to classify a Web page as spam or non-spam. For experimental purposes, the WEBSPAM-UK2006 dataset has been used, and the results have been compared with some existing approaches. A good and promising F-measure of 75.2% demonstrates the applicability and efficiency of our approach.
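As a rough illustration of the content-based signal described in this abstract (a sketch only; the paper's exact feature definitions, thresholds, and POS tagger are not given here), term density can be computed as the share of a page occupied by its single most frequent term:

```python
from collections import Counter

def term_density(tokens):
    """Fraction of the document occupied by its most frequent term.
    An unusually high value is a common keyword-stuffing signal."""
    counts = Counter(tokens)
    return max(counts.values()) / len(tokens)

# A keyword-stuffed page repeats one term far more often than natural text.
spam_tokens = ["cheap"] * 12 + ["watches", "buy", "now"]
ham_tokens = "the quick brown fox jumps over the lazy dog".split()

print(round(term_density(spam_tokens), 2))  # 0.8
print(round(term_density(ham_tokens), 2))   # 0.22
```

In a real classifier this feature would be combined with the POS-ratio and link-based scores before thresholding; the cut-off used here is purely illustrative.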

5.
A growing number of researchers are exploring the use of citation relationships such as direct citation, bibliographic coupling, and co-citation for information retrieval in scientific databases and digital libraries. In this paper, I propose a method of ranking the relevance of citation-based search results to a set of key, or seed, papers by measuring the number of citation relationships they share with those key papers. I tested the method against 23 published systematic reviews and found that the method retrieved 87% of the studies included in these reviews. The relevance ranking approach identified a subset of the citation search results that comprised 27% of the total documents retrieved by the method, and 7% of the documents retrieved by these reviews, but that contained 75% of the studies included in these reviews. Additional testing suggested that the method may be less appropriate for reviews that combine literature in ways that are not reflected in the literature itself. These results suggest that this ranking method could be useful in a range of information retrieval contexts.
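The scoring idea above can be sketched in a few lines (an assumption-laden simplification: the paper's exact counting rules are not reproduced here). Each paper is represented by its set of references; a candidate's score counts direct citations to and from the seed papers plus shared references (bibliographic coupling):

```python
def relationship_score(paper_refs, paper_id, seeds, refs_by_id):
    """Count citation links between one candidate and a set of seed papers:
    direct citation in either direction, plus the number of references
    shared with each seed (bibliographic coupling)."""
    score = 0
    for seed in seeds:
        seed_refs = refs_by_id[seed]
        if seed in paper_refs:                  # candidate cites the seed
            score += 1
        if paper_id in seed_refs:               # seed cites the candidate
            score += 1
        score += len(paper_refs & seed_refs)    # shared references
    return score

# Hypothetical toy corpus: reference lists keyed by paper id.
refs_by_id = {
    "seed1": {"a", "b", "cand1"},
    "cand1": {"a", "seed1"},   # cites seed1 and shares reference "a"
    "cand2": {"x"},            # no link to the seed set
}
seeds = {"seed1"}
ranked = sorted(["cand1", "cand2"],
                key=lambda p: relationship_score(refs_by_id[p], p, seeds, refs_by_id),
                reverse=True)
print(ranked)  # ['cand1', 'cand2']
```

Co-citation links (two papers cited together by a third) would require the inverse citation index as well and are omitted from this sketch.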

6.
Data outsourcing has become an important application of cloud computing. Driven by the growing security demands of data outsourcing applications, sensitive data have to be encrypted before outsourcing. How to encrypt data in such a way that the encrypted, remotely stored data can still be queried has therefore become a challenging issue. Searchable encryption schemes allow users to search over encrypted data, but most do not consider diversification of the search results, which leads to information redundancy. In this paper, a verifiable diversity ranking search scheme over encrypted outsourced data is proposed that preserves privacy in cloud computing and also supports verification of search results. The goal is to return a ranked list of diversified documents, instead of relevant documents that only deliver redundant information. Extensive experiments on a real-world dataset validate our analysis and show that the proposed solution is effective for both diversification of documents and verification.
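The diversification goal (though not the paper's encrypted, verifiable construction, which is far more involved) can be illustrated on plaintext with a maximal-marginal-relevance style reranker; all names and numbers below are hypothetical:

```python
def mmr_rerank(candidates, relevance, similarity, lam=0.7, k=2):
    """Greedy maximal marginal relevance: at each step pick the candidate
    that balances relevance against similarity to documents already chosen."""
    selected, pool = [], list(candidates)
    while pool and len(selected) < k:
        def mmr(d):
            max_sim = max((similarity(d, s) for s in selected), default=0.0)
            return lam * relevance[d] - (1 - lam) * max_sim
        best = max(pool, key=mmr)
        selected.append(best)
        pool.remove(best)
    return selected

# "A" and "B" are near-duplicates; "C" is less relevant but different.
relevance = {"A": 1.0, "B": 0.95, "C": 0.6}
def sim(x, y):
    return 0.9 if {x, y} == {"A", "B"} else 0.0

print(mmr_rerank(["A", "B", "C"], relevance, sim))  # ['A', 'C']
```

A pure relevance ranking would return the redundant pair A, B; the diversified ranking swaps in C.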

7.
A practical supplier selection model
This paper studies the supplier selection problem in which the only available information on attribute weights is a ranking of the attributes' relative importance. First, a linear programming model is constructed to determine the range of each supplier's overall evaluation value. Second, through a simple change of variables, and using the duality theory of linear programming, an explicit expression for the model's optimal solution is obtained. Finally, using a ranking formula for interval numbers, ranking weights for the suppliers are derived from the interval ranges of the overall evaluation values. A worked example demonstrates the feasibility and effectiveness of the model.
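The abstract refers to a ranking formula for interval numbers without stating it. One widely used possibility-degree formula for comparing two intervals is sketched below (an assumption for illustration, not necessarily the formula used in the paper):

```python
def possibility_geq(a, b):
    """Possibility degree that interval a = (a1, a2) is >= interval b = (b1, b2).
    A common interval-number ranking formula; returns a value in [0, 1]."""
    la, lb = a[1] - a[0], b[1] - b[0]
    if la + lb == 0:                       # both intervals degenerate to points
        return 1.0 if a[0] >= b[0] else 0.0
    return min(1.0, max(0.0, (a[1] - b[0]) / (la + lb)))

# Two suppliers whose overall evaluation values are interval numbers.
s1, s2 = (0.6, 0.8), (0.5, 0.7)
print(round(possibility_geq(s1, s2), 2))  # 0.75: s1 ranks above s2
```

For more than two suppliers, pairwise possibility degrees are typically assembled into a matrix from which ranking weights are derived.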

8.
In recent years, there has been renewed interest in applying statistical ranking criteria to identify sites on a road network which potentially present high traffic crash risks, or are over-represented in certain types of crashes, for further engineering evaluation and safety improvement. This requires that good estimates of the ranks of crash risks be obtained at individual intersections or road segments, or for some analysis zones. The nature of this site ranking problem in roadway safety is related to two well-established statistical problems known as the small area (or domain) estimation problem and the disease mapping problem. The former arises in the context of providing estimates, using sample survey data, for a small geographical area or a small socio-demographic group within a large area, while the latter stems from estimating rare disease incidences for typically small geographical areas. The statistical problem is such that direct estimates of certain parameters associated with a site (or a group of sites) cannot be produced with adequate precision, due to a small available sample size, the rareness of the event of interest, and/or a small exposed population or sub-population in question. Model-based approaches have offered several advantages for these estimation problems, including increased precision obtained by "borrowing strength" across the various sites based on available auxiliary variables, including their relative locations in space. Within the model-based approach, generalized linear mixed models (GLMM) have played key roles in addressing these problems for many years. The objective of the study on which this paper is based was to explore some of the issues raised in recent roadway safety studies regarding ranking methodologies, in light of recent statistical developments in space-time GLMM. First, general ranking approaches are reviewed, including naïve or raw crash-risk ranking, scan-based ranking, and model-based ranking. Through simulations, the limitation of using the naïve approach in ranking is illustrated. Second, following the model-based approach, the choice of decision parameters and the consideration of treatability are discussed. Third, several statistical ranking criteria that have been used in biomedical, health, and other scientific studies are presented from a Bayesian perspective. Their applications in roadway safety are then demonstrated using two data sets: one for individual urban intersections and one for rural two-lane roads at the county level. As part of the demonstration, it is shown how multivariate spatial GLMM can be used to model traffic crashes of several injury severity types simultaneously, and how the model can be used within a Bayesian framework to rank sites by crash cost per vehicle-mile traveled (instead of by crash frequency rate). Finally, the significant impact of spatial effects on overall model goodness-of-fit and site ranking performance is discussed for the two data sets examined. The paper concludes with a discussion of possible directions in which the study can be extended.

9.
Searching biopharmaceutical drug-related patent information is generally considered to be challenging. In particular, it is difficult to set up efficient search strategies that comprehensively retrieve large numbers of patent documents related to processes and methods of use, achieve a reasonable level of precision, and still remain within a particular search scope. While it is generally accepted that patent information cannot be searched using standardized approaches, it is desirable to have a basic rule set for successful biopharmaceutical drug-related patent information retrieval, particularly in the face of a steady flow of patent expirations for prominent biologic drugs. The present human recombinant insulin case study assesses keyword, sequence and classification search strategies for establishing biopharmaceutical drug-centric patent landscapes. The results of both crude and sophisticated keyword search strategies, as well as of a sequence search strategy, were compared in terms of the key information retrieval quality indicators: recall and precision. Through analyses of the relevant retrieved documents, a quality assessment of keyword choice is provided, and focused IPC and Derwent Manual classification codes and terminology are determined from the abstract titles of original patent and Derwent documentation. All of these can be used for setting up more efficient search strategies and for facilitating document categorization.

10.
M. Bonitz. Scientometrics, 1985, 7(3–6): 471–485
Selecting an appropriate set of scientific journals which best meets users' needs and the dynamics of science requires the use of weight parameters by which journals can be ranked. Previous methods are based on simple counting of relevant articles, or hits, in SDI runs. The new method proposed here combines hit numbers in SDI runs and journals' impact factors into a weight parameter called Selective Impact. The experimental results obtained show that ranking by Selective Impact leads to higher-quality conclusions being drawn from journal rank distributions.
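The abstract says hit numbers and impact factors are combined into Selective Impact but does not give the functional form; assuming a simple product for illustration, a journal ranking sketch looks like this (all journal names and numbers are made up):

```python
def selective_impact(hits, impact_factor):
    # Assumed combination rule: the abstract does not specify the exact
    # functional form by which hits and impact factor are combined.
    return hits * impact_factor

# (SDI hits, impact factor) per journal -- illustrative numbers only.
journals = {"J1": (40, 0.5), "J2": (10, 3.0), "J3": (25, 0.2)}
ranking = sorted(journals, key=lambda j: selective_impact(*journals[j]), reverse=True)
print(ranking)  # ['J2', 'J1', 'J3']
```

Note how J2, with few hits but a high impact factor, outranks J1, which a raw hit count would have placed first.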

11.
In this work we will discuss how to optimize multipliers for prime-modulus linear congruential pseudo-random number generators with respect to the spectral test. The optimization efficiency has shown itself to be strongly dependent on the encoding strategy for the multipliers within the random search process. The optimal encoding technique will be demonstrated to be a representation in terms of powers of a primitive root. A sample distributed large-scale parameter search for subsequence stable LCGs will be conducted using the developed concepts.
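The primitive-root encoding mentioned above can be sketched concretely for the well-known prime modulus 2^31 − 1, for which 7 is a primitive root: every full-period multiplier is then 7^k mod m for some exponent k, and the search runs over k rather than over raw multipliers. (The spectral-test scoring itself is omitted; the classic MINSTD multiplier 16807 = 7^5 is used as a check.)

```python
M = 2**31 - 1  # prime modulus (a Mersenne prime)
G = 7          # a primitive root modulo M

def multiplier_from_exponent(k):
    """Encode a full-period LCG multiplier as a power of the primitive root G."""
    return pow(G, k, M)

def mlcg(seed, a, n):
    """First n outputs of the multiplicative LCG x <- a*x mod M."""
    out, x = [], seed
    for _ in range(n):
        x = (a * x) % M
        out.append(x)
    return out

a = multiplier_from_exponent(5)
print(a)              # 16807, the classic MINSTD multiplier (7**5)
print(mlcg(1, a, 2))  # [16807, 282475249]
```

Searching over exponents k (coprime to M − 1) guarantees every candidate is a primitive root, i.e. yields the full period, which is part of why this encoding makes the random search efficient.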

12.
Completeness of search results is often an important demand; where it cannot be achieved, the degree of recall should at least be estimated. A simple direct search example combining a chemical with an electrical aspect is executed on several patent databases, WPI, USP, CLAIMS, CA and JAPIO, using the special features of each of them. One result is that consideration of paraphrases and special aspects is necessary even for straightforward search terms with clear definitions, such as "electrode". The other and main result is an unexpectedly high need for the use of additional databases: even the best database retrieves only 50–65% of the total result, and even a fourth database adds a considerable amount of relevant information to the results of the three others.

13.
Composite indicators aggregate domain-specific information into one index, on the basis of which countries can be assigned a relative ranking. Recently, the road safety community has become convinced of the policy-supporting role of indicators in terms of benchmarking, target setting and selection of measures. However, combining the information of a set of relevant risk indicators into one index presenting the whole picture turns out to be very challenging. In particular, the rank of a country can be largely influenced by the methodological choices made during the composite indicator building process: decisions concerning the selection of indicators, the normalisation of the indicator values, the weighting of indicators and the way of aggregating can all influence the final ranking. In this research, it is shown that the road safety ranking of countries differs significantly according to the selected weighting method, the expert choice and the set of indicators. Of these three input factors, the selection of the set of indicators is the most influential; a well-considered selection of indicators will therefore yield the largest reduction in ranking uncertainty. With a set of appropriate indicators, the proposed framework reveals the major sources of uncertainty in the creation of a composite road safety indicator.
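The sensitivity to methodological choices described above is easy to demonstrate: with min-max normalisation and weighted-sum aggregation (one common construction, not necessarily the paper's), merely changing the weights flips the top-ranked country. The indicator names and values below are invented for illustration:

```python
def minmax(values):
    """Min-max normalisation of one indicator to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def composite(indicators, weights):
    """Weighted sum of min-max normalised indicator values per country."""
    norm = {name: minmax(vals) for name, vals in indicators.items()}
    n = len(next(iter(norm.values())))
    return [sum(weights[name] * norm[name][i] for name in norm) for i in range(n)]

# Two hypothetical indicators for three countries (made-up numbers).
data = {"alcohol": [10.0, 20.0, 30.0], "speed": [30.0, 28.0, 10.0]}
equal = composite(data, {"alcohol": 0.5, "speed": 0.5})
skewed = composite(data, {"alcohol": 0.9, "speed": 0.1})
# The top-ranked country flips with the weighting scheme alone.
print(equal.index(max(equal)), skewed.index(max(skewed)))  # 1 2
```

Under equal weights country 1 ranks first; weighting the first indicator heavily promotes country 2, exactly the kind of ranking uncertainty the study quantifies.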

14.
15.
The D. (Degree), E. (Extent), R. (Relevancy) evaluation method is widely used to assess the damage states of existing reinforced concrete (RC) bridges in Taiwan. The present study is the first to distinguish between relevancy (Ra, a = 1.5, 2) and non-relevancy (Ra, a = 1) for the repair ranking of an existing RC bridge system to be assessed. The multiple-assessment-items optimization method is applied to seek the optimum repair ranking. Five existing RC bridges in Taiwan, i.e. Chi-jou, Lan-yang, Dah-jea, Dah-duh and Dah-an, are chosen as practical examples. The results show that when each bridge is judged to be non-relevant by the system, the repair ranking predicted by the D.E.R. evaluation method is correct. Nevertheless, when the bridge system has one or more important bridges, the repair ranking predicted by the D.E.R. evaluation method is not accurate. The proposed method may remedy this defect of the D.E.R. evaluation method in predicting the repair rankings of existing RC bridge systems.

16.
The article outlines the ways in which users of the U.K. Patent Office will be affected by the current computerisation project. Three areas are highlighted. First, the provision of the Patents Register online and the ways in which the Register information can be accessed. Second, the provision of classification information and its use for search purposes, including intersect searching with Register information. Third, the various uses to which the computer will be put in administering the Patents system, with particular emphasis on document receipting, caveats, monitoring, and publications.

17.
This article outlines the philosophy, design, and implementation of the Gradient, Structural, Concavity (GSC) recognition algorithm, which has been used successfully in several document reading applications. The GSC algorithm takes a quasi-multiresolution approach to feature generation; that is, several distinct feature types are applied at different scales in the image. These computed features measure the image characteristics at local, intermediate, and large scales. The local-scale features measure edge curvature in a neighborhood of a pixel, the intermediate features measure short stroke types which span several pixels, and the large features measure certain concavities which can span across the image. This philosophy, when coupled with the k-nearest neighbor classification paradigm, results in a recognizer which has both high accuracy and reliable confidence behavior. The confidences computed by this algorithm are generally high for valid class objects and low for nonclass objects. This allows it to be used in document reading algorithms which search for digit or character strings embedded in a field of objects. Applications of this paradigm to off-line digit string recognition and handwritten word recognition are discussed. Tests of the GSC classifier on large databases of digits and characters are reported. © 1996 John Wiley & Sons, Inc.

18.
The study compares the coverage, ranking, impact and subject categorization of Library and Information Science journals: specifically, 79 titles based on data from Web of Science (WoS) and 128 titles from Scopus. Comparisons were made based on prestige factor scores reported in the 2010 Journal Citation Reports and SCImago Journal Rank 2010, noting the change in ranking when the differences are calculated. The rank-normalized impact factor and the Library of Congress Classification System were used to compare impact rankings and subject categorization. There was a high degree of similarity in the rank-normalized impact factor of titles in both the WoS and Scopus databases. The searches found 162 journals, with 45 journals appearing in both databases. The rankings obtained for normalized impact scores confirm higher impact scores for titles covered in Scopus because of its larger coverage of titles. There was a mismatch of subject categorization among 34 journal titles in the two databases, and 22 of the titles were not classified under Z subject headings in the Library of Congress catalogue. The results reveal how journal title rankings change when normalized, and that the categorization of some journal titles in these databases might be incorrect.

19.
In this study, a fuzzy multi-item economic order quantity (EOQ) problem is solved by employing four different fuzzy ranking methods. All of the parameters of the multi-item EOQ problem are defined as triangular fuzzy numbers. Fuzzy ranking methods are used to rank the fuzzy objective values and to handle the constraints in the model. The results obtained by employing different fuzzy ranking methods are also compared.
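The four ranking methods compared in the paper are not named in this abstract; as a minimal illustration of what ranking triangular fuzzy numbers means, one common defuzzification-based approach (the centroid method, assumed here for illustration) reduces each triple (a, b, c) to a crisp score:

```python
def centroid(tfn):
    """Centroid defuzzification of a triangular fuzzy number (a, b, c)."""
    a, b, c = tfn
    return (a + b + c) / 3.0

# Hypothetical fuzzy unit costs of three items, as triangular fuzzy numbers.
costs = {"item1": (8, 10, 12), "item2": (7, 9, 11), "item3": (9, 11, 14)}
ranked = sorted(costs, key=lambda k: centroid(costs[k]))
print(ranked)  # ['item2', 'item1', 'item3'] -- cheapest first
```

Different ranking methods can order the same fuzzy numbers differently, which is precisely what the study's comparison examines.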

20.
Material selection is a fast-growing multi-criteria decision-making (MCDM) problem involving a large number of factors that influence the selection process. Proper choice of material is a critical issue for the success and competitiveness of manufacturing organizations in the global market. Selecting the most appropriate material for a particular engineering application is a time-consuming and expensive process in which several candidate materials available in the market are taken into consideration as tentative alternatives. Although a large number of mathematical approaches are now available to evaluate, select and rank alternative materials for a given engineering application, this paper explores the applicability and capability of two relatively new MCDM methods, i.e. the complex proportional assessment (COPRAS) and evaluation of mixed data (EVAMIX) methods, for materials selection. These two methods are used to rank the alternative materials, for which several requirements are considered simultaneously. Two illustrative examples are cited which show that these two MCDM methods can be effectively applied to solve real-world material selection problems. In each example, a ranked list of all the possible choices, from the best to the worst suitable material, is obtained, which closely matches the rankings derived by past researchers.
