971.
Thresholding is one of the most popular approaches to image segmentation in computational vision systems. Recently, M. Albuquerque proposed a thresholding method (Albuquerque et al. in Pattern Recognit Lett 25:1059–1065, 2004) based on the Tsallis entropy, a generalization of the traditional Shannon entropy obtained by introducing an entropic parameter q. However, the solution can depend strongly on the value of q, and the development of an automatic approach to compute a suitable value for q also remains an open problem. In this paper, we propose a generalization of the Tsallis theory in order to improve the non-extensive segmentation method. Specifically, we build on a key property of Tsallis theory, the pseudo-additive property, which gives the formalism for computing the total entropy from two probability distributions for a given q value. Our idea is to use M. Albuquerque's original algorithm to compute an initial threshold and then update the q value using the ratio of the background and foreground areas observed in the image histogram. The proposed technique is less sensitive to the q value and outperforms both M. Albuquerque's method and the k-means algorithm, as we demonstrate on both ultrasound breast cancer images and synthetic data.
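As a rough illustration of the pipeline described above, the following Python sketch maximizes the pseudo-additive Tsallis entropy over all histogram splits and then re-thresholds with an updated q. The specific q-update rule shown (background/foreground area ratio at the initial threshold) and all function names are assumptions made for illustration, not the authors' exact formulation.

    import numpy as np

    def tsallis_entropy(p, q):
        # S_q(p) = (1 - sum_i p_i^q) / (q - 1) for a normalized distribution p
        p = p[p > 0]
        if abs(q - 1.0) < 1e-9:
            return -np.sum(p * np.log(p))  # Shannon limit as q -> 1
        return (1.0 - np.sum(p ** q)) / (q - 1.0)

    def tsallis_threshold(hist, q):
        # Maximize S_q(A) + S_q(B) + (1 - q) * S_q(A) * S_q(B) over all splits
        # of the gray-level histogram into background A and foreground B.
        p = hist.astype(float) / hist.sum()
        best_t, best_s = 0, -np.inf
        for t in range(1, len(p)):
            wa, wb = p[:t].sum(), p[t:].sum()
            if wa == 0 or wb == 0:
                continue
            sa = tsallis_entropy(p[:t] / wa, q)
            sb = tsallis_entropy(p[t:] / wb, q)
            s = sa + sb + (1.0 - q) * sa * sb
            if s > best_s:
                best_t, best_s = t, s
        return best_t

    def adaptive_q_threshold(hist, q0=0.8):
        # Illustrative update step: take the initial threshold, derive a new q
        # from the background/foreground area ratio, and re-threshold.
        t0 = tsallis_threshold(hist, q0)
        p = hist.astype(float) / hist.sum()
        q1 = p[:t0].sum() / max(p[t0:].sum(), 1e-12)
        return tsallis_threshold(hist, q1)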
972.
Scanning laser range sensors provide range data consisting of a set of point measurements. The URG-04LX laser sensor has a distance range of approximately 0.02–4 m and a scanning angle range of 240°. Usually, such a range image is acquired from one viewpoint by sweeping the laser beam with rotating mirrors/prisms; the orientation of the beam is easily measured and converted into the coordinates of the image. This article performs localization using virtual labels together with distance data about the environment obtained from a 2D laser distance sensor. The method places virtual labels on special features and points along the mobile robot's path; the current location is then calculated by combining the virtual labels with the range image from the laser range finder.
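The beam-angle-to-coordinate conversion mentioned above is straightforward; here is a minimal Python sketch, assuming equally spaced beams across the field of view (the URG-04LX limits are taken from the abstract, the function name is mine):

    import numpy as np

    def scan_to_points(ranges, fov_deg=240.0, r_min=0.02, r_max=4.0):
        # Convert one scan (assumed equally spaced beams across the field of
        # view) into 2D Cartesian points in the sensor frame.
        ranges = np.asarray(ranges, dtype=float)
        angles = np.deg2rad(np.linspace(-fov_deg / 2.0, fov_deg / 2.0, len(ranges)))
        valid = (ranges >= r_min) & (ranges <= r_max)  # drop out-of-range readings
        return np.column_stack((ranges[valid] * np.cos(angles[valid]),
                                ranges[valid] * np.sin(angles[valid])))

Matching such points against stored virtual-label positions would then yield the robot's location estimate.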
973.
A New Approach for Multi-Document Update Summarization

974.
Large-scale object-oriented (OO) software systems have recently been found to share global network characteristics, such as small-world and scale-free structure, that go beyond the scope of traditional software measurement and assessment methodologies. To measure complexity at various levels of granularity, namely graph, class (and object), and source code, we propose a hierarchical set of metrics in terms of coupling and cohesion, the most important characteristics of software, and analyze a sample of 12 open-source OO software systems to empirically validate the set. Experimental results on the correlations between cross-level metrics indicate that the graph measures of our set complement traditional software metrics well from the viewpoint of network thinking, and provide more effective information about fault-prone classes in practice.
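The abstract does not give the metric definitions themselves, but the flavor of graph-level measurement can be sketched in Python with networkx; the degree-based figures below are a toy stand-in for one level of such a hierarchy, not the paper's actual metric set:

    import networkx as nx

    def class_coupling(dep: nx.DiGraph):
        # Toy class-level coupling: fan-in + fan-out on a class-dependency digraph.
        return {c: dep.in_degree(c) + dep.out_degree(c) for c in dep.nodes}

    def graph_summary(dep: nx.DiGraph):
        # Graph-level view: degree statistics of the kind inspected when
        # checking for small-world / scale-free structure.
        degs = [d for _, d in dep.degree()]
        return {"nodes": dep.number_of_nodes(),
                "edges": dep.number_of_edges(),
                "mean_degree": sum(degs) / len(degs) if degs else 0.0,
                "max_degree": max(degs, default=0)}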
975.
New-generation sequencing systems are changing how molecular biology is practiced. The widely promoted $1000 genome will become a reality, with attendant changes for healthcare, including personalized medicine. More broadly, the genomes of many new organisms, with large samplings from populations, will be commonplace. What is less appreciated is the explosive demand on computation, both for CPU cycles and storage, as well as the need for new computational methods. In this article we survey some of these developments.
976.
We present and analyze an unsupervised method for Word Sense Disambiguation (WSD). Our work is based on the method presented by McCarthy et al. in 2004 for finding the predominant sense of each word in an entire corpus. Their maximization algorithm lets weighted terms (similar words) from a distributional thesaurus accumulate a score for each sense of an ambiguous word; the sense with the highest score is chosen based on votes from a weighted list of terms related to the ambiguous word. This list is obtained using the distributional similarity method proposed by Dekang Lin to build a thesaurus. In the method of McCarthy et al., every occurrence of the ambiguous word uses the same thesaurus, regardless of the context in which the word occurs. Our method accounts for the context of a word when determining its sense by building the list of distributionally similar words from the syntactic context of the ambiguous word. We obtain a top precision of 77.54%, versus 67.10% for the original method, tested on SemCor. We also analyze the effect of the number of weighted terms on the tasks of finding the Most Frequent Sense (MFS) and WSD, and experiment with several corpora for building the Word Space Model.
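Schematically, the sense-voting step can be written as follows in Python; the relatedness measure is left abstract, the names are illustrative, and the per-occurrence neighbor list built from syntactic context is the part this paper changes relative to McCarthy et al.:

    def predominant_sense(senses, neighbors, relatedness):
        # senses:     candidate senses of the ambiguous word
        # neighbors:  [(similar_word, distributional_weight), ...] -- in this
        #   paper's variant, built from the syntactic context of the occurrence
        # relatedness(sense, word): any sense-to-word relatedness measure
        scores = {s: sum(w * relatedness(s, n) for n, w in neighbors)
                  for s in senses}
        return max(scores, key=scores.get)  # highest-voted sense wins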
977.
Attribute-Based Signature with Policy-and-Endorsement Mechanism
In this paper a new signature scheme, called Policy-Endorsing Attribute-Based Signature, is developed to correspond to the existing Ciphertext-Policy Attribute-Based Encryption. The signature provides a policy-and-endorsement mechanism in which a single user whose attributes satisfy the predicate endorses the message. It allows the signer to announce his endorsement using an access policy without having to reveal his identity. Its security properties, selfless anonymity and existential unforgeability, are based on the Strong Diffie-Hellman assumption and the Decision Linear assumption in bilinear map groups.
978.
Tests and Proofs
This special issue collects current advances in the ongoing attempt to obtain synergies from the combination of Tests and Proofs.
979.
In interactive theorem proving practice, a significant amount of time is spent on unsuccessful proof attempts for wrong conjectures. An automatic method that reveals them by generating finite counterexamples would offer extremely valuable support to a proof engineer, saving time and effort. In practice, such counterexamples tend to be small, so there is usually no need to search for large instances. Most definitions of functions or predicates on infinite structures do not preserve their semantics when restricted to arbitrary finite substructures. We propose constraints that guarantee a correct axiomatization on finite structures and present an approach that uses the Alloy Analyzer to generate finite instances of theories in the theorem prover KIV. It is evaluated on the library of basic data types as well as on some challenging case studies in KIV. The technique is implemented using the Kodkod constraint solver, a successor of Alloy.
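The idea of hunting for small finite counterexamples can be conveyed with a brute-force Python sketch; the real approach translates KIV theories to Alloy/Kodkod, so this enumeration is only an analogy:

    from itertools import product

    def find_counterexample(conjecture, domain, arity):
        # Enumerate all assignments over a small finite domain and return the
        # first one that falsifies the conjecture, or None if none exists.
        for xs in product(domain, repeat=arity):
            if not conjecture(*xs):
                return xs
        return None

    # The wrong conjecture "subtraction is commutative" falls immediately:
    # find_counterexample(lambda a, b: a - b == b - a, range(3), 2) -> (0, 1)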
980.
Combinatorial testing is an effective testing technique for revealing failures in a given system, based on coverage of input combinations and combinatorial optimization. Combinatorial testing of strength t (t ≥ 2) requires that each t-wise tuple of values of the different system input parameters be covered by at least one test case. Combinatorial test suite generation algorithms aim at producing a test suite that covers all required tuples in a small (possibly minimal) number of test cases, in order to reduce the cost of testing. The most widely used combinatorial technique is pairwise testing (t = 2), which requires coverage of all pairs of input values. Constrained combinatorial testing also takes into account constraints over the system parameters, for instance forbidden tuples of inputs that model invalid or unrealizable combinations of input values. In this paper a new approach to combinatorial testing, tightly integrated with formal logic, is presented. In this approach, test predicates are used to formalize combinatorial testing as a logical problem, and an external formal logic tool is applied to solve it. Constraints over the input domain are likewise expressed as logical predicates and handled effectively by the same tool. Moreover, inclusion or exclusion of selected tuples is supported, allowing the user to customize the test suite layout. The proposed approach is supported by a prototype tool implementation, and results of an experimental assessment are also presented.
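As an illustration of what strength-2 coverage demands, a small Python check for pairs left uncovered by a candidate suite (forbidden tuples could be subtracted from the required set in the same way; the function name is mine):

    from itertools import combinations, product

    def uncovered_pairs(params, suite):
        # params: list of value lists, one per input parameter
        # suite:  list of test cases, each assigning one value per parameter
        required = {((i, a), (j, b))
                    for i, j in combinations(range(len(params)), 2)
                    for a, b in product(params[i], params[j])}
        covered = {((i, row[i]), (j, row[j]))
                   for row in suite
                   for i, j in combinations(range(len(row)), 2)}
        return required - covered

An empty result means the suite achieves full pairwise coverage; a greedy generator can repeatedly add the test case that covers the most remaining pairs.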