871.
Automated debugging attempts to locate the reason for a failure. Delta debugging minimizes the difference between two inputs, one processed correctly and one causing a failure, using a series of test runs to determine the outcome of applied changes. Delta debugging is applicable to inputs or to the program itself, as long as a correct version of the program exists. However, complex errors are often masked by other program defects, making it impossible in such cases to obtain a correct version through delta debugging. Iterative delta debugging extends delta debugging by removing a series of defects step by step until the originally unresolved defect is isolated. The method is automated and has localized bugs in several real-life examples.
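The minimization loop the abstract describes can be sketched as the classic ddmin procedure. This is a simplified illustration: the chunk-splitting schedule is the textbook one, and the `test` oracle returning "FAIL"/"PASS" is an assumed interface, not the paper's.

```python
def ddmin(failing_input, test):
    """Minimize `failing_input` (a list of change units) while `test`
    still reports "FAIL". Classic ddmin with increasing granularity."""
    n = 2  # start by splitting the input into two chunks
    while len(failing_input) >= 2:
        chunk = len(failing_input) // n
        reduced = False
        for i in range(n):
            # try removing the i-th chunk (test the complement)
            complement = (failing_input[:i * chunk]
                          + failing_input[(i + 1) * chunk:])
            if test(complement) == "FAIL":
                failing_input = complement  # smaller input still fails
                n = max(n - 1, 2)
                reduced = True
                break
        if not reduced:
            if n >= len(failing_input):
                break  # finest granularity reached
            n = min(n * 2, len(failing_input))  # refine the split
    return failing_input
```

For example, if the failure is triggered by a single element, ddmin isolates it while the oracle keeps confirming that the reduced input still fails.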
872.
873.
According to the human factors paradigm for patient safety, health care work systems and innovations such as electronic medical records do not have direct effects on patient safety. Instead, their effects are contingent on how the clinical work system, whether computerized or not, shapes health care providers’ performance of cognitive work processes. An application of the human factors paradigm to interview data from two hospitals in the Midwest United States yielded numerous examples of the performance-altering effects of electronic medical records, electronic clinical documentation, and computerized provider order entry. Findings describe both improvements and decrements in the ease and quality of cognitive performance, both for interviewed clinicians and for their colleagues and patients. Changes in cognitive performance appear to have desirable and undesirable implications for patient safety as well as for quality of care and other important outcomes. Cognitive performance can also be traced to interactions between work system elements, including new technology, allowing for the discovery of problems with “fit” to be addressed through design interventions.
874.
Thresholding is one of the most popular approaches to image segmentation in computational vision systems. Recently, M. Albuquerque proposed a thresholding method (Albuquerque et al. in Pattern Recognit Lett 25:1059–1065, 2004) based on the Tsallis entropy, a generalization of the traditional Shannon entropy through the introduction of an entropic parameter q. However, the solution can depend strongly on the value of q, and an automatic way to compute a suitable q remains an open problem. In this paper, we propose a generalization of the Tsallis theory in order to improve the non-extensive segmentation method. Specifically, we exploit a key property of Tsallis theory, the pseudo-additive property, which gives the formalism for computing the joint entropy of two probability distributions for a given q value. Our idea is to use the original Albuquerque algorithm to compute an initial threshold and then update q using the ratio of the background and foreground areas observed in the image histogram. The proposed technique is less sensitive to the q value and outperforms the Albuquerque and k-means algorithms, as we demonstrate on both ultrasound breast cancer images and synthetic data.
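The pseudo-additive property referred to above, S(A+B) = S(A) + S(B) + (1−q)·S(A)·S(B), can be turned into a threshold search over a grey-level histogram. The sketch below shows the basic non-extensive criterion with a fixed q; the fixed q and the function name are illustrative assumptions, not the paper's adaptive q-update.

```python
import numpy as np

def tsallis_threshold(hist, q=0.8):
    """Pick the threshold t maximizing the pseudo-additive Tsallis entropy
    S_A(t) + S_B(t) + (1 - q) * S_A(t) * S_B(t) of the background (below t)
    and foreground (at or above t). `hist` is a 1-D array of grey-level
    counts. Sketch of the non-extensive criterion, not the authors' code."""
    p = hist / hist.sum()
    best_t, best_s = 0, -np.inf
    for t in range(1, len(p)):
        pa, pb = p[:t].sum(), p[t:].sum()
        if pa == 0 or pb == 0:
            continue  # one class empty: entropy undefined, skip
        sa = (1.0 - np.sum((p[:t] / pa) ** q)) / (q - 1.0)
        sb = (1.0 - np.sum((p[t:] / pb) ** q)) / (q - 1.0)
        s = sa + sb + (1.0 - q) * sa * sb
        if s > best_s:
            best_t, best_s = t, s
    return best_t
```

On a clearly bimodal histogram the selected threshold falls between the two modes, separating background from foreground.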
875.
Scanning laser range sensors provide range data consisting of a set of point measurements. The URG-04LX laser sensor has a distance range of approximately 0.02–4 m and a scanning angle range of 240°. Usually, such a range image is acquired from one viewpoint by “moving” the laser beam using rotating mirrors/prisms. The orientation of the laser beam can easily be measured and converted into the coordinates of the image. This article addresses localization using virtual labels together with distance data obtained from a 2D laser range sensor. The method places virtual labels on salient features and points along the mobile robot’s path. The current location is calculated by combining the virtual labels with the range image of the laser range finder.
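Converting beam orientations into coordinates, as described above, amounts to a polar-to-Cartesian transform over the 240° scan. The sketch below assumes evenly spaced beams centred on the robot's heading; the function name and the even-spacing model are illustrative assumptions, not a driver for the actual sensor.

```python
import math

def scan_to_points(ranges, angle_span_deg=240.0):
    """Convert a URG-04LX-style scan (ranges in metres, evenly spaced over
    `angle_span_deg`, centred on the sensor's forward axis) into (x, y)
    points in the sensor frame. Readings outside the sensor's valid
    distance range (roughly 0.02-4 m) are discarded."""
    n = len(ranges)
    half = math.radians(angle_span_deg) / 2.0
    step = math.radians(angle_span_deg) / (n - 1)
    points = []
    for i, r in enumerate(ranges):
        if not 0.02 <= r <= 4.0:
            continue  # out-of-range reading, no valid return
        theta = -half + i * step  # beam orientation for the i-th sample
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points
```

The resulting point set is what feature extraction (and hence virtual-label placement) would operate on.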
876.
A New Approach for Multi-Document Update Summarization

877.
Large-scale object-oriented (OO) software systems have recently been found to share global network characteristics such as small-world and scale-free structure, which go beyond the scope of traditional software measurement and assessment methodologies. To measure complexity at various levels of granularity, namely graph, class (and object), and source code, we propose a hierarchical set of metrics in terms of coupling and cohesion, the most important characteristics of software, and analyze a sample of 12 open-source OO software systems to empirically validate the set. Experimental results on the correlations between cross-level metrics indicate that the graph measures of our set complement traditional software metrics well from the viewpoint of network thinking, and provide more effective information about fault-prone classes in practice.
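A minimal sketch of class-level coupling measured on a dependency graph, in the spirit of the network view above. The (source, target) edge representation and the fan-in/fan-out metric names are illustrative assumptions, not the paper's hierarchical metric set.

```python
from collections import defaultdict

def coupling_metrics(dependencies):
    """Compute per-class fan-out (efferent coupling: classes this class
    depends on) and fan-in (afferent coupling: classes depending on it)
    from a class-level dependency graph given as (source, target) pairs."""
    fan_out = defaultdict(set)
    fan_in = defaultdict(set)
    for src, dst in dependencies:
        if src == dst:
            continue  # a self-dependency carries no coupling
        fan_out[src].add(dst)
        fan_in[dst].add(src)
    classes = set(fan_out) | set(fan_in)
    # map each class to its (fan_out, fan_in) pair
    return {c: (len(fan_out[c]), len(fan_in[c])) for c in classes}
```

On such a graph, classes with unusually high fan-in or fan-out are the natural candidates to cross-check against fault data, which is the kind of correlation the abstract reports.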
878.
New-generation sequencing systems are changing how molecular biology is practiced. The widely promoted $1000 genome will be a reality, with attendant changes for healthcare, including personalized medicine. More broadly, the genomes of many new organisms, with large samplings from populations, will be commonplace. What is less appreciated is the explosive demands on computation, both for CPU cycles and storage, as well as the need for new computational methods. In this article we survey some of these developments.
879.
We present and analyze an unsupervised method for Word Sense Disambiguation (WSD). Our work is based on the method presented by McCarthy et al. in 2004 for finding the predominant sense of each word in an entire corpus. Their maximization algorithm lets weighted terms (similar words) from a distributional thesaurus accumulate a score for each ambiguous word sense; the sense with the highest score is chosen based on votes from a weighted list of terms related to the ambiguous word. This list is obtained using the distributional similarity method proposed by Dekang Lin to build a thesaurus. In the method of McCarthy et al., every occurrence of the ambiguous word uses the same thesaurus, regardless of the context in which the ambiguous word occurs. Our method accounts for the context of a word when determining its sense by building the list of distributionally similar words from the syntactic context of the ambiguous word. We obtain a top precision of 77.54% versus 67.10% for the original method when tested on SemCor. We also analyze the effect of the number of weighted terms on the tasks of finding the Most Frequent Sense (MFS) and WSD, and experiment with several corpora for building the word space model.
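The weighted-vote scoring described above can be sketched as follows. The `similarity` scorer and the toy sense inventory in the usage below are injected assumptions, not McCarthy et al.'s exact WordNet-based measure.

```python
def predominant_sense(senses, neighbors, similarity):
    """Score each candidate sense by letting distributionally similar words
    vote for it. `neighbors` is a list of (word, weight) pairs from a
    distributional thesaurus; `similarity` is any sense-to-word scorer
    (e.g. a WordNet-based measure), injected as an assumption here.
    Returns the best sense and the full score table."""
    scores = {}
    for sense in senses:
        # each neighbor votes with its thesaurus weight times its
        # semantic similarity to this sense
        scores[sense] = sum(w * similarity(sense, word)
                            for word, w in neighbors)
    return max(scores, key=scores.get), scores
```

With context-specific neighbor lists, the same function disambiguates individual occurrences rather than picking one corpus-wide predominant sense, which is the extension the abstract proposes.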
880.
Attribute-Based Signature with Policy-and-Endorsement Mechanism
In this paper a new signature scheme, called Policy-Endorsing Attribute-Based Signature, is developed to correspond with the existing Ciphertext-Policy Attribute-Based Encryption. The signature provides a policy-and-endorsement mechanism, in which a single user whose attributes satisfy the predicate endorses the message. The signature allows the signer to announce his endorsement using an access policy without revealing his identity. The security of this signature, selfless anonymity and existential unforgeability, is based on the Strong Diffie-Hellman assumption and the Decision Linear assumption in bilinear map groups.