891.
The Individual Haplotyping MFR problem is a computational problem that, given a set of DNA sequence fragment data of an individual, induces the corresponding haplotypes by dropping the minimum number of fragments. Bafna, Istrail, Lancia, and Rizzi proposed an algorithm of time O(2^(2k) m^2 n + 2^(3k) m^3) for the problem, where m is the number of fragments, n is the number of SNP sites, and k is the maximum number of holes in a fragment. When there are mate-pairs in the input data, the parameter k can be as large as 100, which would make the Bafna-Istrail-Lancia-Rizzi algorithm impracticable. The current paper introduces a new algorithm, PM-MFR, whose running time is bounded in terms of k1 and k2, where k1 is the maximum number of SNP sites that a fragment covers (k1 is smaller than n) and k2 is the maximum number of fragments that cover a SNP site (k2 is usually about 10). Since the time complexity of the algorithm PM-MFR is not directly related to the parameter k, the algorithm solves the Individual Haplotyping MFR problem with mate-pairs more efficiently and is more practical in real biological applications. This research was supported in part by the National Natural Science Foundation of China under Grant Nos. 60433020 and 60773111, the Program for New Century Excellent Talents in University No. NCET-05-0683, the Program for Changjiang Scholars and Innovative Research Team in University No. IRT0661, and the Scientific Research Fund of Hunan Provincial Education Department under Grant No. 06C526.
892.
In this paper we describe a general grouping technique to devise faster and simpler approximation schemes for several scheduling problems. We illustrate the technique on two different scheduling problems: scheduling on unrelated parallel machines with costs and the job shop scheduling problem. The time complexity of the resulting approximation schemes is always linear in the number n of jobs, and the multiplicative constant hidden in the O(n) running time is reasonably small and independent of the error ε. Supported by Swiss National Science Foundation project 200020-109854, "Approximation Algorithms for Machine scheduling Through Theory and Experiments II". A preliminary version of this paper appeared in the Proceedings of ESA'01.
893.
A hybrid aggregation and compression technique for road network databases
Vector data, and in particular road networks, are being queried, hosted, and processed in many application domains such as mobile computing. Many client systems, such as PDAs, prefer to receive query results in unrasterized format, without introducing an overhead on overall system performance or result size. While several general vector data compression schemes have been studied by different communities, we propose a novel approach to vector data compression that is easily integrated within a geospatial query processing system. It uses line aggregation to reduce the number of relevant tuples and Huffman compression to achieve a multi-resolution compressed representation of a road network database. Our experiments, performed on an end-to-end prototype, verify that our approach exhibits fast query processing on both the client and server sides as well as a high compression ratio.
Corresponding author: Cyrus Shahabi
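The abstract above describes the pipeline only at a high level, so the following minimal Python sketch illustrates the general idea of line aggregation followed by Huffman coding of delta-quantized coordinates. It is not the paper's actual implementation; the aggregation rule, the quantization scale, and all function names are illustrative assumptions.

```python
# Minimal sketch (not the paper's exact pipeline): aggregate consecutive road
# segments into one polyline, delta-encode the quantized coordinates, and
# Huffman-code the resulting symbol stream. All names here are illustrative.
import heapq
from collections import Counter
from itertools import count

def aggregate(segments):
    """Chain segments whose endpoints touch into a single polyline (one tuple)."""
    points = [segments[0][0]]
    for start, end in segments:
        if start != points[-1]:
            points.append(start)
        points.append(end)
    return points

def huffman_codes(symbols):
    """Return {symbol: bitstring} built from symbol frequencies."""
    freq = Counter(symbols)
    tiebreak = count()
    heap = [[w, next(tiebreak), [sym, ""]] for sym, w in freq.items()]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate single-symbol case
        return {heap[0][2][0]: "0"}
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0], next(tiebreak), *lo[2:], *hi[2:]])
    return {sym: code for sym, code in heap[0][2:]}

def compress_polyline(points, scale=1e4):
    """Quantize, delta-encode, and Huffman-code a polyline's coordinates."""
    q = [(round(x * scale), round(y * scale)) for x, y in points]
    deltas = [(x2 - x1, y2 - y1) for (x1, y1), (x2, y2) in zip(q, q[1:])]
    symbols = [v for d in deltas for v in d]
    codes = huffman_codes(symbols)
    bits = "".join(codes[s] for s in symbols)
    return q[0], codes, bits                  # first point kept verbatim

if __name__ == "__main__":
    road = [((0.0, 0.0), (0.001, 0.0)), ((0.001, 0.0), (0.002, 0.0001))]
    start, codebook, bitstream = compress_polyline(aggregate(road))
    print(start, codebook, len(bitstream), "bits")
```

Delta-encoding before the entropy-coding step concentrates the symbol distribution around small values, which is what makes the Huffman stage pay off on road geometry.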
894.
Given a graph with edges colored Red and Blue, we study the problem of sampling and approximately counting the number of matchings with exactly k Red edges. We solve the problem of estimating the number of perfect matchings with exactly k Red edges for dense graphs. We study a Markov chain on the space of all matchings of a graph that favors matchings with k Red edges. We show that it is rapidly mixing using non-traditional canonical paths that can backtrack. We show that this chain can be used to sample matchings in the 2-dimensional toroidal lattice of any fixed size with k Red edges, where the horizontal edges are Red and the vertical edges are Blue. An extended abstract appeared in J.R. Correa, A. Hevia and M.A. Kiwi (eds.) Proceedings of the 7th Latin American Theoretical Informatics Symposium, LNCS 3887, pp. 190–201, Springer, 2006. N. Bhatnagar's and D. Randall's research was supported in part by NSF grants CCR-0515105 and DMS-0505505. V.V. Vazirani's research was supported in part by NSF grants 0311541, 0220343 and CCR-0515186. N. Bhatnagar's and E. Vigoda's research was supported in part by NSF grant CCR-0455666.
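As a rough illustration of the kind of chain the abstract studies, the sketch below runs a Metropolis-style walk over the matchings of a small edge-colored graph using standard add/delete/slide moves. The weight w(M) = lam^(-|red(M) - k|), which biases the walk toward matchings with exactly k Red edges, is an assumed choice and not necessarily the exact chain or canonical-path construction analyzed in the paper.

```python
# Hedged sketch of a Metropolis-style Markov chain on the matchings of an
# edge-colored graph that favors matchings with exactly k Red edges.
import random

def red_count(matching, color):
    return sum(1 for e in matching if color[e] == "R")

def step(matching, edges, color, k, lam=2.0):
    """One transition. `matching` is a set of frozenset({u, v}) edges."""
    e = random.choice(edges)
    u, v = tuple(e)
    matched = {x for f in matching for x in f}
    proposal = set(matching)
    if e in matching:
        proposal.remove(e)                         # delete move
    elif u not in matched and v not in matched:
        proposal.add(e)                            # add move
    elif (u in matched) != (v in matched):
        x = u if u in matched else v               # slide move
        old = next(f for f in matching if x in f)
        proposal.remove(old)
        proposal.add(e)
    else:
        return matching                            # both endpoints matched: stay
    # Assumed weighting: favor matchings whose Red-edge count is close to k.
    w_old = lam ** -abs(red_count(matching, color) - k)
    w_new = lam ** -abs(red_count(proposal, color) - k)
    if random.random() < min(1.0, w_new / w_old):  # Metropolis acceptance
        return proposal
    return matching

if __name__ == "__main__":
    # Tiny 4-cycle example: horizontal edges Red, vertical edges Blue.
    E = [frozenset(e) for e in [(0, 1), (2, 3), (0, 2), (1, 3)]]
    color = {E[0]: "R", E[1]: "R", E[2]: "B", E[3]: "B"}
    M = set()
    for _ in range(10000):
        M = step(M, E, color, k=1)
    print(sorted(tuple(sorted(e)) for e in M), "red edges:", red_count(M, color))
```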
895.
In real-world classification problems, different types of misclassification errors often have asymmetric costs, thus demanding cost-sensitive learning methods that attempt to minimize average misclassification cost rather than plain error rate. Instance weighting and post hoc threshold adjusting are two major approaches to cost-sensitive classifier learning. This paper compares the effects of these two approaches on several standard, off-the-shelf classification methods. The comparison indicates that the two approaches lead to similar results for some classification methods, such as Naïve Bayes, logistic regression, and backpropagation neural networks, but very different results for other methods, such as decision tree, decision table, and decision rule learners. The findings from this research have important implications for the selection of the cost-sensitive classifier learning approach as well as for the interpretation of a recently published finding about the relative performance of Naïve Bayes and decision trees.
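A small scikit-learn sketch makes the two approaches concrete. The cost values, the synthetic data, and the choice of logistic regression are assumptions for illustration, not the paper's experimental setup.

```python
# Hedged illustration of the two approaches compared in the abstract:
# (1) class/instance weighting at training time vs. (2) post hoc threshold
# adjustment on a model trained with uniform costs.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

C_FP, C_FN = 1.0, 10.0        # assumed asymmetric misclassification costs

X, y = make_classification(n_samples=4000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def avg_cost(y_true, y_pred):
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return (fp * C_FP + fn * C_FN) / len(y_true)

# Approach 1: weight the expensive class during training.
weighted = LogisticRegression(max_iter=1000, class_weight={0: C_FP, 1: C_FN})
weighted.fit(X_tr, y_tr)
cost_weighted = avg_cost(y_te, weighted.predict(X_te))

# Approach 2: train normally, then move the decision threshold to the
# cost-minimizing value t* = C_FP / (C_FP + C_FN).
plain = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
threshold = C_FP / (C_FP + C_FN)
cost_thresholded = avg_cost(
    y_te, (plain.predict_proba(X_te)[:, 1] >= threshold).astype(int)
)

print(f"average cost, instance weighting:   {cost_weighted:.3f}")
print(f"average cost, threshold adjustment: {cost_thresholded:.3f}")
```

For a well-calibrated probabilistic model such as logistic regression, the two routes tend to give similar average costs, which is consistent with the comparison the abstract reports for that family of methods.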
896.
We study the problem of segmenting a sequence into k pieces so that the resulting segmentation satisfies monotonicity or unimodality constraints. Unimodal functions can be used to model phenomena in which a measured variable first increases to a certain level and then decreases. We combine a well-known unimodal regression algorithm with a simple dynamic-programming approach to obtain an optimal quadratic-time algorithm for the problem of unimodal k-segmentation. In addition, we describe a more efficient greedy-merging heuristic that is experimentally shown to give solutions very close to the optimal. As a concrete application of our algorithms, we describe methods for testing if a sequence behaves unimodally or not. The methods include segmentation error comparisons, permutation testing, and a BIC-based scoring scheme. Our experimental evaluation shows that our algorithms and the proposed unimodality tests give very intuitive results, for both real-valued and binary data. Niina Haiminen received the M.Sc. degree from the University of Helsinki in 2004. She is currently a Graduate Student at the Department of Computer Science of University of Helsinki, and a Researcher at the Basic Research Unit of Helsinki Institute for Information Technology. Her research interests include algorithms, bioinformatics, and data mining. Aristides Gionis received the Ph.D. degree from Stanford University in 2003, and he is currently a Senior Researcher at the Basic Research Unit of Helsinki Institute for Information Technology. His research experience includes summer internship positions at Bell Labs, AT&T Labs, and Microsoft Research. His research areas are data mining, algorithms, and databases. Kari Laasonen received the M.Sc. degree in Theoretical Physics in 1995 from the University of Helsinki. He is currently a Graduate Student in Computer Science at the University of Helsinki and a Researcher at the Basic Research Unit of Helsinki Institute for Information Technology. His research is focused on algorithms and data analysis methods for pervasive computing.
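The dynamic-programming component mentioned in the abstract can be illustrated with the standard least-squares k-segmentation recurrence below. The unimodal-regression step that enforces the monotonicity/unimodality constraints is omitted, so this is only an assumed skeleton, not the authors' full algorithm.

```python
# Minimal DP sketch for k-segmentation: split a sequence into k contiguous
# pieces, each summarized by its mean, minimizing total squared error.
import numpy as np

def k_segmentation(x, k):
    """Return (optimal SSE, list of k segments as (start, end) index pairs)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Prefix sums give the squared error of any segment x[i:j] in O(1).
    s1 = np.concatenate([[0.0], np.cumsum(x)])
    s2 = np.concatenate([[0.0], np.cumsum(x * x)])

    def sse(i, j):                      # squared error of x[i:j] around its mean
        return (s2[j] - s2[i]) - (s1[j] - s1[i]) ** 2 / (j - i)

    E = np.full((k + 1, n + 1), np.inf)
    back = np.zeros((k + 1, n + 1), dtype=int)
    E[0][0] = 0.0
    for p in range(1, k + 1):
        for j in range(p, n + 1):
            for i in range(p - 1, j):
                cand = E[p - 1][i] + sse(i, j)
                if cand < E[p][j]:
                    E[p][j], back[p][j] = cand, i

    # Recover segment boundaries by walking the backpointers.
    segments, j = [], n
    for p in range(k, 0, -1):
        i = back[p][j]
        segments.append((i, j))
        j = i
    return E[k][n], segments[::-1]

if __name__ == "__main__":
    seq = [1, 1, 2, 8, 9, 9, 8, 3, 2, 1]       # rises then falls (unimodal-ish)
    err, segs = k_segmentation(seq, k=3)
    print(f"SSE = {err:.2f}, segments = {segs}")
```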
897.
In this paper, we describe an implementation for demonstrating the effectiveness of architectures for real-time multi-agent systems. The implementation provides a simulation of a simplified RoboCup Search and Rescue environment, with unexpected events, and includes a simulator for both a real-time operating system and a CPU. We present experimental evidence to demonstrate the benefit of the implementation in the context of a particular hybrid architecture for multi-agent systems that allows certain agents to remain fully autonomous while others are fully controlled by a coordinating agent. In addition, we discuss the value of the implementation for testing any models for the construction of real-time multi-agent systems and include a comparison to related work.
Corresponding author: Robin Cohen
898.
Learning often occurs through comparing. In classification learning, in order to compare data groups, most existing methods compare either raw instances or learned classification rules against each other. This paper takes a different approach, namely conceptual equivalence: groups are equivalent if their underlying concepts are equivalent, even though their instance spaces do not necessarily overlap and their rule sets do not necessarily present the same appearance. A new methodology of comparing is proposed that learns a representation of each group's underlying concept and cross-examines one group's instances against the other group's concept representation. The innovation is fivefold. First, it is able to quantify the degree of conceptual equivalence between two groups. Second, it is able to retrace the source of discrepancy at two levels: an abstract level of underlying concepts and a specific level of instances. Third, it applies to numeric data as well as categorical data. Fourth, it circumvents direct comparisons between (possibly a large number of) rules that demand substantial effort. Fifth, it reduces dependency on the accuracy of the employed classification algorithms. Empirical evidence suggests that this new methodology is effective and yet simple to use in scenarios such as noise cleansing and concept-change learning.
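The cross-examination step can be sketched as follows: fit a classifier on each group as a stand-in for its concept representation, then score each group's instances under the other group's model. The use of a decision tree and the symmetric-agreement score are assumptions for illustration, not the paper's exact methodology.

```python
# Hedged sketch of cross-examination between two data groups: learn a concept
# representation per group, then evaluate each group's instances against the
# *other* group's representation. Symmetric agreement serves as a rough degree
# of conceptual equivalence.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def conceptual_equivalence(Xa, ya, Xb, yb):
    """Return a score in [0, 1]: 1 means the two groups' concepts fully agree."""
    concept_a = DecisionTreeClassifier(random_state=0).fit(Xa, ya)
    concept_b = DecisionTreeClassifier(random_state=0).fit(Xb, yb)
    # Cross-exam: how well does each group's concept explain the other group?
    agree_b_under_a = np.mean(concept_a.predict(Xb) == yb)
    agree_a_under_b = np.mean(concept_b.predict(Xa) == ya)
    return (agree_b_under_a + agree_a_under_b) / 2.0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two groups drawn from the same concept (label = x0 > 0) but with
    # non-overlapping instance spaces: one cluster shifted left, one right.
    Xa = rng.normal(loc=(-1, 0), size=(200, 2)); ya = (Xa[:, 0] > 0).astype(int)
    Xb = rng.normal(loc=(+1, 0), size=(200, 2)); yb = (Xb[:, 0] > 0).astype(int)
    print(f"conceptual equivalence ~ {conceptual_equivalence(Xa, ya, Xb, yb):.2f}")
```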
899.
Abstract: Navigation systems, location-based search services such as Google Maps, interactive augmented-reality games, mobile-phone-based FindMe services: these few examples already show how directly or indirectly spatially referenced services increasingly pervade our everyday lives. We observe how the use of geodata is gradually expanding from the personal desktop geographic information system (GIS), via in-house access to the geodatabase, to web access by clients and servers of diverse provenance ("Web-GIS"). Classical tasks such as map production and information retrieval recede into the background compared with complex, situation-dependent combinations of geodata with other data. This is evidently accompanied by a paradigm shift from simple, passive provision of geodata via the WWW to functionally more complex geoservices. If such value-added services are to be orchestrated openly and flexibly, geodata must be queryable ad hoc across systems. Open standards are indispensable for this, not only at the level of data exchange formats but also for the access services; this geoservice standardization is provided by the Open Geospatial Consortium (OGC). In this article we give an overview of the modeling of web-based geoservices on the basis of selected OGC standards. In this context, modeling means designing the functionality and interfaces of a potentially complex service on the basis of open standards and their underlying services and operations. Standards for metadata, vector, and raster data access are discussed, as well as an application-oriented standard for sensor data.
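To make the access services concrete, the snippet below issues a WFS GetFeature request (vector access) and a WMS GetMap request (raster access) using standard OGC key-value parameters; the server URL and layer names are hypothetical placeholders, not endpoints from the article.

```python
# Illustrative sketch of the kind of OGC access services the article discusses:
# WFS for vector features and WMS for rendered raster maps. The service URLs
# and the feature type/layer names are placeholders.
import requests

WFS_ENDPOINT = "https://example.org/geoserver/wfs"   # hypothetical service
WMS_ENDPOINT = "https://example.org/geoserver/wms"   # hypothetical service

# Vector access: ask the WFS for road features inside a bounding box (GML result).
wfs_params = {
    "service": "WFS",
    "version": "1.1.0",
    "request": "GetFeature",
    "typeName": "demo:roads",                # hypothetical feature type
    "bbox": "8.4,47.3,8.6,47.4,EPSG:4326",
    "maxFeatures": "50",
}
gml = requests.get(WFS_ENDPOINT, params=wfs_params, timeout=30).text

# Raster access: ask the WMS to render the same layer as a PNG map image.
wms_params = {
    "service": "WMS",
    "version": "1.1.1",
    "request": "GetMap",
    "layers": "demo:roads",                  # hypothetical layer
    "styles": "",
    "srs": "EPSG:4326",
    "bbox": "8.4,47.3,8.6,47.4",
    "width": "512",
    "height": "512",
    "format": "image/png",
}
png_bytes = requests.get(WMS_ENDPOINT, params=wms_params, timeout=30).content
print(len(gml), "bytes of GML;", len(png_bytes), "bytes of PNG")
```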
900.
Abstract: In the age of the information society, it was postulated, spatial distances would no longer matter and our spatial mobility would decline. In recent years, however, mobility, and leisure mobility in particular, has increased. The provision of location-based services supports and encourages this behavior.