991.
This paper presents an incremental neural network (INeN) for the segmentation of tissues in ultrasound images. The performances of the INeN and the Kohonen network are investigated for ultrasound image segmentation. The elements of the feature vectors are formed individually using the discrete Fourier transform (DFT) and the discrete cosine transform (DCT). The training set for the Kohonen network is formed from blocks of 4×4 pixels (regions of interest, ROIs) on five different tissues designated by an expert. The training set of the INeN is formed from randomly selected 4×4-pixel ROIs in the image. The performances of the 2D-DFT and the 2D-DCT are examined comparatively for the segmentation of ultrasound images.
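The DCT half of the feature extraction above can be sketched as follows. This is a minimal numpy illustration, assuming an orthonormal 2D-DCT-II of each 4×4 ROI flattened into a 16-element feature vector; the paper's exact coefficient selection and normalization are not specified here.

```python
import numpy as np

def dct2_matrix(n=4):
    # Orthonormal DCT-II basis matrix C, so that coeffs = C @ block @ C.T
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    C = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] *= 1.0 / np.sqrt(n)       # DC row scaling
    C[1:, :] *= np.sqrt(2.0 / n)      # AC row scaling
    return C

def dct_features(block):
    # 2D-DCT of one ROI (e.g. a 4x4 pixel block), flattened to a vector
    C = dct2_matrix(block.shape[0])
    coeffs = C @ block @ C.T
    return coeffs.ravel()
```

For a constant 4×4 block, only the DC coefficient (index 0) is nonzero, which is a quick sanity check on the basis.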
992.
In this article we propose a case-base maintenance methodology based on the idea of transferring knowledge between knowledge containers in a case-based reasoning (CBR) system. A machine-learning technique, fuzzy decision-tree induction, is used to transform case knowledge into adaptation knowledge. By learning the more sophisticated fuzzy adaptation knowledge, many of the redundant cases can be removed. This approach is particularly useful when the case base consists of a large number of redundant cases and retrieval efficiency becomes a real concern for the user. The method of maintaining a case base from scratch, as proposed in this article, consists of four steps. First, an approach to learning feature weights automatically is used to evaluate the importance of different features in a given case base. Second, clustering of cases is carried out to identify different concepts in the case base using the acquired feature-weight knowledge. Third, adaptation rules are mined for each concept using fuzzy decision trees. Fourth, a selection strategy based on the concepts of case coverage and reachability is used to select representative cases. In order to demonstrate the effectiveness of this approach, as well as to examine the relationship between compactness and performance of a CBR system, experimental testing is carried out using the Traveling and the Rice Taste data sets. The results show that the testing case bases can be reduced by 36 and 39 percent, respectively, if we complement the remaining cases with the adaptation rules discovered using our approach. The overall accuracies of the two smaller case bases are 94 and 90 percent of the originals, respectively.
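The fourth step, coverage-based selection of representative cases, can be sketched greedily. The function below is a hypothetical simplification assuming a single coverage radius in feature space (the paper's coverage/reachability definitions and learned feature weights are richer than this): repeatedly keep the case that covers the most still-uncovered cases.

```python
import numpy as np

def coverage_select(cases, radius=1.0):
    """Greedy representative-case selection: a case 'covers' every case
    within `radius` (Euclidean distance); keep cases until all covered.
    Illustrative stand-in for coverage/reachability-based selection."""
    cases = np.asarray(cases, dtype=float)
    n = len(cases)
    dist = np.linalg.norm(cases[:, None, :] - cases[None, :, :], axis=-1)
    covers = dist <= radius            # covers[i, j]: case i covers case j
    uncovered = np.ones(n, dtype=bool)
    kept = []
    while uncovered.any():
        gains = (covers & uncovered).sum(axis=1)
        best = int(np.argmax(gains))   # case covering most uncovered cases
        kept.append(best)
        uncovered &= ~covers[best]
    return sorted(kept)
```

On two tight clusters of two cases each, the greedy pass keeps one representative per cluster.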
993.
This paper describes the implementation of evolutionary techniques for information filtering and collection from the World Wide Web. We consider the problem of building intelligent agents to facilitate a person's search for information on the Web. An intelligent agent has been developed that uses a metagenetic algorithm in order to collect and recommend Web pages that will be interesting to the user. The user's feedback on the agent's recommendations drives the learning process to adapt the user's profile to his/her interests. The software agent utilizes the metagenetic algorithm to explore the search space of user interests. Experimental results are presented in order to demonstrate the suitability of the metagenetic approach for the Web.
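A drastically simplified sketch of the idea: a genetic algorithm evolves a binary interest profile against user feedback, and a meta level searches over the GA's own parameters. Here the meta level is reduced to a grid over candidate mutation rates, whereas a true metagenetic algorithm evolves those parameters genetically; all names, sizes, and rates below are illustrative assumptions.

```python
import random

def run_ga(fitness, length, pop_size=20, gens=30, mut_rate=0.05, seed=0):
    """Plain GA over binary interest profiles; elitist truncation selection."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)        # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (rng.random() < mut_rate) for g in child]  # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

def meta_ga(fitness, length, rates=(0.01, 0.05, 0.2)):
    # Meta level (simplified): pick the mutation rate whose GA run
    # produces the fittest profile.
    best = None
    for r in rates:
        cand = run_ga(fitness, length, mut_rate=r, seed=1)
        if best is None or fitness(cand) > fitness(best):
            best = cand
    return best
```

With fitness defined as agreement with a target profile inferred from feedback, the meta loop reliably recovers most of the target bits.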
994.
This paper focuses on a numerical method to solve the dynamic equilibrium of a humanoid robot during the walking cycle, including the gait initiation process. It is based on a multi-chain strategy and a dynamic control/command architecture previously developed by Gorce. The strategy relies on correcting the trunk center-of-mass acceleration and distributing the forces exerted by the limbs on the trunk. The latter is performed by means of a linear programming (LP) method. We study the gait initiation process when a subject, initially in a quiet erect stance posture, performs a walking cycle. In this paper, we propose to adjust the method for the multiphase (from double support to single support) and multicriteria features of the studied movement. This is done by adapting specific constraints and criteria in order to ensure the global stability of the humanoid robot throughout task execution. For that, we use a Real-Time Criteria and Constraints Adaptation method. Simulation results are presented to demonstrate the influence of criteria and constraints on dynamic stability.
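The LP force-distribution step can be illustrated with a deliberately tiny example. The sketch below is not Gorce's formulation: it assumes a single vertical equilibrium constraint and per-limb force bounds (all names and numbers illustrative), and minimizes total limb effort with `scipy.optimize.linprog`.

```python
import numpy as np
from scipy.optimize import linprog

def distribute_vertical_forces(mass, trunk_acc_z, f_max, g=9.81):
    """Toy LP force distribution: split the net vertical force required
    for trunk dynamic equilibrium among the supporting limbs, minimizing
    the sum of limb forces subject to per-limb capacity bounds."""
    total = mass * (g + trunk_acc_z)          # required net vertical force (N)
    n = len(f_max)
    res = linprog(
        c=np.ones(n),                         # minimize total limb effort
        A_eq=np.ones((1, n)), b_eq=[total],   # equilibrium: forces sum to total
        bounds=[(0.0, fm) for fm in f_max],   # each limb within its capacity
        method="highs",
    )
    return res.x if res.success else None
```

During double support both limbs carry part of the load; shrinking one limb's bound toward zero reproduces the transfer toward single support.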
995.
This paper reports on the performance evaluation of a dental handpiece in simulated clinical finishing using a novel two-degrees-of-freedom (2DOF) in vitro apparatus. The instrumented apparatus consisted of a two-dimensional computer-controlled coordinate worktable carrying a dental handpiece, a piezoelectric force dynamometer, and a high-speed data acquisition and signal-conditioning system for simulating clinical operations and monitoring the dental finishing process. The performance of the dental handpiece was experimentally evaluated with respect to rotational speed, torque, and specific finishing energy under the applied clinical finishing conditions. The results show that the rotational speed of the dental handpiece decreased with increasing depth of cut or feed rate at constant clinically applied air pressure and water flow rate. It also decreased with increasing tangential and normal finishing forces. The specific finishing energy decreased with an increase in either depth of cut or feed rate, while the finishing torque increased as either was increased. These results provide guidance for the proper application of dental handpieces in clinical practice.
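The quantities above are linked by a standard machining relation: specific finishing energy is spindle power (torque × angular speed) divided by the material removal rate. The sketch below uses that textbook relation with illustrative units; the paper's measured values and exact definitions may differ.

```python
import math

def specific_finishing_energy(torque_nm, rpm, depth_m, width_m, feed_m_s):
    """Specific finishing energy in J/m^3: power / material removal rate."""
    omega = 2.0 * math.pi * rpm / 60.0      # spindle speed, rad/s
    power = torque_nm * omega               # spindle power, W
    mrr = depth_m * width_m * feed_m_s      # material removal rate, m^3/s
    return power / mrr
```

At constant torque and speed, doubling the depth of cut doubles the removal rate and halves the specific energy, consistent with the reported trend (in practice torque also rises with depth of cut, which moderates the effect).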
996.
We present the system for maintaining the versions of two packages: TAUOLA for τ-lepton decays and PHOTOS for radiative corrections in decays. The following features can be chosen in an automatic or semi-automatic way: (1) format of the HEPEVT common block; (2) version of the physics input (for TAUOLA): as published; as initialized by the CLEO collaboration; as initialized by the ALEPH collaboration (it is suggested to use this version only with the help of the collaboration's advice); a new optional parametrization of matrix elements in the 4π decay channels; (3) type of application: stand-alone; universal interface based on the information stored in the HEPEVT common block, including longitudinal spin effects in the elementary Z/γ* → τ+τ− process; an extended version of the standard universal interface including full spin effects in the H/A → τ+τ− decay; an interface for the KKMC Monte Carlo; (4) random number generators; (5) compiler options. The last section of the paper contains documentation of the program updates introduced over the last two years.

Program summary

Title of program: tauola-photos-F, release II
Catalogue identifier: ADXO_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXO_v1_0
Programs obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computer: PC running GNU/Linux operating system
Programming languages and tools used: CPP (standard C-language preprocessor), GNU Make builder tool, also FORTRAN compiler
No. of lines in distributed program, including test data, etc.: 194 118
No. of bytes in distributed program, including test data, etc.: 2 481 234
Distribution format: tar.gz
Catalogue identifier: ADXO_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXO_v2_0
No. of lines in distributed program, including test data, etc.: 308 235
No. of bytes in distributed program, including test data, etc.: 2 988 363
Distribution format: tar.gz
Does the new version supersede the previous version: Yes
Nature of the physical problem: The code of Monte Carlo generators often has to be tuned to the needs of large HEP Collaborations and experiments. Usually, these modifications do not introduce important changes in the algorithm, but rather modify the initialization and form of the hadronic current in τ decays. The format of the event record (HEPEVT common block) used to exchange information between building blocks of Monte Carlo systems often needs modification. Thus, there is a need to maintain various, slightly modified versions of the same code. The package presented here allows the production of ready-to-compile versions of TAUOLA [S. Jadach, Z. Wąs, R. Decker, J.H. Kühn, Comput. Phys. Comm. 76 (1993) 361; A.E. Bondar, et al., Comput. Phys. Comm. 146 (2002) 139] and PHOTOS [E. Barberio, Z. Wąs, Comput. Phys. Comm. 79 (1994) 291] Monte Carlo generators with appropriate demonstration programs. The new algorithm, the universal interface of TAUOLA to work with the HEPEVT common block, is also documented here.
Finally, minor technical improvements of TAUOLA and PHOTOS are also listed.
Method of solution: The standard UNIX tool, the C-language preprocessor, is used to produce a ready-to-distribute version of the TAUOLA and PHOTOS code. The final FORTRAN code is produced from the library of 'pre-code' that is included in the package.
Reasons for new version: The functionality of TAUOLA and PHOTOS changed over the last two years. The changes, and their reasons, are documented in Section 9 and in our new papers cited in that section.
Additional comments: The updated version includes new features described in Section 9 of the paper. PHOTOS and TAUOLA were first submitted to the library as separate programs. Summary details of these previous programs are obtainable from the CPC Program Library.
Typical running time: Depends on the speed of the computer used and the demonstration program chosen. Typically a few seconds.
997.
998.
This paper presents an ontology-based approach for the specification (using ontologies as a definition language) and reconciliation (using ontologies as a mediation tool) of contexts of Web services. Web services are independent components that can be triggered and composed for the satisfaction of user needs (e.g., hotel booking). Because Web services originate from different providers, their composition faces the obstacle of the context heterogeneity featuring these Web services. Unawareness of this context heterogeneity during Web service composition and execution results in a lack of quality and relevancy of the information that permits tracking the composition, monitoring the execution, and handling exceptions.
999.
We consider the problem of efficiently sampling Web search engine query results. In turn, using a small random sample instead of the full set of results leads to efficient approximate algorithms for several applications, such as:
•  Determining the set of categories in a given taxonomy spanned by the search results;
•  Finding the range of metadata values associated with the result set in order to enable “multi-faceted search”;
•  Estimating the size of the result set;
•  Data mining associations to the query terms.
We present and analyze efficient algorithms for obtaining uniform random samples, applicable to any search engine that is based on posting lists and document-at-a-time evaluation. (To our knowledge, all popular Web search engines, for example, Google, Yahoo Search, MSN Search, and Ask, belong to this class.) Furthermore, our algorithm can be modified to follow the modern object-oriented approach whereby posting lists are viewed as streams equipped with a next method, and the next method for Boolean and other complex queries is built from the next methods of primitive terms. In our case, we show how to construct a basic sample-next(p) method that samples term posting lists with probability p, and show how to construct sample-next(p) methods for Boolean operators (AND, OR, WAND) from the primitive methods. Finally, we test the efficiency and quality of our approach on both synthetic and real-world data. A preliminary version of this work has appeared in [3]. Work performed while A. Anagnostopoulos and A.Z. Broder were at IBM T. J. Watson Research Center.
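The term-level sample-next(p) idea can be sketched as follows, under our reading of the abstract: instead of flipping a coin for every posting, jump ahead by geometrically distributed skips so that each docID survives independently with probability p. The composition for Boolean operators, and all names below, are illustrative; this is not the paper's implementation.

```python
import math
import random

def sample_posting_list(postings, p, seed=0):
    """Emit each docID of a sorted posting list independently with
    probability p, using geometric jumps rather than per-posting coin
    flips (the key trick behind a sample-next(p) stream method)."""
    if p <= 0.0:
        return []
    rng = random.Random(seed)
    out = []
    i = -1
    while True:
        if p >= 1.0:
            i += 1                       # degenerate case: keep everything
        else:
            # geometric skip: floor(log U / log(1-p)) failures, then a success
            i += 1 + int(math.log(rng.random()) / math.log(1.0 - p))
        if i >= len(postings):
            break
        out.append(postings[i])
    return out
```

With p = 1 the full list is returned; with p = 0.5 roughly half the postings survive, still in sorted order, so the sample can feed the same document-at-a-time machinery as a real posting list.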
1000.
In this paper we consider the single machine batch scheduling problem with family setup times and release dates to minimize makespan. We show that this problem is strongly NP-hard, and give two dynamic programming algorithms for the problem, with running times that depend on n, m, k, and P, where n is the number of jobs, m is the number of families, k is the number of distinct release dates, and P is the sum of the setup times of all the families and the processing times of all the jobs. We further give a heuristic with a performance ratio of 2. We also give a polynomial-time approximation scheme (PTAS) for the problem.
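To make the objective concrete, the following sketch evaluates the makespan of a given job sequence under one simple reading of the model: a family setup is incurred whenever the family changes (and before the first job), and a job cannot start before its release date. The paper's exact batching assumptions may differ; all names are illustrative.

```python
def makespan(jobs, setup, sequence):
    """Makespan of `sequence` on a single machine with family setups and
    release dates.  jobs: id -> (family, release, processing);
    setup: family -> setup time."""
    t = 0
    prev_family = None
    for j in sequence:
        fam, release, proc = jobs[j]
        t = max(t, release)          # wait for the job's release date
        if fam != prev_family:
            t += setup[fam]          # setup on every family switch
        t += proc
        prev_family = fam
    return t
```

Grouping jobs of the same family into one batch saves a setup, which is why sequencing interacts with batching: with two jobs of family A and one of family B, the grouped order beats the alternating one.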