131.
Taking the bank-side national treasury information system as an example, and aiming to raise the degree of operations-and-maintenance automation in production, this paper studies characteristics of system operation such as online transactions, batch runs, and application availability. Based on an analysis of the monitoring requirements, a three-layer monitoring architecture is designed, consisting of a data-collection layer, a data-presentation layer, and an application-presentation layer, which processes the collected operational data. Monitoring results and alerts are delivered through a web server, the mail system, and a message-service platform. The design fully meets current operations-and-maintenance needs and safeguards the secure, normal operation of the treasury information system.
132.
The shared data standard for Building Information Models (BIM) is the IFC standard, but IFC models exported by existing software lack key data and cannot satisfy the data requirements of nonlinear seismic analysis of structures. To address this problem, an IFC-based representation of the data needed for elastoplastic seismic analysis of buildings is proposed, and a C++ conversion program is written to correctly translate the data contained in a BIM into models for different analysis programs (such as Marc and OpenSees), so that elastoplastic seismic analysis can be carried out with different structural analysis software. The method provides a reference for performing elastoplastic seismic analysis of buildings from BIM data.
133.
134.
To improve the management and efficiency of key construction projects, a web-based information management system for such projects is proposed, based on a detailed analysis of their business processes and user requirements. The system adopts a typical three-tier architecture, giving it good extensibility and flexibility. By managing key-project information scientifically through this system, the drawbacks of traditional management methods can be effectively overcome and the efficiency of key-project information management greatly improved.
135.
This paper systematically studies parameter selection in constructing distributional representations of Chinese words. Six parameters are compared experimentally, and the quality of the Chinese word representations obtained under different parameter settings is evaluated on a Chinese semantic-similarity task. The results show that with suitable parameter choices, distributional word representations achieve high performance on Chinese semantic similarity; indeed, the quality of these high-dimensional distributional representations can even exceed that of the currently popular low-dimensional representations produced by neural networks (Skip-gram) or matrix factorization (GloVe).
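As a minimal illustration of the distributional approach evaluated above, the sketch below builds high-dimensional word vectors from a toy co-occurrence matrix and scores word similarity by cosine. PPMI weighting is one common choice for such representations and is assumed here for illustration; the paper's actual six parameters and corpus are not reproduced.

```python
import numpy as np

def ppmi(counts):
    """Positive PMI weighting of a word-by-context co-occurrence matrix."""
    total = counts.sum()
    row = counts.sum(axis=1, keepdims=True)
    col = counts.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log((counts * total) / (row * col))
    pmi[~np.isfinite(pmi)] = 0.0       # zero counts -> -inf -> 0
    return np.maximum(pmi, 0.0)        # keep only positive associations

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy co-occurrence counts: three words over four context words.
# Rows 0 and 1 share contexts, so their vectors should be more similar
# to each other than either is to row 2.
counts = np.array([[4., 1., 0., 2.],
                   [3., 1., 0., 2.],
                   [0., 2., 5., 0.]])
vecs = ppmi(counts)
```

In a real evaluation, each parameter the paper studies (context window, weighting scheme, and so on) would change how `counts` is built and transformed before the cosine comparison.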
136.
137.
Recently, many local-feature-based methods have been proposed to learn better high-level representations of human behaviour. Most previous research ignores the structural information that exists among local features within the same video sequence, although it is an important clue for distinguishing ambiguous actions. To address this issue, we propose Laplacian group sparse coding for human-behaviour representation. Unlike traditional methods such as sparse coding, our approach encodes a group of relevant features simultaneously while allowing as few atoms as possible to participate in the approximation, so that video-level sparsity is guaranteed. By incorporating Laplacian regularization, the method ensures that closely related local features receive similar approximations, so the structural information is preserved. A compact yet discriminative representation of human behaviour is thus achieved. Moreover, the objective of our model admits a closed-form solution, which reduces the computational cost significantly. Promising results on several popular benchmark datasets demonstrate the efficiency and effectiveness of our approach.
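The encoding scheme described above can be sketched numerically: a reconstruction term, an L2,1 group-sparsity term that lets only a few atoms serve the whole group, and a Laplacian term that penalises dissimilar codes for related features. The formulation below is a generic sketch under assumed symbols (X features, D dictionary, A codes, W affinities) and illustrative weights, not the paper's exact model or its closed-form solver.

```python
import numpy as np

def lgsc_objective(X, D, A, W, lam=0.1, beta=0.1):
    """Value of a sketched Laplacian group sparse coding objective."""
    recon = np.linalg.norm(X - D @ A, "fro") ** 2   # reconstruction error
    group = np.linalg.norm(A, axis=1).sum()         # L2,1: few atoms used by the group
    L = np.diag(W.sum(axis=1)) - W                  # graph Laplacian from affinities
    smooth = np.trace(A @ L @ A.T)                  # similar features -> similar codes
    return recon + lam * group + beta * smooth

# Two local features (columns of X) that are strongly related (W entry = 1).
X = np.array([[1.0, 1.0], [0.0, 0.0]])
D = np.eye(2)                                  # trivial dictionary for the sketch
W = np.array([[0.0, 1.0], [1.0, 0.0]])
A_same = np.array([[1.0, 1.0], [0.0, 0.0]])    # identical codes: Laplacian term vanishes
A_diff = np.array([[1.0, 0.0], [0.0, 1.0]])    # dissimilar codes: Laplacian term penalised
```

With `A_same`, turning the Laplacian weight `beta` up costs nothing, while with `A_diff` it raises the objective, which is exactly the structural preference the regularizer encodes.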
138.
Attributing authorship of documents with unknown creators has been studied extensively for natural-language text such as essays and literature, but less so for non-natural languages such as computer source code. Previous attempts at attributing authorship of source code can be categorised by two attributes: the software features used for the classification, either strings of n tokens/bytes (n-grams) or software metrics; and the classification technique that exploits those features, either information-retrieval ranking or machine learning. The results of existing studies, however, are not directly comparable, as all use different test beds and evaluation methodologies, making it difficult to assess which approach is superior. This paper summarises all previous techniques for source-code authorship attribution, implements feature sets motivated by the literature, and applies information-retrieval ranking methods or machine classifiers to each approach. Importantly, all approaches are tested on identical collections spanning varying programming languages and author types. Our conclusions are as follows: (i) ranking and machine-classifier approaches are around 90% and 85% accurate, respectively, on a one-in-10 classification problem; (ii) the byte-level n-gram approach is best used with different parameters to those previously published; (iii) neural networks and support vector machines were the most accurate of the eight machine classifiers evaluated; (iv) the use of n-gram features in combination with machine classifiers shows promise, but scalability problems must still be overcome; and (v) approaches based on information-retrieval techniques are currently more accurate than approaches based on machine learning. Copyright © 2012 John Wiley & Sons, Ltd.
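To make the byte-level n-gram feature family concrete, here is a minimal sketch that profiles overlapping byte trigrams and compares two samples by the cosine of their profiles. The sample snippets, the choice n=3, and the cosine ranking are illustrative stand-ins, not the exact feature sets or ranking functions evaluated in the paper.

```python
import math
from collections import Counter

def byte_ngram_profile(source: str, n: int = 3) -> Counter:
    """Count overlapping byte n-grams of a source sample (n is tunable)."""
    data = source.encode("utf-8")
    return Counter(data[i:i + n] for i in range(len(data) - n + 1))

def profile_similarity(a: str, b: str, n: int = 3) -> float:
    """Cosine similarity of two byte n-gram profiles: a simple stand-in for
    the ranking functions compared in the survey."""
    pa, pb = byte_ngram_profile(a, n), byte_ngram_profile(b, n)
    num = sum(pa[g] * pb[g] for g in pa.keys() & pb.keys())
    den = (math.sqrt(sum(v * v for v in pa.values()))
           * math.sqrt(sum(v * v for v in pb.values())))
    return num / den if den else 0.0

# Hypothetical samples: two Python loops in a similar style, and a C one-liner.
same_author = 'for i in range(10):\n    print("value", i)\n'
same_style  = 'for j in range(20):\n    print("total", j)\n'
other_style = 'while(1){printf("%d",x);x++;}\n'
```

An attribution system would rank a query sample against stored per-author profiles with a function like `profile_similarity`; stylistically close samples share many more byte trigrams than unrelated ones.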
139.
This paper addresses a multi-supplier, multi-affected-area, multi-relief, multi-vehicle relief allocation problem in disaster relief logistics. A multi-objective optimisation model based on updates of disaster-scenario information is proposed to coordinate efficiency and equity through timely, appropriate decisions on issues such as vehicle routing and relief allocation. Because decision making requires accurate disaster information, an optimal stopping rule is also proposed to determine the optimum period of delay before responding to a disaster. The main contribution of the paper is solving the relief allocation problem in a novel way, by linking operational research with statistical decision making and Bayesian sequential analysis. Finally, a case study based on the post-disaster rescue in Eastern China after super typhoon Saomai tests the applicability and shows the potential advantages of the proposed model.
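As a toy illustration of an optimal stopping rule driven by Bayesian updating, the sketch below keeps collecting damage reports until the posterior uncertainty about relief demand falls below a threshold, then responds. The conjugate-normal update and every number in it are illustrative assumptions, not the paper's model.

```python
def stopping_period(num_reports, prior_var=10.0, obs_var=4.0, std_threshold=1.0):
    """Toy stopping rule: return the period at which to stop waiting and
    respond, once the posterior std of estimated demand is small enough.
    (Conjugate normal-normal updates; all numbers are illustrative.)"""
    var = prior_var
    for t in range(1, num_reports + 1):
        var = 1.0 / (1.0 / var + 1.0 / obs_var)   # precisions add with each report
        if var ** 0.5 < std_threshold:
            return t                              # estimate reliable enough: respond now
    return num_reports                            # reports exhausted: respond anyway
```

The trade-off the rule captures is exactly the one in the abstract: waiting longer yields more accurate disaster information, but delays the response; the threshold sets where that balance is struck.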
140.
Product development today is becoming increasingly knowledge intensive. In particular, design teams face considerable challenges in making effective use of growing amounts of information. One approach to supporting product-information retrieval and reuse is case-based reasoning (CBR), in which problems are solved "by using or adapting solutions to old problems." In CBR, a case includes both a representation of a problem and its solution, and similarity measures are used to identify the cases most relevant to the problem to be solved. However, most non-numeric similarity measures are based on syntactic grounds, which often fail to produce good matches when confronted with the meaning associated with the words they compare. To overcome this limitation, ontologies can be used to produce similarity measures based on semantics. This paper presents an ontology-based approach that determines the similarity between two classes using feature-based similarity measures in which features are replaced with attributes. The proposed approach is evaluated against other existing similarity measures, and its effectiveness is illustrated with a case study on product–service–system design problems.
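A feature-based similarity of the kind discussed above can be sketched with a Tversky-style set comparison over attributes. The attribute sets (`drill`, `screwdriver`, `hammer`) and the weights `alpha`/`beta` are hypothetical; the paper's ontology-derived attributes and exact measure are not reproduced here.

```python
def feature_similarity(a, b, alpha=0.5, beta=0.5):
    """Tversky-style feature-based similarity over attribute sets:
    shared attributes raise the score, unshared ones lower it."""
    a, b = set(a), set(b)
    common = len(a & b)
    denom = common + alpha * len(a - b) + beta * len(b - a)
    return common / denom if denom else 1.0   # two empty sets: trivially identical

# Hypothetical classes described by ontology attributes.
drill       = {"handheld", "rotary", "electric"}
screwdriver = {"handheld", "rotary", "manual"}
hammer      = {"handheld", "impact", "manual"}
```

In a CBR retrieval step, such a measure would rank stored cases by attribute overlap with the query problem, so a drill-like query surfaces the screwdriver case before the hammer case.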

Copyright©北京勤云科技发展有限公司  京ICP备09084417号