Paid full text: 65838 articles
Free: 7196 articles
Domestic free: 4530 articles
Articles by subject:
Electrical engineering: 7246
Technical theory: 1
General: 6418
Chemical industry: 3430
Metalworking: 3804
Machinery & instrumentation: 8275
Construction science: 4556
Mining engineering: 6156
Energy & power: 1319
Light industry: 3252
Hydraulic engineering: 1071
Petroleum & natural gas: 1084
Weapons industry: 858
Radio & electronics: 5100
General industrial technology: 3801
Metallurgy: 2508
Atomic energy technology: 194
Automation technology: 18491
Articles by year:
2024: 327
2023: 977
2022: 1934
2021: 2296
2020: 2324
2019: 1737
2018: 1592
2017: 1949
2016: 2339
2015: 2660
2014: 4501
2013: 3669
2012: 5363
2011: 5598
2010: 4278
2009: 4195
2008: 4044
2007: 4877
2006: 4316
2005: 3563
2004: 2763
2003: 2405
2002: 1921
2001: 1618
2000: 1316
1999: 970
1998: 732
1997: 632
1996: 513
1995: 446
1994: 410
1993: 268
1992: 196
1991: 153
1990: 148
1989: 141
1988: 103
1987: 42
1986: 49
1985: 30
1984: 25
1983: 23
1982: 23
1981: 8
1980: 8
1979: 14
1978: 6
1974: 5
1959: 6
1957: 5
A total of 10000 query results (search time: 0 ms)
121.
Film speed sensing method and fixed-length control for bag-making machines   Total citations: 2; self-citations: 0; citations by others: 2
A film speed sensing method for realizing film tension sensing is proposed, and a fixed-length film servo control system based on an ARM processor and an embedded real-time operating system is designed. An ingenious mechanical transformation senses changes in film length, from which a film speed sensing model with linear characteristics is established. The fixed-length servo control system was developed on an ARM7TDMI core processor running the μC/OS-II embedded real-time operating system. The core technology has been successfully applied in the control system of a dual-lane, center-seal high-speed bag-making machine, reaching a running speed of 24000 P/h (bags per hour) with a yield above 99.6%.
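At its core, the fixed-length control described above amounts to counting encoder pulses from a measuring roller and mapping them linearly to film length and speed. The sketch below illustrates that idea in Python; the encoder resolution, roller circumference, and all names are illustrative assumptions, not the paper's actual embedded implementation (which runs in C on the ARM7TDMI under μC/OS-II).

```python
# Illustrative sketch only; constants and names are assumed, not from the paper.

PULSES_PER_REV = 1024            # assumed encoder resolution
ROLLER_CIRCUMFERENCE_MM = 150.0  # assumed measuring-roller circumference

def film_length_mm(pulses: int) -> float:
    """Linear sensing model: advanced film length is proportional to pulse count."""
    return pulses * ROLLER_CIRCUMFERENCE_MM / PULSES_PER_REV

def film_speed_mm_s(pulses_in_window: int, window_s: float) -> float:
    """Film speed follows from the pulse rate over a sampling window."""
    return film_length_mm(pulses_in_window) / window_s

def fixed_length_controller(target_mm: float, pulse_ticks):
    """Emit 'CUT' whenever the accumulated film length reaches the target."""
    total = 0
    for pulses in pulse_ticks:   # pulses counted during each control tick
        total += pulses
        if film_length_mm(total) >= target_mm:
            yield "CUT"
            total = 0
        else:
            yield "RUN"
```

With these assumed constants, a 300 mm bag triggers a cut every 2048 pulses; in the real system the same arithmetic would run inside an RTOS task fed by a hardware pulse counter.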
122.
To address the problems of extracting banknote image features and improving the recognition rate, a banknote recognition method based on the reduced quaternion wavelet transform is proposed, exploiting the transform's phase properties. The method first applies skew correction and edge detection to the captured banknote image, then decomposes the image with the reduced quaternion wavelet transform and statistically analyzes the decomposition coefficients, taking the energy and standard deviation of each decomposition subband as the image's feature vector; a support vector machine finally serves as the classifier. The method was implemented on a resource-constrained embedded banknote-sorting system, and experiments show that it breaks through the recognition-rate bottleneck of traditional banknote recognition systems while meeting the sorter's real-time requirements.
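The feature pipeline — decompose the image, then use each subband's energy and standard deviation as the feature vector for an SVM — can be sketched as follows. Since the reduced quaternion wavelet transform has no off-the-shelf implementation, a standard 2-D discrete wavelet transform from PyWavelets stands in for it here; the wavelet family and decomposition level are illustrative assumptions.

```python
# Sketch of the subband energy/std feature pipeline; pywt's ordinary 2-D DWT
# replaces the paper's reduced quaternion wavelet transform for illustration.
import numpy as np
import pywt
from sklearn.svm import SVC

def subband_features(image: np.ndarray, wavelet: str = "db2", level: int = 2) -> np.ndarray:
    """Energy and standard deviation of every wavelet decomposition subband."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    bands = [coeffs[0]] + [b for detail in coeffs[1:] for b in detail]
    feats = []
    for band in bands:
        band = np.asarray(band)
        feats.append(np.sum(band ** 2))  # subband energy
        feats.append(np.std(band))       # subband standard deviation
    return np.array(feats)

# images: deskewed, edge-detected banknote images; labels: e.g. denominations
# clf = SVC(kernel="rbf").fit([subband_features(im) for im in images], labels)
```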
123.
To meet students' need for autonomous learning in online unified-examination courses, the research, design, and effective operation of a learning support service system are essential. The pilot universities have accumulated many practices for supporting learners in such courses, but systematic research and theoretical summarization are lacking; drawing on practical experience, the author constructs a basic framework for a learning support service system for online unified examinations.
124.
Given the great harm epileptic seizures inflict on patients, and to leave sufficient lead time for clinical treatment, a system model for predicting epileptic seizures is proposed. EEG data from 21 epilepsy patients were studied: permutation entropy, a feature with low algorithmic complexity, is extracted to form feature vectors, which train a support vector machine (SVM) to recognize ictal samples; a voting mechanism that accounts for differences between patients then judges the patient's current state, yielding real-time seizure prediction. The results show that 81% of seizures could be predicted, on average more than 50 minutes in advance, with a low false-alarm rate, laying a solid foundation for theoretical research on seizure prediction systems.
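Permutation entropy, the feature driving the predictor, is cheap to compute, which is what makes real-time use plausible. Below is a minimal Bandt–Pompe sketch; the embedding order m and delay tau are illustrative defaults, not the study's parameters.

```python
# Minimal permutation-entropy sketch; m and tau are assumed defaults.
import numpy as np
from math import factorial

def permutation_entropy(x, m: int = 3, tau: int = 1) -> float:
    """Normalized permutation entropy of a 1-D segment (e.g., one EEG channel)."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (m - 1) * tau
    counts = {}
    for i in range(n):
        pattern = tuple(np.argsort(x[i:i + (m - 1) * tau + 1:tau]))  # ordinal pattern
        counts[pattern] = counts.get(pattern, 0) + 1
    probs = np.array(list(counts.values()), dtype=float) / n
    return float(-np.sum(probs * np.log(probs)) / np.log(factorial(m)))  # in [0, 1]
```

Per-channel entropies over sliding windows would then form the SVM feature vectors, with a vote across consecutive windows deciding the patient's state, as the abstract describes.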
125.
Multi-label core vector machine (Rank-CVM) is an efficient and effective algorithm for multi-label classification, but two aspects remain to be improved: further reducing training and testing costs, and detecting relevant labels effectively. In this paper, we extend Rank-CVM by adding a zero label to construct a variant, Rank-CVMz, which takes the same quadratic programming form as Rank-CVM, with a unit simplex constraint and non-negativity constraints, and is solved efficiently by the Frank–Wolfe method. Attractively, Rank-CVMz has fewer variables to solve than Rank-CVM, which speeds up the training procedure dramatically, and the zero label detects the relevant labels effectively. Experimental results on 12 benchmark data sets demonstrate that our method achieves competitive performance compared with six existing multi-label algorithms according to six indicative instance-based measures. Moreover, on average, Rank-CVMz runs 83 times faster and has slightly fewer support vectors than the original Rank-CVM.
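The solver the abstract names is the Frank–Wolfe method applied to a quadratic program over the unit simplex. A generic sketch of that iteration follows; Q and c are placeholders for the problem data, not Rank-CVMz's actual kernel matrices.

```python
# Generic Frank-Wolfe over the unit simplex; Q and c are placeholder data.
import numpy as np

def frank_wolfe_simplex(Q: np.ndarray, c: np.ndarray, iters: int = 200) -> np.ndarray:
    """Minimize 0.5*x'Qx + c'x subject to x >= 0 and sum(x) == 1."""
    n = len(c)
    x = np.full(n, 1.0 / n)          # feasible start: uniform mixture
    for t in range(iters):
        grad = Q @ x + c             # gradient of the quadratic objective
        s = np.zeros(n)
        s[np.argmin(grad)] = 1.0     # linear subproblem's optimum is a simplex vertex
        gamma = 2.0 / (t + 2.0)      # classic diminishing step size
        x = (1.0 - gamma) * x + gamma * s
    return x
```

Each step costs one gradient evaluation over the current variables, which is why solving for fewer variables, as Rank-CVMz does, translates directly into faster training.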
126.
Handling occlusion is a very challenging problem in object detection. This paper presents a method of learning a hierarchical model for X-to-X occlusion-free object detection (e.g., car-to-car and person-to-person occlusions in our experiments). The proposed method is motivated by an intuitive coupling-and-decoupling strategy. In the learning stage, the pair of occluding X's (e.g., car pairs or person pairs) is represented directly and jointly by a hierarchical And–Or directed acyclic graph (AOG) which accounts for the statistically significant co-occurrence (i.e., coupling). The structure and the parameters of the AOG are learned using the latent structural SVM (LSSVM) framework. In detection, a dynamic programming (DP) algorithm is utilized to find the best parse trees for all sliding windows with detection scores greater than the learned threshold. Then, the two single X's are decoupled from the declared detections of X-to-X occluding pairs, together with some non-maximum suppression (NMS) post-processing. In experiments, our method is tested on both a roadside-car dataset collected by ourselves (which will be released with this paper) and two public person datasets, the MPII-2Person dataset and the TUD-Crossing dataset. Our method is compared with state-of-the-art deformable part-based methods, and obtains comparable or better detection performance.
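The NMS post-processing mentioned above is, in its common form, greedy suppression of overlapping boxes; the sketch below is the generic textbook version, not necessarily the authors' exact variant.

```python
# Generic greedy non-maximum suppression; not the authors' specific variant.
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thresh: float = 0.5) -> list:
    """boxes: (N, 4) as [x1, y1, x2, y2]; returns indices of kept detections."""
    order = np.argsort(scores)[::-1]          # highest-scoring boxes first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + areas - inter)
        order = rest[iou <= iou_thresh]       # drop boxes overlapping the winner
    return keep
```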
127.
Attributing authorship of documents with unknown creators has been studied extensively for natural language text such as essays and literature, but less so for non‐natural languages such as computer source code. Previous attempts at attributing authorship of source code can be categorised by two attributes: the software features used for the classification, either strings of n tokens/bytes (n‐grams) or software metrics; and the classification technique that exploits those features, either information retrieval ranking or machine learning. The results of existing studies, however, are not directly comparable, as all use different test beds and evaluation methodologies, making it difficult to assess which approach is superior. This paper summarises all previous techniques for source code authorship attribution, implements feature sets that are motivated by the literature, and applies information retrieval ranking methods or machine classifiers for each approach. Importantly, all approaches are tested on identical collections from varying programming languages and author types. Our conclusions are as follows: (i) ranking and machine classifier approaches are around 90% and 85% accurate, respectively, for a one‐in‐10 classification problem; (ii) the byte‐level n‐gram approach is best used with different parameters to those previously published; (iii) neural networks and support vector machines were found to be the most accurate machine classifiers of the eight evaluated; (iv) use of n‐gram features in combination with machine classifiers shows promise, but there are scalability problems that still must be overcome; and (v) approaches based on information retrieval techniques are currently more accurate than approaches based on machine learning. Copyright © 2012 John Wiley & Sons, Ltd.
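For readers unfamiliar with the byte-level n-gram features behind conclusion (ii), a minimal profile extractor is sketched below; n = 6 is an illustrative choice, not the tuned parameter the paper argues for.

```python
# Minimal byte-level n-gram profile; n is an assumed illustrative value.
from collections import Counter

def byte_ngrams(source: bytes, n: int = 6) -> Counter:
    """Frequency profile of overlapping byte n-grams in one source file."""
    return Counter(source[i:i + n] for i in range(len(source) - n + 1))

# Author profiles (sums of per-file Counters) can then be scored by an
# information-retrieval ranking function or vectorized for a machine classifier.
```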
128.
The alignment of information systems with the business goals of an organisation, although a topic of great importance, is not always properly valued or taken into consideration. In general, managers hold different opinions from chief information officers (CIOs) in relation to IS, especially with regard to their importance and value to the business and also in terms of investment needs. Here, we discuss and study new approaches to methods and tools for assessing the relative importance of each information system to the business, focusing on the financial sector, including banks and insurance companies. We suggest the introduction of new key indicators for better decision support and for identifying investment priorities, and present results regarding the relative importance of each process in supporting the business strategy. The primary goal of the underlying research project is to analyse the main problems and difficulties encountered by IS and IT managers, featuring the different players and how they relate. The main contributions of this work are the CRUDi framework as a tool to improve alignment between business and IS strategies, and the CRUDi survey and its results qualifying the financial sector's opinion regarding the relative importance of processes and investments.
129.
Spreadsheet programs can be found everywhere in organizations, where they are used for a variety of purposes, including financial calculations, planning, data aggregation, and decision-making tasks. A number of research surveys have shown, however, that such programs are particularly prone to errors. Some reasons for the error-proneness of spreadsheets are that they are developed by end users and that standard software quality assurance processes are mostly not applied. Correspondingly, during the last two decades, researchers have proposed a number of techniques and automated tools aimed at supporting the end user in the development of error-free spreadsheets. In this paper, we provide a review of the research literature and develop a classification of automated spreadsheet quality assurance (QA) approaches, which range from spreadsheet visualization, static analysis, and quality reports, through testing, to support for model-based spreadsheet development. Based on this review, we outline possible opportunities for future work in the area of automated spreadsheet QA.
130.
Revisiting the medical and social models of disability, this study adopted the integrated biopsychosocial approach to examine the experiences of 25 mobility‐impaired respondents in Singapore with using mobile phones. We found that mobile phones provided respondents with a greater degree of mobility, a sense of control, and opportunities to escape the stigma of disability, thus challenging the boundaries between the able‐bodied and the disabled. Mobile phone appropriation allowed the management of personal identities and social networks, leading to a sense of empowerment. However, mobile phone usage might act as a double‐edged sword for disabled people, creating mobile dependencies and a spatial narrowing of social connections. Theoretical and practical implications are discussed.