61.
Accurate prediction of air-conditioning load is not only of great significance for the optimal control of air-conditioning systems, but is also key to achieving economical operation and energy saving. To improve the prediction accuracy of building air-conditioning load, a combined forecasting algorithm is proposed based on an analysis of the modelling characteristics of the grey model and the support vector machine (SVM). The method combines the computational simplicity of grey modelling with the self-learning and strong generalization ability of the SVM, making more effective use of the information contained in the sample data and improving prediction accuracy. First, the grey modelling step weakens the random factors in the sample data. Then, the grey model output is normalized and the data are reconstructed to serve as the SVM input. Finally, the final prediction is obtained from the SVM model. The proposed method was applied to hourly air-conditioning load forecasting for an office building in Fuzhou and compared with the grey model and the SVM model used alone. The combined model fits the actual operating values best, and its mean absolute error is 47.84% and 17.39% lower than that of the grey model and the SVM model, respectively. The combined forecasting model offers higher prediction accuracy and better generalization, and is both feasible and practical.
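A minimal sketch of the grey-plus-SVM combination described above: the abstract does not specify the exact data reconstruction, so a GM(1,1) smoothing step followed by a normalized sliding-window support vector regression is assumed here; the window length, kernel settings and placeholder load data are illustrative.

```python
# Minimal sketch of a grey-model + SVM combined forecaster.
# The paper's exact data reconstruction is not given in the abstract;
# GM(1,1) smoothing followed by a sliding-window SVR is assumed here.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import MinMaxScaler

def gm11_fit(x0):
    """Fit a GM(1,1) model and return the fitted (smoothed) series."""
    n = len(x0)
    x1 = np.cumsum(x0)                         # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])              # background values
    B = np.column_stack([-z1, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(n)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x0[0]], np.diff(x1_hat)])

def make_windows(series, lag):
    """Reconstruct the series into lagged input windows and targets."""
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    return X, series[lag:]

# hourly cooling-load history (placeholder data)
load = np.abs(np.random.default_rng(0).normal(300, 40, 200))
smoothed = gm11_fit(load)                      # grey step weakens random factors
scaler = MinMaxScaler()
smoothed = scaler.fit_transform(smoothed.reshape(-1, 1)).ravel()

X, y = make_windows(smoothed, lag=24)          # reconstructed SVM inputs
svr = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:-24], y[:-24])
pred = scaler.inverse_transform(svr.predict(X[-24:]).reshape(-1, 1)).ravel()
```

Any richer reconstruction used in the paper would replace the `make_windows` helper; the rest of the pipeline stays the same.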
62.
Commonly used regression methods are studied. Although such methods have a clear geometric interpretation and are easy to solve, they all require the causal relationship between variables to be determined (or assumed) before modelling. In practical applications where this causal relationship is hard to determine, such as Internet-of-Things data analysis, these methods fail. To address this, a hidden-target regression method that does not require a dependent variable to be designated is proposed. The method is easy to kernelize and can be extended to nonlinear regression problems. Experiments on synthetic data and standard benchmark data sets verify the effectiveness of the proposed algorithm.
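The abstract does not give the formulation of the hidden-target method. As a hedged illustration of regression that does not single out a dependent variable, the sketch below fits a line by total least squares (orthogonal regression), which treats all variables symmetrically; it is only an analogue of the idea, not the proposed algorithm.

```python
# Direction-agnostic line fit via total least squares: no variable is
# designated as the dependent one, and orthogonal distances are minimized.
import numpy as np

def tls_fit(X):
    """Fit a hyperplane w . (x - mean) = 0 minimizing orthogonal distances."""
    mean = X.mean(axis=0)
    # The normal vector is the right singular vector associated with the
    # smallest singular value of the centred data matrix.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return vt[-1], mean

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, 100)    # noise affects both coordinates
w, mean = tls_fit(np.column_stack([x, y]))
slope = -w[0] / w[1]                            # recover y = slope * x + intercept
intercept = mean[1] - slope * mean[0]
```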
63.
The objective of this study was to determine how the fracture of adhesive joints depends on the elastic beam parameters describing the adherends and on the applied loads. The basic specimen geometry was the cracked lap shear joint, constructed of aluminium alloy with various adherend and bondline thicknesses. Loads were applied in different combinations of bending, tension and shear to generate a failure envelope for each adhesive and specimen geometry. It was found that crack propagation for precracked specimens occurred at a critical strain energy release rate but was also a function of the GI/GII ratio and the bondline thickness. The experiments also showed that the loads required to propagate a crack in a precracked specimen were always lower than the loads required to break the fillet. Hence, by treating uncracked joints as cracked, with the fictitious crack tip assumed to coincide with the location of the fillet, a conservative estimate of the failure load is obtained.
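The abstract states that propagation occurs at a critical strain energy release rate that also depends on the GI/GII ratio; the specific failure envelope fitted in the study is not given, but mixed-mode criteria of the following standard power-law form are commonly used for such data (the exponents and critical values are fitted per adhesive and bondline thickness, shown here only for context):

```latex
% Total strain energy release rate split into opening (I) and shearing (II) modes:
\[ G = G_{\mathrm{I}} + G_{\mathrm{II}} \]
% A commonly used power-law mixed-mode failure envelope (illustrative form;
% the exponents m, n and the critical values G_{Ic}, G_{IIc} are material/joint parameters):
\[ \left(\frac{G_{\mathrm{I}}}{G_{\mathrm{Ic}}}\right)^{m}
 + \left(\frac{G_{\mathrm{II}}}{G_{\mathrm{IIc}}}\right)^{n} = 1 \]
```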
64.
Pattern recognition based on portable sensors has broad application prospects in electrocardiogram (ECG) monitoring, and recognition algorithms for ECG conditions such as arrhythmia, myocardial infarction and ventricular hypertrophy have been studied and applied extensively, but no pattern-recognition results have been reported for the diagnosis of atrial enlargement. The shortage of ECG data for atrial enlargement is a major obstacle to research, and some classifiers cannot handle classification under small-sample training. This work focuses on small-sample training and compares different classification methods, showing the potential of the support vector machine (SVM), a statistical pattern-recognition method, for detecting atrial enlargement. In addition, because atrial-enlargement ECGs differ between individuals, the SVM does not generalize well in practical settings, and misclassification carries medical risk. To address misclassification, the classifier is improved through classifier fusion, and a classifier with a rejection region added at the output of the SVM (SVM-R) is proposed. Experimental results show that SVM-R achieves high classification accuracy and diagnostic confidence.
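A minimal sketch of an SVM with a rejection region in the spirit of SVM-R: test samples whose decision value falls inside a band around the margin are rejected rather than classified. The band width, features and toy data below are illustrative assumptions, not the paper's configuration.

```python
# SVM with a reject option: uncertain predictions near the decision
# boundary are marked -1 and referred for manual (clinical) review.
import numpy as np
from sklearn.svm import SVC

def svm_with_reject(model, X, reject_band=0.3):
    """Return labels in {0, 1}, or -1 for rejected (uncertain) samples."""
    scores = model.decision_function(X)
    labels = (scores > 0).astype(int)
    labels[np.abs(scores) < reject_band] = -1   # inside the band: reject
    return labels

rng = np.random.default_rng(2)
X_train = rng.normal(size=(60, 8))              # small training sample
y_train = (X_train[:, 0] + 0.3 * rng.normal(size=60) > 0).astype(int)
clf = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)

X_test = rng.normal(size=(20, 8))
decisions = svm_with_reject(clf, X_test)        # -1 marks "refer to clinician"
```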
65.
To address the problem of extracting banknote image features and improving the recognition rate, a banknote recognition method based on the reduced quaternion wavelet transform is proposed that exploits the phase information provided by the transform. The method first applies skew correction and edge detection to the acquired banknote image, then decomposes the image with the reduced quaternion wavelet transform and statistically analyses the decomposition coefficients, taking the energy and standard deviation of the coefficients in each subband as the feature vector of the image; finally, a support vector machine is used as the classifier to recognize the banknote. The method was implemented on a resource-constrained embedded banknote-sorting system. Experimental results show that the proposed algorithm breaks through the bottleneck whereby the recognition rate of traditional banknote recognition systems is hard to improve further, while meeting the real-time requirements of the sorting system.
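A sketch of the subband-statistics feature pipeline: no off-the-shelf reduced quaternion wavelet implementation is assumed here, so a conventional 2-D discrete wavelet transform from PyWavelets stands in, while the feature definition (energy and standard deviation of each subband) and the SVM classifier follow the abstract.

```python
# Wavelet subband statistics as banknote features, classified with an SVM.
# A standard 2-D DWT is used as a stand-in for the quaternion transform.
import numpy as np
import pywt
from sklearn.svm import SVC

def subband_features(image, wavelet="db2", levels=3):
    """Energy and standard deviation of every wavelet subband."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    feats = []
    for c in coeffs:
        bands = [c] if isinstance(c, np.ndarray) else list(c)
        for band in bands:
            feats.append(np.sum(band ** 2))     # subband energy
            feats.append(np.std(band))          # subband standard deviation
    return np.array(feats)

# toy "banknote images": one feature vector per image, plus a class label
rng = np.random.default_rng(3)
images = rng.uniform(0, 255, size=(40, 64, 128))
labels = rng.integers(0, 4, size=40)            # e.g. four denominations
X = np.vstack([subband_features(img) for img in images])
clf = SVC(kernel="rbf").fit(X, labels)
```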
66.
To meet students' needs for autonomous learning in courses with nationally administered online examinations, the research, design, construction and effective operation of a learning support service system are essential. The pilot universities have accumulated many practices and approaches to learning support services for the online unified examinations, but systematic research and theoretical summary remain insufficient. Drawing on practical work experience, the author attempts to construct a basic framework for a learning support service system for the online unified examinations.
67.
Case-based reasoning (CBR) is one of the main methods in business forecasting; it predicts well and can explain its results. In business failure prediction (BFP), the number of failed enterprises is small relative to the number of non-failed ones, yet the loss when an enterprise fails is huge. It is therefore necessary to develop methods, trained on imbalanced samples, that forecast well for this small proportion of failed enterprises while remaining accurate overall. Commonly used methods built on the assumption of balanced samples do not predict the minority well on imbalanced samples consisting of the minority (failed) enterprises and the majority (non-failed) ones. This article develops a new method called clustering-based CBR (CBCBR), which integrates clustering analysis, an unsupervised process, with CBR, a supervised process, to make retrieval in CBR more efficient for both the minority and the majority. In CBCBR, case classes are first generated by hierarchical clustering of the stored experienced cases, and class centres are calculated by aggregating the case information within each clustered class. To predict the label of a target case, its nearest clustered case class is retrieved by ranking the similarities between the target case and each class centre; the nearest neighbours of the target case within that class are then retrieved, and the labels of these nearest experienced cases are used for prediction. In an empirical experiment with two imbalanced samples from China, the performance of CBCBR was compared with classical CBR, a support vector machine, logistic regression and multivariate discriminant analysis. The results show that CBCBR performed significantly better in terms of sensitivity for identifying the minority samples while maintaining high total accuracy. The proposed approach makes CBR useful in imbalanced forecasting.
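A minimal sketch of the CBCBR retrieval scheme as described: stored cases are grouped by hierarchical clustering, a target case is matched to the nearest class centre, and neighbours are then retrieved only inside that class. The cluster count, neighbour count and distance metric are illustrative choices.

```python
# Clustering-based case retrieval: hierarchical clustering of the case base,
# then nearest-centre class selection, then k-NN voting within that class.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import cdist

def fit_case_base(cases, labels, n_classes=5):
    """Cluster the stored cases and return per-class centres and members."""
    Z = linkage(cases, method="ward")
    assignments = fcluster(Z, t=n_classes, criterion="maxclust")
    classes = {}
    for c in np.unique(assignments):
        idx = np.where(assignments == c)[0]
        classes[c] = (cases[idx].mean(axis=0), cases[idx], labels[idx])
    return classes

def predict(classes, target, k=5):
    """Retrieve the nearest class, then vote among its k nearest cases."""
    centres = {c: centre for c, (centre, _, _) in classes.items()}
    nearest = min(centres, key=lambda c: np.linalg.norm(target - centres[c]))
    _, members, member_labels = classes[nearest]
    order = np.argsort(cdist([target], members)[0])[:k]
    return int(np.round(member_labels[order].mean()))   # majority vote

rng = np.random.default_rng(4)
cases = rng.normal(size=(200, 6))               # stored experienced cases
labels = (rng.random(200) < 0.15).astype(int)   # imbalanced: ~15% failures
classes = fit_case_base(cases, labels)
print(predict(classes, rng.normal(size=6)))
```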
68.
We present a new architecture-level unified reliability evaluation methodology for chip multiprocessors (CMPs). The proposed reliability estimation tool (REST) is based on a Monte Carlo algorithm. What distinguishes REST from previous work is that both the computational and the communication components are considered in a unified manner when computing the reliability of the CMP. We use the REST tool to develop a new dynamic reliability management (DRM) scheme that addresses the time-dependent dielectric breakdown and negative-bias temperature instability aging mechanisms in network-on-chip (NoC) based CMPs. Designed as a control loop, the proposed DRM scheme uses an effective neural-network-based reliability estimation module; the neural-network predictor is trained using the REST tool. We investigate how the system's lifetime changes depending on whether the NoC, the communication unit of the CMP, is considered during reliability evaluation, and find that the difference can be as high as 60%. Full-system simulations using a customized GEM5 simulator show that reliability can be improved by up to 52% with the proposed DRM scheme in a best-effort scenario, with a 2-9% performance penalty (for a user-set target lifetime of 7 years), compared with the case when no DRM is employed.
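An illustrative Monte Carlo sketch in the spirit of the unified evaluation: lifetimes are sampled for both computational units (cores) and communication units (NoC routers), and the CMP lifetime is taken as the time of the first unit failure. The series-system assumption, the Weibull shape and the per-unit scales are simplifications made here; the paper's actual failure model is not given in the abstract.

```python
# Monte Carlo lifetime estimation covering cores and NoC routers together.
import numpy as np

def mc_mean_lifetime(core_scales, router_scales, shape=2.0, runs=100_000):
    """Estimate the mean time to failure of the CMP by Monte Carlo sampling."""
    rng = np.random.default_rng(5)
    scales = np.concatenate([core_scales, router_scales])
    # Weibull lifetimes per unit; the per-unit scale would reflect
    # temperature/stress as predicted by an aging model.
    samples = scales * rng.weibull(shape, size=(runs, len(scales)))
    return samples.min(axis=1).mean()           # series system: first failure

cores = np.full(16, 12.0)                        # nominal core scale, years
routers = np.full(16, 20.0)                      # nominal router scale, years
print(f"estimated MTTF: {mc_mean_lifetime(cores, routers):.2f} years")
```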
69.
The alignment of information systems (IS) with the business goals of an organisation, although a topic of great importance, is not always properly valued or taken into consideration. In general, managers hold opinions about IS that differ from those of chief information officers (CIOs), especially with regard to their importance and value to the business and the investment they require. Here, we discuss and study new methods and tools for assessing the relative importance of each information system to the business, focusing on the financial sector, including banks and insurance companies. We suggest new key indicators for better decision support and for identifying investment priorities, and present results on the relative importance of each process in supporting the business strategy. The primary goal of the underlying research project is to analyse the main problems and difficulties encountered by IS and IT managers, characterising the different players involved and how they relate. The main contributions of this work are the CRUDi framework, a tool to improve alignment between business and IS strategies, and the CRUDi survey and its results, which qualify the financial sector's opinion on the relative importance of processes and investments.
70.
Spreadsheet programs can be found everywhere in organizations and are used for a variety of purposes, including financial calculations, planning, data aggregation and decision-making tasks. However, a number of research surveys have shown that such programs are particularly prone to errors. Among the reasons for this error-proneness are that spreadsheets are developed by end users and that standard software quality assurance processes are mostly not applied. Accordingly, over the last two decades, researchers have proposed a number of techniques and automated tools aimed at supporting end users in the development of error-free spreadsheets. In this paper, we review the research literature and develop a classification of automated spreadsheet quality assurance (QA) approaches, ranging from spreadsheet visualization, static analysis and quality reports, through testing and support, to model-based spreadsheet development. Based on this review, we outline opportunities for future work in the area of automated spreadsheet QA.