891.
To address the limitations of traditional contamination detection methods in preventing pollution flashover of transmission-line insulators, non-contact, high-resolution hyperspectral imaging is investigated for online contamination detection. To effectively extract the spectral features that reflect the contamination degree while suppressing redundant and interfering information, an insulator contamination grade recognition technique based on optimized wavelet packet energy spectrum features is proposed. First, the spectral images of insulator samples with different contamination degrees are background-segmented, and the mean spectral curve of the pixels in the uniformly contaminated region is extracted. Second, the curves are preprocessed to compensate for differences in illumination uniformity between images and for environmental noise, and a derivative transform is applied to improve the separability of different contamination grades. Third, wavelet packet energy spectrum features are extracted from the preprocessed spectra. Finally, a contamination grade recognition model based on support vector machines (SVM) is built on the proposed features. Experimental results show that, compared with using full-band data or PCA features as input, the SVM recognition model built on the wavelet packet energy spectrum features reaches a recognition accuracy of 99.8% on the samples.
Keywords: hyperspectral imaging; insulator contamination grade; wavelet packet energy spectrum; support vector machine
CLC number: TM933
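To illustrate the feature-extraction and classification pipeline described above, here is a minimal sketch using PyWavelets and scikit-learn with synthetic spectra in place of real hyperspectral data; the wavelet, decomposition level, and SVM settings are assumptions, not the authors' choices.

```python
# Minimal sketch (not the authors' code): wavelet packet energy spectrum
# features from a mean spectral curve, followed by SVM grade classification.
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def wavelet_packet_energy(spectrum, wavelet="db4", level=3):
    """Decompose a 1-D spectrum and return the normalized energy of each
    terminal wavelet packet node (the 'energy spectrum' feature vector)."""
    wp = pywt.WaveletPacket(data=spectrum, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    energies = np.array([np.sum(np.square(n.data)) for n in nodes])
    return energies / energies.sum()

# Synthetic stand-in for the mean spectral curves of contaminated insulators.
rng = np.random.default_rng(0)
n_samples, n_bands, n_grades = 120, 256, 4
y = rng.integers(0, n_grades, n_samples)
X_spectra = rng.normal(size=(n_samples, n_bands)) + y[:, None] * 0.5

# Derivative transform (first difference) before feature extraction,
# mirroring the preprocessing step described in the abstract.
X_deriv = np.diff(X_spectra, axis=1)
X_feat = np.vstack([wavelet_packet_energy(s) for s in X_deriv])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X_feat, y)
print("training accuracy:", clf.score(X_feat, y))
```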
892.
The output of wind and photovoltaic (PV) plants is intermittent and fluctuating, and a reasonable wind-PV capacity ratio can fully exploit their complementarity. Inaccurate calculation of a plant's theoretical power distorts the extraction of the true wind and PV characteristics and, in turn, causes large errors in the capacity ratio. This paper improves the benchmark-unit (sample-unit) method commonly used to calculate the theoretical power of renewable plants. Abnormal data are first identified and reconstructed; abnormal benchmark units in the plant are then identified and the benchmark set is updated; a dynamic information window is further selected according to the actual operation of the non-benchmark units; and the measured power of the benchmark and non-benchmark units within the window is used to group the non-benchmark units and dynamically identify the proportionality coefficient of each group, from which the theoretical power of the plant is computed. Based on multi-year historical theoretical power, the plants' characteristics are analyzed, stochastic output scenarios are simulated and screened, and a wind-PV ratio optimization method based on the risk of source-load mismatch is established. A case study verifies the accuracy of the improved benchmark-unit method; using it, multi-year improved theoretical power data are obtained for the wind and PV plants of a grid region in Northwest China, and the optimal wind-PV capacity ratio for that region is derived.
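The core of the benchmark-unit idea can be shown with a small sketch: the theoretical power of each non-benchmark group is estimated from the measured benchmark power scaled by a proportionality coefficient identified over a recent information window. This is an assumed simplification for illustration only, not the paper's full method; the window length, least-squares estimate, and synthetic data are placeholders.

```python
# Minimal sketch (assumed): proportionality coefficients from a sliding
# information window, then plant theoretical power from the benchmark units.
import numpy as np

def group_coefficient(p_bench, p_group, window):
    """Least-squares ratio of group power to benchmark power over the
    last `window` samples (both series assumed to be normal operation)."""
    b = p_bench[-window:]
    g = p_group[-window:]
    return float(np.dot(b, g) / np.dot(b, b))

def theoretical_power(p_bench_now, coeffs):
    """Plant theoretical power = benchmark power plus the estimated
    contribution of every non-benchmark group."""
    return p_bench_now * (1.0 + sum(coeffs))

# Synthetic measured power series (MW) for a benchmark set and two groups.
rng = np.random.default_rng(1)
p_bench = 50 + 10 * rng.random(200)
groups = [1.8 * p_bench + rng.normal(0, 2, 200),   # group tracks benchmark
          0.9 * p_bench + rng.normal(0, 2, 200)]

coeffs = [group_coefficient(p_bench, g, window=96) for g in groups]
print("group coefficients:", np.round(coeffs, 2))
print("theoretical power now:",
      round(theoretical_power(p_bench[-1], coeffs), 1), "MW")
```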
893.
Linear flux-switching permanent-magnet (LFSPM) machines offer high power density and a simple, reliable stator structure, giving them great application potential for long-stroke linear traction. However, many applications require the linear machine to deliver high thrust density together with low thrust ripple, so studying the mechanisms that generate thrust ripple and the methods that suppress it is an important way to enhance the application potential of LFSPM machines. Finite-element software is used to simulate and compare methods that suppress thrust ripple of different origins, as well as combinations of these methods. During the comparison, a step (tooth-shifting) displacement selection method is presented; based on it, each structure can reduce the thrust ripple while effectively preserving the machine's average output thrust. Finally, the comparison results are analyzed and summarized, and valuable conclusions are drawn.
894.
朱晨光  於锋  罗潇  吴晓新 《电源学报》2022,20(3):133-143
To reduce the switching frequency of the conventional model predictive control system for a permanent-magnet synchronous motor (PMSM) fed by a three-level converter, a three-level PMSM model predictive control method with partitioned switching-frequency optimization is proposed. First, the space-voltage-vector plane is divided into 12 sectors, and according to the location of the voltage vector applied at the previous instant, the voltage vectors of the adjacent sectors are chosen as candidate vectors. Second, a constraint on switching-state transitions is added: in each switching period only one phase is allowed a single-step change of switching state, which reduces the number of candidate vectors while effectively lowering the switching frequency. Further, positive and negative redundant small vectors, which occupy the same spatial position but affect the neutral-point potential in opposite directions, are used to balance the neutral-point potential. Finally, simulations and experiments verify the effectiveness of the proposed control strategy.
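The switching constraint is the easiest part to make concrete. The sketch below only enumerates the candidate switching states of a three-level inverter reachable under the one-phase, one-level-step rule; it is an assumed illustration and omits the sector selection, cost function, and neutral-point balancing of the paper's controller.

```python
# Minimal sketch (assumed): candidate switching states of a three-level
# inverter under the "only one phase may change, by one level" constraint.
from itertools import product

LEVELS = (-1, 0, 1)  # N, O, P levels of each phase leg

def candidate_states(prev_state):
    """States reachable from `prev_state` under the constraint
    (the previous state itself is kept as a candidate)."""
    candidates = []
    for state in product(LEVELS, repeat=3):
        diffs = [(a, b) for a, b in zip(prev_state, state) if a != b]
        if len(diffs) == 0:
            candidates.append(state)
        elif len(diffs) == 1 and abs(diffs[0][0] - diffs[0][1]) == 1:
            candidates.append(state)
    return candidates

prev = (1, 0, -1)
cands = candidate_states(prev)
print(f"{len(cands)} candidate states from {prev}:")
for s in cands:
    print(s)
```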
895.
Condition detection of railway catenary insulators is of great significance for railway operation safety. To resolve the uncertainty of current manual inspection of insulator images, a detection method combining deep learning with gray-level texture features is proposed. First, the Faster R-CNN (faster region-based convolutional neural network) object detection algorithm is used to precisely locate insulators in the images; the gray-level co-occurrence matrix is then used to analyze and extract the insulators' texture features, and a support vector machine classifies the insulators as normal or abnormal. Experimental results show that when the three texture features of energy, entropy, and correlation are used for insulator state classification, the classification accuracy reaches 100% for normal insulators and 97.5% for abnormal insulators in the experimental data. Finally, exploiting the periodicity of the gray-level distribution of insulator images, gray-level integral projection is used to further divide abnormal insulators into damaged insulators and insulators with attached foreign objects. The experiments show that the proposed method can effectively detect and classify insulator states.
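The texture-feature stage can be sketched as follows with scikit-image and scikit-learn; the synthetic image patches stand in for Faster R-CNN detections, and the feature choices and SVM settings are assumptions rather than the authors' configuration.

```python
# Minimal sketch (assumed): GLCM texture features (energy, entropy,
# correlation) followed by an SVM separating normal from abnormal insulators.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(gray_img, distances=(1,), angles=(0, np.pi / 2)):
    """Return [energy, entropy, correlation] averaged over distances/angles."""
    glcm = graycomatrix(gray_img, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    energy = graycoprops(glcm, "energy").mean()
    correlation = graycoprops(glcm, "correlation").mean()
    # Entropy computed directly from the normalized co-occurrence matrix.
    p = glcm.astype(float)
    entropy = float(-np.sum(p * np.log2(p + 1e-12)) / (p.shape[2] * p.shape[3]))
    return [energy, entropy, correlation]

# Synthetic 8-bit "insulator crops" standing in for detector output.
rng = np.random.default_rng(0)
normal = [rng.integers(100, 156, (64, 64), dtype=np.uint8) for _ in range(20)]
abnormal = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(20)]

X = np.array([glcm_features(img) for img in normal + abnormal])
y = np.array([0] * len(normal) + [1] * len(abnormal))

clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```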
896.
For businesses to benefit from the many opportunities of cloud computing, they must first address a number of security challenges, such as the potential leakage of confidential data to unintended third parties. An inter-VM (where VM is virtual machine) attack, also known as a cross-VM attack, is one threat through which cloud-hosted confidential data could be leaked to unintended third parties. An inter-VM attack exploits vulnerabilities between co-resident guest VMs that share the same cloud infrastructure. In an attempt to stop such an attack, this paper uses the principles of logical analysis to model a solution that physically separates VMs belonging to conflicting tenants based on their levels of conflict. The derived mathematical model is founded on scientific principles and implemented using four conflict-aware VM placement algorithms. The resulting algorithms consider a tenant's risk appetite and the cost implications. The model offers guidance for VM placement and is validated using a proof of concept. A cloud simulation tool was used to test and evaluate the effectiveness and efficiency of the model. The findings show that the proposed model introduces a time lag in placing VM instances, that the number and size of the VM instances affect placement performance, and that a VM's conflict tolerance level has a direct impact on the time it takes to place that VM.  相似文献
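As an illustration of what conflict-aware placement can look like, the following is a minimal greedy sketch, not the paper's four algorithms: it refuses to co-locate VMs of tenants whose pairwise conflict level exceeds a tolerance threshold and otherwise picks the least-loaded host. All names, capacities, and conflict values are made up.

```python
# Minimal sketch (assumed): greedy conflict-aware VM placement.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    capacity: int
    vms: list = field(default_factory=list)  # (vm_name, tenant, size) tuples

def place_vm(vm, tenant, size, hosts, conflict, tolerance):
    """Place `vm` on the least-loaded host whose residents do not conflict
    with `tenant` above `tolerance`; return the host or None if impossible."""
    feasible = []
    for h in hosts:
        used = sum(s for _, _, s in h.vms)
        if used + size > h.capacity:
            continue
        if any(conflict.get(frozenset((tenant, t)), 0) > tolerance
               for _, t, _ in h.vms):
            continue
        feasible.append((used, h))
    if not feasible:
        return None
    host = min(feasible, key=lambda x: x[0])[1]
    host.vms.append((vm, tenant, size))
    return host

hosts = [Host("h1", 8), Host("h2", 8)]
# Conflict levels between tenant pairs (0 = none, 1 = full conflict).
conflict = {frozenset(("bankA", "bankB")): 0.9}

for vm, tenant, size in [("vm1", "bankA", 4), ("vm2", "bankB", 4), ("vm3", "shop", 2)]:
    h = place_vm(vm, tenant, size, hosts, conflict, tolerance=0.5)
    print(vm, "->", h.name if h else "rejected")
```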
897.
The most common form of cancer in women is breast cancer. Recent advances in medical imaging technologies have increased the use of digital mammograms to diagnose breast cancer, so an automated computerized system with high accuracy is needed. In this study, an efficient Deep Learning Architecture (DLA) with a Support Vector Machine (SVM) is designed for breast cancer diagnosis, combining ideas from DLA and SVM. The state-of-the-art Visual Geometric Group (VGG) architecture with 16 layers is employed, as its small 3 × 3 convolution filters reduce system complexity. The softmax layer in VGG assumes that each training sample belongs to exactly one class, which is not valid in real situations such as medical image diagnosis. To overcome this, an SVM is employed instead of the softmax layer in VGG. Data augmentation is also employed, as DLAs usually require a large number of samples. A VGG model with different SVM kernels is built to classify the mammograms. Results show that the VGG-SVM model has good potential for classifying Mammographic Image Analysis Society (MIAS) database images, with an accuracy of 98.67%, sensitivity of 99.32%, and specificity of 98.34%.  相似文献
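One common way to realize the VGG-plus-SVM idea is to use VGG16 as a fixed feature extractor in place of its softmax head and train an SVM on the pooled features, as in the sketch below. It is an assumed illustration, not the paper's exact training setup; the synthetic images, pooling choice, and SVM kernel are placeholders.

```python
# Minimal sketch (assumed): VGG16 features + SVM classifier.
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

# VGG16 without its classification head; global average pooling yields a
# 512-dimensional feature vector per image (ImageNet weights are downloaded
# on first use).
backbone = tf.keras.applications.VGG16(weights="imagenet",
                                       include_top=False, pooling="avg")

def extract_features(images):
    """images: float array of shape (n, 224, 224, 3) with values in [0, 255]."""
    x = tf.keras.applications.vgg16.preprocess_input(images.copy())
    return backbone.predict(x, verbose=0)

# Synthetic stand-ins for preprocessed mammogram patches and labels.
rng = np.random.default_rng(0)
images = rng.uniform(0, 255, size=(16, 224, 224, 3)).astype("float32")
labels = rng.integers(0, 2, size=16)

features = extract_features(images)
svm = SVC(kernel="rbf", C=1.0).fit(features, labels)
print("training accuracy:", svm.score(features, labels))
```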
898.
In today's world, Cloud Computing (CC) enables users to access computing resources and services over the cloud without needing to own the infrastructure. Cloud Computing is a concept in which a network of devices, located in remote locations, is integrated to perform operations such as data collection, processing, profiling, and storage. In this context, resource allocation and task scheduling are important processes that must be managed according to a user's requirements. To allocate resources effectively, a hybrid cloud is employed, since it can process large-scale consumer applications in a pay-by-use manner. Hence, the model is designed as a profit-driven framework that reduces cost and makespan. With this motivation, the current research work develops a Cost-Effective Optimal Task Scheduling Model (CEOTS). A novel algorithm, the Target-based Cost Derivation (TCD) model, is used in the proposed work for hybrid clouds; it works on the basis of a multi-intentional task-completion process with optimal resource allocation. The model was simulated to validate its effectiveness with respect to processing time, makespan, and efficient utilization of virtual machines. The results show that the proposed model outperformed existing works and can be relied on for future real-time applications.  相似文献
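The abstract does not spell out the TCD algorithm, so the following is only a generic greedy sketch of cost- and makespan-aware task-to-VM assignment in a hybrid cloud, given as an assumed illustration rather than the paper's method; the VM names, speeds, prices, and weighting are all invented.

```python
# Minimal sketch (assumed, generic heuristic): assign each task to the VM that
# minimizes a weighted sum of added cost and completion time.
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    speed: float        # work units per hour
    price: float        # $ per hour (0 for private VMs already paid for)
    busy_until: float = 0.0

def schedule(tasks, vms, alpha=0.5):
    """tasks: list of (name, work units); alpha weights cost vs. makespan."""
    plan = []
    for name, work in sorted(tasks, key=lambda t: -t[1]):  # largest first
        best = None
        for vm in vms:
            runtime = work / vm.speed
            finish = vm.busy_until + runtime
            score = alpha * (runtime * vm.price) + (1 - alpha) * finish
            if best is None or score < best[0]:
                best = (score, vm, finish)
        _, vm, finish = best
        vm.busy_until = finish
        plan.append((name, vm.name, round(finish, 2)))
    return plan

vms = [VM("private-1", speed=1.0, price=0.0),
       VM("public-1", speed=2.0, price=0.4),
       VM("public-2", speed=4.0, price=1.2)]
tasks = [("t1", 8), ("t2", 4), ("t3", 6), ("t4", 2)]

for name, vm_name, finish in schedule(tasks, vms, alpha=0.6):
    print(f"{name} -> {vm_name}, finishes at hour {finish}")
print("makespan:", max(vm.busy_until for vm in vms))
```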
899.
Data available in software engineering for many applications contains variability, and it is not possible to say in advance which variables help the prediction. Most of the work in software defect prediction focuses on selecting the best prediction techniques; for this purpose, deep learning and ensemble models have shown promising results. In contrast, very few studies deal with cleaning the training data and selecting the best parameter values from the data. Sometimes the data available for training the models has high variability, which may reduce model accuracy. To deal with this problem, we used the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) to select the best variables for training the model. A simple ANN model with one input layer, one output layer, and two hidden layers was used for training instead of a very deep and complex model. AIC and BIC values are calculated, and the combination with the minimum AIC and BIC values is selected as the best model. First, the variables were narrowed down to a smaller number using correlation values. Then subsets were formed for all possible variable combinations. In the end, an artificial neural network (ANN) model was trained for each subset, and the best model was selected on the basis of the smallest AIC and BIC values. It was found that the combination of only the two variables ns and entropy is best for software defect prediction, as it gives the minimum AIC and BIC values, while nm and npt is the worst combination, giving the maximum AIC and BIC values.  相似文献
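The subset-selection loop can be sketched as below with a small scikit-learn ANN. This is an assumed illustration, not the paper's code: the synthetic data, network size, and the simplification of combining AIC and BIC by summing them are all placeholders, and the feature names ns, entropy, nm, npt are reused from the abstract only as labels.

```python
# Minimal sketch (assumed): evaluate variable subsets with a small ANN and
# pick the subset whose fitted model has the lowest AIC/BIC.
from itertools import combinations
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import log_loss

def aic_bic(model, X, y):
    """AIC = 2k - 2 ln L, BIC = k ln n - 2 ln L for a fitted classifier."""
    n = len(y)
    ll = -n * log_loss(y, model.predict_proba(X), labels=model.classes_)
    k = sum(c.size for c in model.coefs_) + sum(b.size for b in model.intercepts_)
    return 2 * k - 2 * ll, k * np.log(n) - 2 * ll

rng = np.random.default_rng(0)
features = ["ns", "entropy", "nm", "npt"]          # candidate variables
X_all = rng.normal(size=(300, len(features)))
y = (X_all[:, 0] + 0.8 * X_all[:, 1] + rng.normal(0, 0.5, 300) > 0).astype(int)

best = None
for r in (1, 2, 3):
    for subset in combinations(range(len(features)), r):
        X = X_all[:, subset]
        model = MLPClassifier(hidden_layer_sizes=(8, 8), max_iter=2000,
                              random_state=0).fit(X, y)
        aic, bic = aic_bic(model, X, y)
        # Combining the two criteria by summing is a simplification here.
        if best is None or aic + bic < best[0]:
            best = (aic + bic, [features[i] for i in subset], aic, bic)

_, names, aic, bic = best
print(f"best subset: {names}, AIC={aic:.1f}, BIC={bic:.1f}")
```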
900.
Data augmentation (DA) is a ubiquitous approach for several text generation tasks. In the machine translation paradigm, especially in low-resource language scenarios, many DA methods have appeared. The most commonly used methods build a pseudo-corpus by randomly sampling, omitting, or replacing some words in the text. However, previous approaches hardly guarantee the quality of the augmented data. In this study, we augment the corpus by introducing a constrained sampling method. Additionally, we build an evaluation framework to select higher-quality data after augmentation; namely, we use a discriminator submodel to mitigate syntactic and semantic errors to some extent. Experimental results show that our augmentation method consistently outperforms all previous state-of-the-art methods on both small- and large-scale corpora, improving eight language pairs from four corpora by 2.38–4.18 BLEU (bilingual evaluation understudy) points.  相似文献
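To make the "constrained sampling plus filtering" idea concrete, here is a toy sketch, assuming a hand-made synonym table and a token-overlap rule as a stand-in for the learned discriminator submodel; none of this is the paper's implementation.

```python
# Minimal sketch (assumed): constrained word-replacement augmentation plus a
# toy "discriminator" filter that keeps only candidates close to the original.
import random

rng = random.Random(0)
SYNONYMS = {"quick": ["fast", "rapid"], "jumps": ["leaps"], "dog": ["hound"]}
PROTECTED = {"the", "a", "over"}          # constraint: never replace these tokens

def augment(sentence, max_ratio=0.3):
    """Constrained sampling: replace at most `max_ratio` of the tokens,
    skipping protected tokens and words without candidates."""
    tokens = sentence.split()
    budget = max(1, int(len(tokens) * max_ratio))
    out = []
    for tok in tokens:
        if budget > 0 and tok not in PROTECTED and tok in SYNONYMS and rng.random() < 0.5:
            out.append(rng.choice(SYNONYMS[tok]))
            budget -= 1
        else:
            out.append(tok)
    return " ".join(out)

def keep(original, candidate, min_overlap=0.6):
    """Toy stand-in for the discriminator: keep a candidate only if enough of
    the original tokens survive (a real system would score fluency/adequacy)."""
    a, b = set(original.split()), set(candidate.split())
    return len(a & b) / len(a) >= min_overlap

src = "the quick brown fox jumps over the lazy dog"
candidates = {augment(src) for _ in range(10)}
print([c for c in candidates if c != src and keep(src, c)])
```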