991.
Breast cancer is the most common cancer among women. Recent advances in medical imaging have increased the use of digital mammograms for diagnosing breast cancer, so an automated computerized system with high accuracy is needed. In this study, an efficient Deep Learning Architecture (DLA) combined with a Support Vector Machine (SVM) is designed for breast cancer diagnosis. The state-of-the-art Visual Geometry Group (VGG) architecture with 16 layers is employed because its small 3 × 3 convolution filters reduce system complexity. The softmax layer in VGG assumes that each training sample belongs to exactly one class, which does not hold in real situations such as medical image diagnosis. To overcome this, an SVM replaces the softmax layer. Data augmentation is also employed, since DLAs usually require a large number of samples. VGG models with different SVM kernels are built to classify the mammograms. Results show that the VGG-SVM model has good potential for classifying Mammographic Image Analysis Society (MIAS) database images, with an accuracy of 98.67%, sensitivity of 99.32%, and specificity of 98.34%.
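The softmax-to-SVM swap amounts to using the convolutional stack as a fixed feature extractor and training a kernel classifier on the pooled features. Below is a minimal sketch of that idea, assuming a pretrained Keras VGG16 backbone and scikit-learn's SVC; the paper's exact layer cut, kernel settings, and augmentation pipeline are not reproduced, and the arrays are placeholders rather than real mammograms.

```python
# Sketch: VGG16 as fixed feature extractor, SVM in place of the softmax head.
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from sklearn.svm import SVC

# Pretrained VGG16 without its classification head; average-pooled features
# serve as the input representation for the SVM.
backbone = VGG16(weights="imagenet", include_top=False, pooling="avg",
                 input_shape=(224, 224, 3))

def extract_features(images):
    """images: float array of shape (n, 224, 224, 3) with values in [0, 255]."""
    return backbone.predict(preprocess_input(images), verbose=0)

# Hypothetical mammogram batch and binary labels (placeholder data only).
X_train = np.random.rand(32, 224, 224, 3) * 255.0
y_train = np.random.randint(0, 2, size=32)

# The SVM replaces the softmax layer; the kernel is one of the choices the
# study compares (e.g., linear, polynomial, RBF).
clf = SVC(kernel="rbf", C=1.0)
clf.fit(extract_features(X_train), y_train)
```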
992.
Cloud Computing (CC) enables users to access computing resources and services over the cloud without owning the infrastructure: a network of devices in remote locations is integrated to perform operations such as data collection, processing, profiling, and storage. In this context, resource allocation and task scheduling are important processes that must be managed according to user requirements. To allocate resources effectively, a hybrid cloud is employed, since it can process large-scale consumer applications in a pay-per-use manner; the model is therefore designed as a profit-driven framework that reduces both cost and makespan. With this motivation, the current work develops a Cost-Effective Optimal Task Scheduling Model (CEOTS) using a novel Target-based Cost Derivation (TCD) algorithm for hybrid clouds. The algorithm performs multi-intentional task completion with optimal resource allocation. The model was simulated to validate its effectiveness in terms of processing time, makespan, and efficient utilization of virtual machines. The results show that the proposed model outperforms existing works and can be relied upon for future real-time applications.
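The abstract does not spell out the TCD algorithm, so the sketch below is only an illustrative stand-in: a greedy rule that assigns each task to the virtual machine minimizing a weighted sum of finish time and monetary cost, which captures the cost-versus-makespan trade-off the model targets. The VM class, its fields, and the alpha weight are all assumptions.

```python
# Hedged stand-in for cost- and makespan-aware hybrid-cloud task assignment.
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    speed: float        # work units per hour
    price: float        # cost per hour (0.0 for owned private VMs)
    ready: float = 0.0  # time at which the VM becomes free

def schedule(tasks, vms, alpha=0.5):
    """tasks: list of work sizes; alpha trades makespan (1.0) vs. cost (0.0)."""
    plan = []
    for work in sorted(tasks, reverse=True):          # largest tasks first
        def score(vm):
            finish = vm.ready + work / vm.speed
            cost = (work / vm.speed) * vm.price
            return alpha * finish + (1 - alpha) * cost
        best = min(vms, key=score)                    # greedy VM choice
        best.ready += work / best.speed
        plan.append((work, best.name))
    return plan, max(vm.ready for vm in vms)          # (assignment, makespan)

vms = [VM("private-1", speed=2.0, price=0.0),
       VM("public-1", speed=4.0, price=0.9)]
plan, makespan = schedule([8, 3, 5, 2, 6], vms)
```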
993.
Data available for many software engineering applications contains variability, and it is often unclear which variables actually help with prediction. Most work on software defect prediction focuses on selecting the best prediction technique, where deep learning and ensemble models have shown promising results. In contrast, very few studies deal with cleaning the training data and selecting the best variables from it. Data available for training may have high variability, which can reduce model accuracy. To deal with this problem, we use the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) to select the best variables for training. A simple artificial neural network (ANN) with one input layer, one output layer, and two hidden layers is trained instead of a very deep and complex model. First, the variables are narrowed down using correlation values; then subsets for all possible variable combinations are formed; finally, an ANN is trained on each subset, the AIC and BIC values are calculated, and the model with the smallest AIC and BIC is selected, as sketched below. We found that the combination of only two variables, ns and entropy, is best for software defect prediction, as it gives the minimum AIC and BIC values, while nm and npt is the worst combination, giving the maximum values.
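A minimal sketch of this subset-selection loop, assuming Gaussian residuals so AIC and BIC can be computed from the residual sum of squares. The two-hidden-layer ANN shape mirrors the paper, but the data, the parameter count used for k, and the tie-breaking rule are illustrative assumptions.

```python
# Sketch: exhaustive variable-subset search scored by AIC/BIC of an ANN fit.
import itertools
import numpy as np
from sklearn.neural_network import MLPRegressor

def aic_bic(y_true, y_pred, k):
    """AIC/BIC from a Gaussian likelihood of the residuals (sigma^2 = RSS/n)."""
    n = len(y_true)
    rss = float(np.sum((y_true - y_pred) ** 2))
    loglik = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)
    return 2 * k - 2 * loglik, k * np.log(n) - 2 * loglik

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                       # placeholder metric data
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)
names = ["ns", "entropy", "nm", "npt"]              # metric names from the paper

best = None
for r in range(1, len(names) + 1):
    for cols in itertools.combinations(range(len(names)), r):
        net = MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=2000,
                           random_state=0).fit(X[:, list(cols)], y)
        # Count weights and biases as the model's free parameters k.
        k = sum(w.size for w in net.coefs_) + sum(b.size for b in net.intercepts_)
        aic, bic = aic_bic(y, net.predict(X[:, list(cols)]), k)
        if best is None or (aic, bic) < best[:2]:
            best = (aic, bic, [names[i] for i in cols])
print("best subset by AIC/BIC:", best[2])
```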
994.
Data augmentation (DA) is a ubiquitous approach in text generation tasks, and many DA methods have appeared in machine translation, especially for low-resource languages. The most common methods build a pseudo-corpus by randomly sampling, omitting, or replacing words in the text, but such approaches hardly guarantee the quality of the augmented data. In this study, we augment the corpus with a constrained sampling method and build an evaluation framework that selects higher-quality data after augmentation: a discriminator submodel mitigates syntactic and semantic errors to some extent. Experimental results show that our augmentation method consistently outperforms previous state-of-the-art methods on both small- and large-scale corpora, improving eight language pairs from four corpora by 2.38–4.18 BLEU (bilingual evaluation understudy) points.
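A hedged illustration of the constrained-sampling idea: replace a bounded fraction of the tokens, never two adjacent ones, then pass candidates through a quality filter standing in for the discriminator submodel. The specific constraints, vocabulary, and scoring function below are assumptions, not the paper's method.

```python
# Sketch: word-replacement augmentation under simple constraints, plus a
# stand-in quality filter for the discriminator submodel.
import random

def augment(tokens, vocab, max_frac=0.2, rng=random.Random(0)):
    """Replace at most max_frac of the tokens, never two adjacent positions."""
    out, budget, last = list(tokens), int(len(tokens) * max_frac), -2
    for i in rng.sample(range(len(tokens)), len(tokens)):
        if budget == 0:
            break
        if abs(i - last) > 1:                 # constraint: no adjacent edits
            out[i] = rng.choice(vocab)
            last, budget = i, budget - 1
    return out

def keep(candidate, score_fn, threshold=0.5):
    """Stand-in for the discriminator: keep only high-scoring sentences."""
    return score_fn(candidate) >= threshold

sent = "the cat sat on the mat".split()
aug = augment(sent, vocab=["dog", "bird", "rug"])
if keep(aug, score_fn=lambda s: 0.8):         # placeholder scorer
    print(" ".join(aug))
```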
995.
Skull restoration completes the missing portion of a defective skull to recover a complete skull shape. For high-dimensional skull data, radial curves are used to represent the skull's geometric features, and a restoration model is built with least squares support vector regression (LS-SVR). Radial curves are extracted from complete 3D skull models and split into an existing part and a missing part to serve as training samples; the LS-SVR statistical model then recovers the missing radial curves of the skull to be repaired, and these are merged with the existing curves into a complete set of radial curves. Finally, the merged radial curves are matched against a statistical skull model with the iterative closest point (ICP) algorithm to generate a complete 3D skull model. Experimental results show that the method achieves an average error of 6.834×10⁻³, a 2.90-fold reduction compared with the principal component analysis method, yielding a better restoration result.
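A minimal sketch of LS-SVR itself, the regression model the method relies on: in Suykens' formulation, training reduces to solving one linear system, and prediction is a kernel expansion. The RBF kernel width, regularization value, and the toy 1-D "curve" below are placeholders for the real radial-curve data.

```python
# Sketch: least squares support vector regression (LS-SVR) in closed form.
import numpy as np

def rbf(A, B, sigma=1.0):
    """RBF kernel matrix between row-wise sample sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def lssvr_fit(X, y, gamma=10.0, sigma=1.0):
    n = len(X)
    K = rbf(X, X, sigma)
    # Training = one linear system: [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                      # bias b, dual weights alpha

def lssvr_predict(Xq, X, b, alpha, sigma=1.0):
    return rbf(Xq, X, sigma) @ alpha + b

# Toy example: learn a smooth curve from its known part, predict the rest.
X = np.linspace(0, 1, 40)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
b, alpha = lssvr_fit(X[:30], y[:30])
missing = lssvr_predict(X[30:], X[:30], b, alpha)
```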
996.
刘鹏  叶润  闫斌  谢茜  刘睿 《计算机工程》2022,48(2):92-98+105
Deep echo state networks combine echo state networks with ideas from deep learning; appropriately choosing internal state matrices with different spectral radii and the leaky-integration parameters can effectively enhance the multi-scale temporal characteristics of a deep echo state network. Visualizing the distribution of the output matrix across network layers reveals that some neurons in the higher layers operate in saturation, and this saturation suppresses the network's dynamic prediction ability. An input-matrix adaptation algorithm for deep echo state networks is proposed: neuron saturation is detected from recursive estimates of the mean and variance of the internal states, and the input weights of each layer are adaptively adjusted to restore neuron dynamics. Numerical results show that, compared with a single-layer echo state network of the same size, a deep echo state network with the input-scale adaptation algorithm improves prediction accuracy for dynamic systems several-fold.
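A hedged sketch of the core adaptation step, assuming a standard leaky-integrator reservoir: exponentially weighted recursive estimates of each neuron's state mean and variance flag saturation (states parked near ±1 with little variance), and the input weights of the offending neurons are scaled down. The thresholds, decay factor, and shrink ratio are assumptions.

```python
# Sketch: one reservoir layer with a recursive saturation check that adapts
# the input weight scale online.
import numpy as np

rng = np.random.default_rng(0)
n, leak = 50, 0.3
W_in = rng.normal(scale=1.5, size=(n, 1))        # deliberately large scale
W = rng.normal(size=(n, n))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))        # spectral radius 0.9

x = np.zeros(n)
mean, var, beta = np.zeros(n), np.ones(n), 0.99  # EMA estimates
for t in range(500):
    u = np.sin(0.1 * t)
    x = (1 - leak) * x + leak * np.tanh(W_in[:, 0] * u + W @ x)
    mean = beta * mean + (1 - beta) * x          # recursive mean
    var = beta * var + (1 - beta) * (x - mean) ** 2

    # Saturation: state mean near the tanh rails with almost no variance.
    saturated = (np.abs(mean) > 0.9) & (var < 1e-3)
    if saturated.any():                          # adapt the input scale
        W_in[saturated] *= 0.5
```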
997.
By varying the fading factor m, the Nakagami channel can emulate different fading environments, and its simulated data agree well with field measurements, so it is widely used in channel simulation. However, little work has addressed the credibility of Nakagami channel models, and scientific comparison and validation methods are lacking. Based on the statistical property that the first-order envelope sequence of a typical Nakagami channel follows a Nakagami distribution, a goodness-of-fit test based on the Cramer-von Mises (CvM) statistic is proposed. A Nakagami fading channel model is built with the "Gaussian + Rayleigh + DC" combination method, and the envelope sequence is extracted from the channel output. A two-sample CvM test then compares the theoretical and empirical distributions of the envelope to assess the credibility of the Nakagami channel model. Hardware-in-the-loop simulation results show that, compared with the K-S test, the chi-square test, and a fused Z test + chi-square test, the CvM test identifies Nakagami fading channels well across different values of m and also has advantages in reliability and complexity: it achieves 92.6% identification accuracy for Nakagami fading channels at false-alarm probabilities below 0.01 and an average identification accuracy of 96.4% for sample lengths above 300,000, with no misidentification when the channel under test is of another type.
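A minimal sketch of the envelope check using SciPy's built-in Nakagami distribution and two-sample Cramer-von Mises test; the paper's "Gaussian + Rayleigh + DC" generator and its exact decision procedure are not reproduced, so the ideal generator below merely stands in for the channel under test.

```python
# Sketch: two-sample CvM goodness-of-fit check on a Nakagami-m envelope.
import numpy as np
from scipy.stats import nakagami, cramervonmises_2samp

m = 2.0                                    # Nakagami fading factor
rng = np.random.default_rng(0)

# Envelope from the channel under test (here an ideal Nakagami generator;
# a real run would use the simulated channel's output envelope instead).
observed = nakagami.rvs(m, size=300_000, random_state=rng)

# Reference sample drawn from the theoretical Nakagami-m distribution.
reference = nakagami.rvs(m, size=300_000, random_state=rng)

res = cramervonmises_2samp(observed, reference)
print(f"CvM statistic={res.statistic:.4f}, p-value={res.pvalue:.3f}")
# Decision rule: reject the channel model if p-value < false-alarm prob 0.01.
credible = res.pvalue >= 0.01
```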
998.
In the virtual simulation of hydraulic supports, there is still no effective way to express the coordinated motion among the four-bar linkage, the canopy, and the legs. To address this, a seamless linkage method for hydraulic support components is proposed that combines an analytic method with virtual reality technology. First, a UG model of a specific hydraulic support is built and repaired, and the analytic method is used to solve the poses of the key dimensions. The model is then imported into Unity3d, parent-child relationships are established, and coordinated-motion scripts are written in C#; motion-state tasks and relations are built on finite-state machine theory, and two interaction modes, a GUI and virtual-hand manipulation, are implemented. Finally, practical application and testing verify the feasibility and effectiveness of the seamless linkage method, providing theoretical support and practical experience for linkage simulation of hydraulic supports and for 3D-based virtual condition monitoring and control.
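A hedged sketch of the analytic step for the four-bar linkage, using Freudenstein's closed-form solution for the rocker angle given the crank angle; the link lengths below are placeholders, not the dimensions of any actual hydraulic support, and driving a Unity3d/C# scene from such poses is outside this snippet.

```python
# Sketch: analytic (Freudenstein) pose solution for a planar four-bar linkage.
import math

def rocker_angle(a, b, c, d, theta2, branch=+1):
    """Crank a, coupler b, rocker c, ground d; theta2 = crank angle (rad).
    branch = +1/-1 selects one of the two closure configurations."""
    K1, K2 = d / a, d / c
    K3 = (a * a - b * b + c * c + d * d) / (2 * a * c)
    A = math.cos(theta2) - K1 - K2 * math.cos(theta2) + K3
    B = -2 * math.sin(theta2)
    C = K1 - (K2 + 1) * math.cos(theta2) + K3
    disc = B * B - 4 * A * C
    if disc < 0:
        raise ValueError("linkage cannot close at this crank angle")
    return 2 * math.atan2(-B + branch * math.sqrt(disc), 2 * A)  # theta4

# Sweep the crank and read off the rocker pose that would drive the canopy.
for deg in range(0, 360, 60):
    t4 = rocker_angle(a=1.0, b=2.5, c=2.0, d=3.0, theta2=math.radians(deg))
    print(f"theta2={deg:3d} deg  theta4={math.degrees(t4):7.2f} deg")
```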
999.
Building a statistical accounting system for ecological civilization is fundamental to ecological civilization construction, providing reliable data for monitoring, evaluating, and making decisions about its progress. Although China already has a certain foundation in resource and environment statistics, the absence of a top-level design for ecological civilization statistical accounting means these data cannot be effectively integrated into a unified framework. Starting from the practical needs of ecological civilization construction, this paper analyzes the problems in China's ecological civilization statistical accounting, constructs a statistical indicator system and accounting framework based on SEEA2012-CF, and offers suggestions for further improving China's ecological civilization statistical accounting system.
1000.
Engineering, 2017, 3(6): 880-887
As a result of a sustained drought in the Southwestern United States, and in order to maintain existing water capacity in the Las Vegas Valley, the Southern Nevada Water Authority constructed a new deep-water intake (Intake No. 3) in Lake Mead. The project included a 185 m deep shaft, a 4.7 km tunnel under very difficult geological conditions, and marine works for a submerged intake. This paper presents the experience gained during design and construction and the innovative solutions developed to handle the difficult conditions encountered while tunneling with a dual-mode slurry tunnel-boring machine (TBM) at up to 15 bar (1 bar = 10⁵ Pa) of pressure. Specific attention is given to the main challenges overcome during the TBM excavation, including the mode of operation, face-support pressures, pre-excavation grouting, and maintenance; to the construction of the intake, which involved deep underwater shaft excavation with blasting using shaped charges; to the construction of the innovative, over 1200 t concrete-and-steel intake structure; to the placement of the intake structure in the underwater shaft; and to the docking and connection to an intake tunnel excavated by a hybrid TBM.