Full-text access type
Paid full text | 33608 articles |
Free | 3236 articles |
Free (domestic) | 2584 articles |
Subject category
Electrical engineering | 1837 articles |
Theory of technology | 1 article |
General | 2478 articles |
Chemical industry | 1987 articles |
Metalworking | 572 articles |
Machinery and instruments | 1608 articles |
Architectural science | 978 articles |
Mining engineering | 279 articles |
Energy and power | 640 articles |
Light industry | 594 articles |
Hydraulic engineering | 292 articles |
Petroleum and natural gas | 250 articles |
Weapons industry | 161 articles |
Radio and electronics | 6386 articles |
General industrial technology | 2018 articles |
Metallurgical industry | 545 articles |
Atomic energy technology | 111 articles |
Automation technology | 18691 articles |
Publication year
2024 | 151 articles |
2023 | 524 articles |
2022 | 718 articles |
2021 | 832 articles |
2020 | 922 articles |
2019 | 822 articles |
2018 | 764 articles |
2017 | 1006 articles |
2016 | 1173 articles |
2015 | 1401 articles |
2014 | 1913 articles |
2013 | 2247 articles |
2012 | 2128 articles |
2011 | 2465 articles |
2010 | 1954 articles |
2009 | 2409 articles |
2008 | 2485 articles |
2007 | 2448 articles |
2006 | 2115 articles |
2005 | 1763 articles |
2004 | 1496 articles |
2003 | 1352 articles |
2002 | 1119 articles |
2001 | 944 articles |
2000 | 854 articles |
1999 | 683 articles |
1998 | 596 articles |
1997 | 503 articles |
1996 | 392 articles |
1995 | 298 articles |
1994 | 222 articles |
1993 | 188 articles |
1992 | 126 articles |
1991 | 72 articles |
1990 | 56 articles |
1989 | 39 articles |
1988 | 33 articles |
1987 | 21 articles |
1986 | 17 articles |
1985 | 43 articles |
1984 | 38 articles |
1983 | 35 articles |
1982 | 40 articles |
1981 | 4 articles |
1980 | 2 articles |
1979 | 6 articles |
1978 | 3 articles |
1976 | 2 articles |
1958 | 1 article |
1951 | 1 article |
Sort by: 10,000 results found (search took 15 ms)
982.
To address the uneven quality of pseudo-labels in retinal vessel segmentation, where obtaining high-quality pseudo-labels requires a screening step, this paper proposes a new semi-supervised deep learning framework for retinal vessel segmentation. The framework follows a divide-and-conquer strategy: labeled data are handled with conventional supervised deep learning, while unlabeled data are handled with a Mean Teacher model that compares different perturbed outputs of the same input, letting the model learn features shared across unlabeled samples and avoiding the screening step that pseudo-label techniques require. Three baseline networks, U-Net (u-neural networks), Dense-Net, and Ladder-Net, were placed in this framework and trained and tested on the DRIVE and CHASEDB1 datasets. All achieved good segmentation results, indicating that the framework improves the networks' ability to distinguish pixels at different thresholds.
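The two ingredients of the Mean Teacher scheme the abstract relies on, an exponential-moving-average (EMA) teacher and a consistency loss on unlabeled inputs, can be sketched in a few lines of pure Python. This is an illustrative sketch, not the paper's implementation; the function names, the dict-of-weights representation, and the value of `alpha` are assumptions.

```python
def ema_update(teacher, student, alpha=0.99):
    """Mean Teacher weight update: the teacher's weights are an exponential
    moving average of the student's weights, not trained by gradients.
    Weights are modeled here as a plain {name: value} dict for illustration."""
    return {k: alpha * teacher[k] + (1.0 - alpha) * student[k] for k in teacher}

def consistency_loss(teacher_out, student_out):
    """Mean squared difference between teacher and student predictions for
    the same unlabeled input under different perturbations; minimizing it
    pushes the student toward features shared across unlabeled samples."""
    return sum((t - s) ** 2 for t, s in zip(teacher_out, student_out)) / len(teacher_out)
```

Because the teacher is derived from the student by averaging rather than from pseudo-labels, no label-screening step is needed, which is the point the abstract makes.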
983.
To address the inefficient energy use caused by all relay nodes participating in Z-forwarding (ZF) cooperation, this paper proposes decision-threshold-aided fast Z-forwarding (DT-FZF) cooperation for multi-relay scenarios. When the absolute value of the log-likelihood ratio (LLR) of the signal received at a relay node is below a threshold, the relay does not take part in cooperative forwarding; otherwise, the relay forwards a truncated version of the LLR. Amplify-and-forward (AF), decode-and-forward (DF), piecewise-forward (PF), and ZF cooperation can all be viewed as special cases of DT-FZF. In a three-relay system, at a bit error rate (BER) of 10^-3, the proposed scheme achieves a gain of about 0.8 dB over ZF cooperation.
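The per-relay decision rule described above is simple enough to sketch directly. This is a hedged illustration of the stated rule only; the function name and the particular threshold and clipping values are assumptions, not taken from the paper.

```python
def relay_decision(llr, threshold, clip):
    """DT-FZF relay rule sketch: if |LLR| is below the decision threshold,
    the relay stays silent (saving energy); otherwise it forwards the LLR
    truncated (clipped) to the interval [-clip, clip]."""
    if abs(llr) < threshold:
        return None  # relay does not participate in cooperative forwarding
    return max(-clip, min(clip, llr))
```

Setting `threshold = 0` makes every relay forward (ZF-like behavior), which is one way to see ZF as a special case of DT-FZF.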
984.
Conventional radar systems face serious challenges in today's increasingly complex electromagnetic environment, whereas integrated microwave photonics can break through their technical bottlenecks, offering wide bandwidth, high resolution, high multiplexing, and high integration. Based on integrated microwave photonic technology, this paper develops a prototype silicon-based integrated two-dimensional optically controlled multi-beamforming system and proposes an on-chip architecture built on silica planar optical waveguides. The key optical chips were designed, fabricated on a foundry platform, and packaged and tested; the complete system prototype was then jointly debugged and tested, and the experimental results were processed theoretically to verify the system's key performance indicators. The system provides large instantaneous bandwidth, simultaneous two-dimensional multi-beams, and independent scanning of each beam over multiple beam positions, while retaining the miniaturization, light weight, and low cost of silicon photonic integration. The experimental results demonstrate the advancement and application potential of integrated microwave photonics for radar systems.
985.
In many wireless sensor network (WSN) applications, the location of a sensor node is crucial for determining where the event or situation of interest occurred, so localization is one of the critical challenges in WSNs. Mobile anchor node assisted localization (MANAL) is a promising solution for localizing statically deployed sensors. The main problem in MANAL is planning the path of the mobile anchor (MA) node so that the localization error across the network is minimal and all unknown nodes are covered. This paper proposes a new path planning approach for MANAL called nested hexagons curves (NHexCurves). NHexCurves guarantees that every unknown node in the network receives messages from at least three non-collinear anchor positions. The proposed model is compared with six path planning schemes from the literature using weighted centroid localization (WCL). In these comparisons, the localization errors of the models are first compared using statistical measures. Second, the variation of the localization error with parameters such as resolution (R) and the standard deviation of noise (σ) is observed. Then, with similar approaches, the standard deviation of errors, localization ratio, scalability, and finally path lengths of the models are examined. The simulation results show that the proposed NHexCurves static path planning model stands out from the other models in localization error and localization ratio, especially at low resolutions, owing to its path design. At the same time, the lowest error values with respect to σ among all models considered are obtained with the proposed model.
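The WCL estimator used for the comparisons above is a standard, easily stated algorithm: the unknown node's position is the weight-normalized average of the anchor positions it hears. A minimal sketch, assuming 2D coordinates and externally supplied weights (in practice the weights are typically derived from RSSI, with closer anchors weighted more heavily):

```python
def weighted_centroid(anchors, weights):
    """Weighted centroid localization (WCL): estimate the unknown node's
    position as the weighted average of the (x, y) positions of the
    anchors it received messages from."""
    total = sum(weights)
    x = sum(w * ax for (ax, _), w in zip(anchors, weights)) / total
    y = sum(w * ay for (_, ay), w in zip(anchors, weights)) / total
    return (x, y)
```

This also shows why the at-least-three-non-collinear-anchors guarantee matters: with fewer (or collinear) anchors, the centroid cannot pin down a unique 2D position.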
986.
Learning-based shadow detection methods have achieved impressive performance, yet they still struggle on complex scenes, especially ambiguous soft shadows. To tackle this issue, this work proposes an efficient shadow detection network (ESDNet) and then applies uncertainty analysis and graph convolutional networks to refine the detections. Specifically, we first aggregate global information from high-level features and harvest shadow details from low-level features to obtain an initial prediction. Second, we analyze the uncertainty of ESDNet on an input shadow image and take its intensity, expectation, and entropy into account to formulate a semi-supervised graph learning problem. Finally, we solve this problem by training a graph convolutional network to obtain a refined detection result for every training image. To evaluate our method, we conduct extensive experiments on several benchmark datasets, i.e., SBU, UCF, and ISTD, including soft shadow scenes. Experimental results demonstrate that our strategy improves shadow detection performance by suppressing the uncertainty of false positive and false negative regions, achieving state-of-the-art results.
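One of the uncertainty measures the abstract names, entropy, has a standard per-pixel form for a binary shadow/non-shadow prediction. A minimal sketch (the function name is illustrative; how the paper combines entropy with intensity and expectation is not specified here):

```python
import math

def prediction_entropy(p):
    """Binary Shannon entropy (in bits) of a per-pixel shadow probability p.
    It peaks at p = 0.5 (ambiguous pixels, e.g. soft shadow boundaries) and
    vanishes at p = 0 or 1 (confident pixels), so it flags exactly the
    regions a refinement stage should revisit."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))
```

High-entropy pixels are natural candidates for the "uncertain" nodes of the semi-supervised graph learning problem, while low-entropy pixels can serve as reliable pseudo-supervision.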
987.
LiDAR-based 3D object detection is important for autonomous driving scene perception, but the point clouds produced by LiDAR are irregular and unstructured, and cannot be processed directly by conventional Convolutional Neural Networks (CNNs). Recently, Graph Convolutional Networks (GCNs) have proved well suited to non-Euclidean data, including point clouds. However, GCNs involve massive computation for searching adjacent nodes, and this heavy computational cost limits their application to the large-scale LiDAR point clouds encountered in autonomous driving. In this work, we adopt a frustum-based point cloud-image fusion scheme to reduce the number of LiDAR points, making GCN-based feature learning on large-scale LiDAR point clouds feasible. On this basis, we propose an efficient graph attentional network for 3D object detection in autonomous driving that learns features from the raw LiDAR point cloud directly, without any conversion. We evaluate the model on the public KITTI benchmark dataset: the 3D detection mAP is 63.72% on KITTI Cars, Pedestrians, and Cyclists, and inference runs at 7.9 fps on a single GPU, faster than other methods of the same type.
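The frustum-based fusion step the abstract describes amounts to keeping only the LiDAR points whose image projection falls inside a 2D detection box. A hedged sketch under simplifying assumptions: the projection function is supplied by the caller (in practice it would be the camera's calibrated projection matrix), and the function and parameter names are illustrative.

```python
def frustum_filter(points, box, project):
    """Frustum-based point-cloud/image fusion sketch: keep only the 3D
    points whose 2D image projection (u, v) lies inside the detection
    box (x1, y1, x2, y2), shrinking the point set the GCN must process."""
    x1, y1, x2, y2 = box
    kept = []
    for p in points:
        u, v = project(p)  # camera projection of the 3D point
        if x1 <= u <= x2 and y1 <= v <= y2:
            kept.append(p)
    return kept
```

Since GCN neighbor search scales poorly with point count, pruning to per-box frustums is what makes graph learning on full driving scenes tractable.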
988.
Deep neural network models with strong feature extraction capacity are prone to overfitting and fail to adapt quickly to new tasks with few samples. Gradient-based meta-learning approaches can mitigate overfitting and adapt to new tasks quickly, but they frequently use shallow neural networks with limited feature extraction capacity. We present a simple and effective approach called Meta-Transfer-Adjustment learning (MTA), which enables deep neural networks with powerful feature extraction capabilities to be applied to few-shot scenarios while avoiding overfitting, and which gains the capacity to adapt quickly to new tasks through training on numerous tasks. The approach has two major parts: the Feature Adjustment (FA) module and the Task Adjustment (TA) module. The FA module helps the model make better use of the deep network to improve feature extraction, while the TA module further improves the model's responsiveness and generalization. As demonstrated experimentally, the proposed model delivers good classification results on the benchmark few-shot datasets MiniImageNet and Fewshot-CIFAR100.
989.
We propose a general deep variational model (in reduced, full, and extended versions) via a comprehensive fusion approach. It can perform various image tasks in a completely unsupervised way, without learning from samples. Technically, it incorporates the CNN-based deep image prior (DIP) architecture into classic variational image processing models. The minimization strategy changes from iteratively minimizing a sub-problem for each variable to minimizing the loss function directly by learning the generator network parameters. The proposed deep variational (DV) model supports high-order image editing and applications such as image restoration, inpainting, decomposition, and texture segmentation. Experiments demonstrate significant advantages of the proposed deep variational model over several strong techniques, including variational methods and deep learning approaches.
990.
Jansi Rani Sella Veluswami 《计算机系统科学与工程》2022,43(3):1231-1239
Most internet users connect through wireless networks, and the major part of internet traffic is carried by the Transmission Control Protocol (TCP). TCP is the traditional, predominant protocol designed for wired networks, and it has design constraints when operated across wireless links. To control congestion, TCP relies on acknowledgments of packet delivery from the end host. In a wired network, packet loss signals congestion; in wireless networks, however, loss is mainly caused by wireless characteristics such as fading and weak signal strength. When a packet travels across both wired and wireless networks, TCP's congestion control misinterprets these losses. This paper focuses on detecting this misinterpretation of losses using a cross-layer approach, and on increasing bandwidth usage by improving TCP throughput in wireless environments; the proposed system is therefore named CRLTCP. TCP misinterprets wireless loss as congestion loss and unnecessarily reduces its congestion window. Using the signal strength and frame error rate, the type of loss is identified and the response of TCP is modified accordingly. The results show a significant improvement in the throughput of the proposed TCP, which increases bandwidth usage.
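The cross-layer discrimination step can be sketched as a small classifier over the two link-layer indicators the abstract names. This is an illustrative sketch only: the function name and the specific threshold values are assumptions, not taken from the paper.

```python
def classify_loss(signal_strength_dbm, frame_error_rate,
                  rss_floor_dbm=-85.0, fer_threshold=0.1):
    """Cross-layer loss discrimination sketch: attribute a packet loss seen
    at TCP to the wireless channel when link-layer conditions are bad
    (weak received signal or high frame error rate), else to congestion.
    Threshold values here are illustrative placeholders."""
    if signal_strength_dbm < rss_floor_dbm or frame_error_rate > fer_threshold:
        return "wireless"    # do not shrink the congestion window
    return "congestion"      # react as standard TCP congestion control would
```

Only losses classified as "congestion" would trigger the usual congestion-window reduction, which is how the misinterpretation (and the resulting throughput loss) is avoided.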