991.
This paper deals with dense optical flow estimation from the perspective of the trade-off between the quality of the estimated flow and the computational cost required by real-world applications. We propose a fast and robust local method, denoted eFOLKI, and describe its implementation on GPU. It achieves very high performance even on large image formats such as 4K (3,840 × 2,160) resolution. To assess the interest of eFOLKI, we first present a comparative study with currently available GPU codes, including local and global methods, on a large set of data with ground truth. eFOLKI appears significantly faster while providing accurate and highly robust estimated flows. We then show, on four real-time video processing applications based on optical flow, that eFOLKI meets the requirements in terms of both estimated flow quality and processing rate.
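The eFOLKI implementation itself is not reproduced here; as a minimal sketch of the local least-squares principle that Lucas–Kanade-style methods in this family build on, the following one-dimensional toy estimates a known translation from spatial and temporal gradients (the signal and window choices are illustrative assumptions):

```python
import math

# 1D toy signal: a broad Gaussian bump, and the same bump translated by 1 sample.
N, sigma, shift = 64, 4.0, 1
I1 = [math.exp(-((x - 32) ** 2) / (2 * sigma ** 2)) for x in range(N)]
I2 = [I1[(x - shift) % N] for x in range(N)]

def lucas_kanade_1d(a, b):
    """Least-squares flow over the whole window: u = -sum(Ix*It) / sum(Ix^2)."""
    num = den = 0.0
    for x in range(1, len(a) - 1):
        ix = (a[x + 1] - a[x - 1]) / 2.0   # central spatial gradient
        it = b[x] - a[x]                   # temporal difference
        num += ix * it
        den += ix * ix
    return -num / den

u = lucas_kanade_1d(I1, I2)
print(round(u, 3))  # close to the true displacement of 1 sample
```

In 2D this becomes a 2×2 linear system per pixel; local methods like eFOLKI map naturally to GPUs precisely because each such window can be solved independently.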
992.
Two approximations, the center-beam approximation and the reference digital elevation model (DEM) approximation, are used in synthetic aperture radar (SAR) motion compensation procedures. They usually introduce residual motion compensation errors in airborne single-antenna SAR imaging and SAR interferometry. In this paper, we investigate the effects of the residual uncompensated motion errors caused by these two approximations on the performance of airborne along-track interferometric SAR (ATI-SAR). The residual uncompensated errors caused by the center-beam approximation are derived in the absence and in the presence of elevation errors, respectively. Airborne simulation parameters are used to verify the correctness of the analysis and to show the impact of residual uncompensated errors on the interferometric phase errors of ATI-SAR. It is shown that the interferometric phase errors caused by the center-beam approximation with an accurate DEM can be neglected, whereas those caused by the center-beam approximation with an inaccurate DEM cannot be neglected once the elevation errors exceed a threshold. This research provides a theoretical basis for the error source analysis and signal processing of airborne ATI-SAR.
993.
Some neurons in the brain of freely moving rodents show special firing patterns. The firing of head direction cells (HDCs) and grid cells (GCs) is related to the moving direction and the distance traveled, respectively. These cells are therefore considered to play an important role in the rodents' path integration. To provide a bionic approach for a vehicle to achieve path integration, we present a biologically inspired model of path integration based on the firing characteristics of HDCs and GCs. The detailed implementation of this model is discussed. In addition, the proposed model is realized in simulation, and its path integration performance is analyzed under different conditions. Simulations validate that the proposed model is effective and stable.
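The HDC/GC firing model is not reproduced here; what follows is only a minimal sketch of the underlying dead-reckoning computation that path integration performs, under the idealized assumption that direction and distance signals are available noise-free:

```python
import math

def path_integrate(steps):
    """Accumulate (heading_radians, distance) readings into a position
    estimate: the computation that direction cells (heading) and distance
    signals jointly support in a path-integration system."""
    x = y = 0.0
    for heading, dist in steps:
        x += dist * math.cos(heading)
        y += dist * math.sin(heading)
    return x, y

# Walk a closed unit square: after four 90-degree turns the agent
# should be back at the origin (up to floating-point error).
square = [(0.0, 1.0), (math.pi / 2, 1.0), (math.pi, 1.0), (3 * math.pi / 2, 1.0)]
x, y = path_integrate(square)
print(round(x, 9), round(y, 9))
```

A biological model replaces the explicit trigonometry with population codes, but the closed-loop test above (returning to the start position) is the standard sanity check for any such integrator.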
994.
Identity-based signatures have been an important technique for lightweight authentication ever since the concept was proposed in 1984. Thereafter, identity-based signature schemes based on the integer factorization problem and the discrete logarithm problem were proposed one after another. Nevertheless, the rapid development of quantum computers makes them insecure. Recently, many efforts have been made to construct identity-based signatures over lattice assumptions that resist attacks in the quantum era; however, their efficiency is not very satisfactory. In this study, an efficient identity-based signature scheme is presented over the number theory research unit (NTRU) lattice assumption. The new scheme is more efficient than other lattice-based identity-based signature schemes. It is proven unforgeable against adaptively chosen message attacks in the random oracle model under the hardness of the γ-shortest vector problem on the NTRU lattice.
995.
Rapid advances in image acquisition and storage technology underline the need for real-time algorithms capable of solving large-scale image processing and computer vision problems. The minimum s-t cut problem, a classical combinatorial optimization problem, is a prominent building block in many vision and imaging algorithms such as video segmentation, co-segmentation, stereo vision, multi-view reconstruction, and surface fitting, to name a few. That is why finding a real-time algorithm that optimally solves this problem is of great importance. In this paper, we introduce to computer vision Hochbaum's pseudoflow (HPF) algorithm, which optimally solves the minimum s-t cut problem. We compare the performance of HPF, in terms of execution time and memory utilization, with three leading published algorithms: (1) Goldberg and Tarjan's push-relabel (PRF); (2) Boykov and Kolmogorov's augmenting paths (BK); and (3) Goldberg's partial augment-relabel. While the common practice in computer vision is to use either the BK or the PRF algorithm, our results demonstrate that, in general, the HPF algorithm is more efficient and uses less memory than these three algorithms. This strongly suggests that HPF is a great option for the many real-time computer vision problems that require solving the minimum s-t cut problem.
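HPF itself is considerably more involved; as a minimal illustration of the building block the abstract refers to, here is the textbook route to a minimum s-t cut via max-flow (Edmonds–Karp), on a hypothetical four-node graph standing in for a tiny segmentation problem:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly push flow along shortest augmenting paths.
    `cap` is a dict-of-dicts residual capacity structure (mutated in place).
    By max-flow/min-cut duality the returned value equals the min s-t cut."""
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow                    # no augmenting path left: done
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[u][v] for u, v in path)
        for u, v in path:                  # update residual capacities
            cap[u][v] -= bottleneck
            cap[v].setdefault(u, 0)
            cap[v][u] += bottleneck
        flow += bottleneck

# Toy graph: source s (foreground model), sink t (background model).
cap = {
    's': {'a': 3, 'b': 2},
    'a': {'b': 1, 't': 2},
    'b': {'t': 3},
    't': {},
}
print(max_flow(cap, 's', 't'))  # 5
```

In vision applications the graph has one node per pixel, so the asymptotics and constants of the cut solver (BK, PRF, or HPF) dominate the running time, which is exactly the trade-off the paper measures.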
996.
Gradient vector flow (GVF) is a feature-preserving spatial diffusion of image gradients. It was introduced to overcome the limited capture range of traditional active contour segmentation. However, the original iterative solver for GVF, which uses Euler's method, converges very slowly, so many iterations are needed to achieve the desired capture range. Several groups have investigated the use of graphics processing units (GPUs) to accelerate the GVF computation; still, this does not reduce the number of iterations needed. Multigrid methods, on the other hand, have been shown to provide a much better capture range using considerably fewer iterations. However, non-GPU implementations of the multigrid method are not as fast as the Euler method executed on the GPU. In this paper, a novel GPU implementation of a multigrid solver for GVF, written in OpenCL, is presented. The results show that this implementation converges to a better capture range about 2–5 times faster than the conventional iterative GVF solver on the GPU.
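The paper's contribution is the multigrid OpenCL solver; as a minimal illustration of the baseline it accelerates, here is the conventional Euler iteration for GVF reduced to one dimension (the edge map, step size, and regularization weight are illustrative assumptions):

```python
# 1D edge map: the gradient fx is nonzero only at a single edge, index 32.
N, mu, dt, iters = 64, 0.2, 0.5, 500
fx = [0.0] * N
fx[32] = 1.0

v = fx[:]  # initialize the flow field with the gradient itself
for _ in range(iters):
    new_v = v[:]
    for i in range(1, N - 1):
        lap = v[i - 1] - 2 * v[i] + v[i + 1]  # discrete Laplacian
        # Euler step of the GVF PDE: v_t = mu * v_xx - (v - fx) * fx^2
        new_v[i] = v[i] + dt * (mu * lap - (v[i] - fx[i]) * fx[i] ** 2)
    v = new_v

# Diffusion has spread the edge's influence far from index 32: this is the
# extended "capture range" that GVF provides, at the cost of many iterations.
print(v[32] > v[20] > 0.0)  # True
```

Because each Euler step only propagates information one grid cell, reaching a capture range of k pixels needs on the order of k iterations; multigrid solvers sidestep this by correcting the solution on coarser grids, which is why they need far fewer sweeps.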
997.
Multiview video coding (MVC) exploits mode decision, motion estimation and disparity estimation to achieve a high compression ratio, which results in extensive computational complexity. This paper presents an efficient mode decision approach for MVC using a macroblock (MB) position constraint model (MPCM). The proposed approach reduces the number of candidate modes by utilizing the mode correlation and rate-distortion cost (RD cost) of previously encoded frames/views. Specifically, the mode correlations in both the temporal-spatial domain and the inter-view domain are modeled with MPCM. MPCM is then exploited to select the optimal prediction direction for the current encoding MB, and the inter mode is determined early in that direction. Experimental results show that the proposed method saves 86.03% of the encoding time compared with the exhaustive mode decision used in the joint multiview video coding reference software, with only a 0.077 dB loss in Bjontegaard delta peak signal-to-noise ratio (BDPSNR) and a 2.29% increase in the total Bjontegaard delta bit rate (BDBR), which is superior to state-of-the-art approaches.
998.
We present a preliminary study of buffer overflow vulnerabilities in CUDA software running on GPUs. We show how an attacker can overrun a buffer to corrupt sensitive data or steer the execution flow by overwriting function pointers, e.g., by manipulating the virtual table of a C++ object. In view of the potential mass-market diffusion of GPU-accelerated software, this may be a major concern.
999.
Statistical detection of mass malware has been shown to be highly successful. However, this type of malware is less interesting to the cyber security officers of larger organizations, who are more concerned with detecting malware indicative of a targeted attack. Here we investigate the potential of statistically based approaches to detect such malware, using a malware family associated with a large number of targeted network intrusions. Our approach is complementary to the bulk of statistically based malware classifiers, which are typically based on measures of overall similarity between executable files. One problem with that approach is that a malicious executable sharing some, but limited, functionality with known malware is likely to be misclassified as benign. Here a new approach to malware classification is introduced that classifies programs based on their similarity to known malware subroutines. We illustrate that malware and benign programs can share a substantial amount of code, implying that classification should be based on malicious subroutines that occur infrequently, or not at all, in benign programs. Various approaches to accomplishing this task are investigated, and a particularly simple one appears to be the most effective: compute the fraction of a program's subroutines that are similar to malware subroutines whose like has not been found in a larger benign set. If this fraction exceeds about 1.5%, the corresponding program can be classified as malicious at a 1-in-1000 false alarm rate. It is further shown that combining the local and overall similarity-based approaches can lead to considerably better prediction, owing to the relatively low correlation of their predictions.
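The paper's actual subroutine similarity measure operates on disassembled code; the sketch below reduces it to exact matching of hypothetical subroutine signatures (all names are invented for illustration) just to show the shape of the fraction-based decision rule:

```python
def malicious_fraction(program_subs, malware_subs, benign_subs):
    """Fraction of a program's subroutines that match known-malware
    subroutines having no counterpart in the benign corpus.
    Subroutines are modeled here as hashable signatures; the paper's
    similarity measure over real code is far more elaborate."""
    suspicious = malware_subs - benign_subs  # malware-only signatures
    if not program_subs:
        return 0.0
    hits = sum(1 for s in program_subs if s in suspicious)
    return hits / len(program_subs)

# Hypothetical corpora of subroutine signatures.
malware_subs = {"decrypt_cfg", "beacon_c2", "keylog", "strcpy_wrap"}
benign_subs = {"strcpy_wrap", "parse_args", "log_init"}  # shared code is excluded

sample = ["parse_args", "log_init", "beacon_c2", "render", "io_loop", "gui"]
frac = malicious_fraction(sample, malware_subs, benign_subs)
print(frac > 0.015)  # above the ~1.5% threshold -> flag as malicious
```

Note how `strcpy_wrap` is discounted even though it appears in malware: because it also occurs in the benign set, it carries no evidential weight, which is precisely the point the abstract makes about shared code.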
1000.
The wide availability of affordable RGB-D sensors is changing the landscape of indoor scene analysis. Years of research on simultaneous localization and mapping (SLAM) have made it possible to merge multiple RGB-D images into a single point cloud, providing a 3D model of a complete indoor scene. However, these reconstructed models contain only geometric information, not semantic knowledge. Advances in robot autonomy and the ability to carry out more complex tasks in unstructured environments can be greatly enhanced by endowing environment models with semantic knowledge. Towards this goal, we propose a novel approach to generate 3D semantic maps of an indoor scene. Our approach first creates a reconstructed 3D map from an RGB-D image sequence; we then jointly infer the semantic object category and structural class of each point in the global map. Twelve object categories (e.g. walls, tables, chairs) and four structural classes (ground, structure, furniture and props) are labeled in the global map, so that both object and structural information are captured. To obtain semantic information, we compute a semantic segmentation of each RGB-D image and merge the labeling results with a dense conditional random field. Unlike previous techniques, we use temporal information and higher-order cliques to enforce label consistency across the per-image labeling results. Our experiments demonstrate that temporal information and higher-order cliques are significant for the semantic mapping procedure and improve the precision of the semantic mapping results.