971.
Two approximations, the center-beam approximation and the reference digital elevation model (DEM) approximation, are used in synthetic aperture radar (SAR) motion compensation procedures. They usually introduce residual motion compensation errors in airborne single-antenna SAR imaging and SAR interferometry. In this paper, we investigate the effects of the residual uncompensated motion errors caused by these two approximations on the performance of airborne along-track interferometric SAR (ATI-SAR). The residual uncompensated errors caused by the center-beam approximation are derived for the cases without and with elevation errors, respectively. Airborne simulation parameters are used to verify the correctness of the analysis and to show the impact of the residual uncompensated errors on the interferometric phase errors of ATI-SAR. It is shown that the interferometric phase errors caused by the center-beam approximation with an accurate DEM can be neglected, while those caused by the center-beam approximation with an inaccurate DEM cannot be neglected once the elevation errors exceed a threshold. This research provides a theoretical basis for the error-source analysis and signal processing of airborne ATI-SAR.
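A minimal numeric sketch of the relationship involved (not the paper's derivation): assuming the standard two-way relation Δφ = 4πΔr/λ between a residual slant-range error and the resulting interferometric phase error, the snippet below shows how millimetre-level residual motion compensation errors translate into phase errors. The wavelength and error values are illustrative assumptions, not values from the paper.

```python
# Minimal numeric sketch (not the paper's derivation): relate a residual
# slant-range error, e.g. caused by the center-beam approximation under an
# inaccurate DEM, to an interferometric phase error via the standard
# two-way relation delta_phi = 4 * pi * delta_r / lambda.
import numpy as np

wavelength = 0.03                                         # m, illustrative X-band value (assumption)
residual_range_error = np.array([0.0005, 0.001, 0.002])   # m, illustrative residual errors

phase_error_rad = 4.0 * np.pi * residual_range_error / wavelength
phase_error_deg = np.degrees(phase_error_rad)

for dr, dp in zip(residual_range_error, phase_error_deg):
    print(f"residual range error {dr * 1e3:.1f} mm -> phase error {dp:.1f} deg")
```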
972.
Some neurons in the brains of freely moving rodents show special firing patterns. The firing of head direction cells (HDCs) and grid cells (GCs) is related to the moving direction and the distance travelled, respectively. These cells are therefore considered to play an important role in the rodents' path integration. To provide a bionic approach for a vehicle to achieve path integration, we present a biologically inspired model of path integration based on the firing characteristics of HDCs and GCs, and discuss its implementation in detail. The proposed model is realized in simulation, and its path integration performance is analyzed under different conditions. The simulations validate that the proposed model is effective and stable.
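As a rough illustration of what path integration accomplishes, the following minimal dead-reckoning sketch accumulates position from a heading (the quantity HDCs encode) and a travelled distance (related to GC firing). It is a plain kinematic integrator under these assumptions, not the paper's neural HDC/GC model.

```python
# Minimal dead-reckoning sketch of path integration: accumulate position from a
# heading estimate and a travelled distance per step. This is a plain kinematic
# integrator, not the paper's biologically inspired HDC/GC model.
import math

def integrate_path(steps):
    """steps: iterable of (heading_rad, distance) pairs; returns final (x, y)."""
    x = y = 0.0
    for heading, distance in steps:
        x += distance * math.cos(heading)
        y += distance * math.sin(heading)
    return x, y

# Example: move 1 m east, 1 m north, then 1 m back west.
print(integrate_path([(0.0, 1.0), (math.pi / 2, 1.0), (math.pi, 1.0)]))
# -> approximately (0.0, 1.0)
```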
973.
Identity-based signatures have been an important technique for lightweight authentication ever since they were proposed in 1984. Thereafter, identity-based signature schemes based on the integer factorization problem and the discrete logarithm problem were proposed one after another. Nevertheless, the rapid development of quantum computers makes them insecure. Recently, many efforts have been made to construct identity-based signatures over lattice assumptions that resist attacks in the quantum era. However, their efficiency is not very satisfactory. In this study, an efficient identity-based signature scheme is presented over the number theory research unit (NTRU) lattice assumption. The new scheme is more efficient than other lattice- and identity-based signature schemes. It is proven unforgeable against adaptively chosen-message attacks in the random oracle model under the hardness of the γ-shortest vector problem on the NTRU lattice.
974.
Rapid advances in image acquisition and storage technology underline the need for real-time algorithms that can solve large-scale image processing and computer-vision problems. The minimum s-t cut problem, a classical combinatorial optimization problem, is a prominent building block in many vision and imaging algorithms, such as video segmentation, co-segmentation, stereo vision, multi-view reconstruction, and surface fitting. Finding a real-time algorithm that optimally solves this problem is therefore of great importance. In this paper, we introduce Hochbaum's pseudoflow (HPF) algorithm, which optimally solves the minimum s-t cut problem, to computer vision. We compare the performance of HPF, in terms of execution time and memory utilization, with three leading published algorithms: (1) Goldberg and Tarjan's push-relabel (PRF); (2) Boykov and Kolmogorov's augmenting paths (BK); and (3) Goldberg's partial augment-relabel. While the common practice in computer vision is to use either the BK or the PRF algorithm to solve this problem, our results demonstrate that, in general, the HPF algorithm is more efficient and uses less memory than these three algorithms. This strongly suggests that HPF is a great option for many real-time computer-vision problems that require solving the minimum s-t cut problem.
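To illustrate the graph formulation that such segmentation methods build on, here is a tiny example that solves a minimum s-t cut on a made-up 2×2 pixel graph with networkx; it does not implement HPF, and all capacities are invented for illustration.

```python
# Tiny illustration of the minimum s-t cut formulation used in segmentation
# (solved here with networkx, not with the HPF algorithm discussed above).
# The 2x2 "image", its terminal capacities and neighbor weights are made up.
import networkx as nx

G = nx.DiGraph()
pixels = ["p00", "p01", "p10", "p11"]
fg_cap = {"p00": 9, "p01": 8, "p10": 1, "p11": 2}   # source (foreground) links
bg_cap = {"p00": 1, "p01": 2, "p10": 9, "p11": 8}   # sink (background) links

for p in pixels:
    G.add_edge("s", p, capacity=fg_cap[p])
    G.add_edge(p, "t", capacity=bg_cap[p])

# 4-connected neighbor edges with a uniform smoothness weight.
for a, b in [("p00", "p01"), ("p00", "p10"), ("p01", "p11"), ("p10", "p11")]:
    G.add_edge(a, b, capacity=3)
    G.add_edge(b, a, capacity=3)

cut_value, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")
print("cut value:", cut_value)
print("foreground pixels:", sorted(source_side - {"s"}))
print("background pixels:", sorted(sink_side - {"t"}))
```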
975.
Gradient vector flow (GVF) is a feature-preserving spatial diffusion of image gradients. It was introduced to overcome the limited capture range of traditional active contour segmentation. However, the original iterative solver for GVF, which uses Euler's method, converges very slowly, so many iterations are needed to achieve the desired capture range. Several groups have investigated the use of graphics processing units (GPUs) to accelerate the GVF computation; still, this does not reduce the number of iterations needed. Multigrid methods, on the other hand, have been shown to provide a much better capture range using considerably fewer iterations. However, non-GPU implementations of the multigrid method are not as fast as the Euler method executed on the GPU. In this paper, a novel GPU implementation of a multigrid solver for GVF, written in OpenCL, is presented. The results show that this implementation converges and provides a better capture range about 2–5 times faster than the conventional iterative GVF solver on the GPU.
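For reference, a sketch of the conventional Euler-iteration GVF solver (in the style of Xu and Prince) is given below in NumPy; this is the slow baseline that GPU and multigrid approaches accelerate. The regularization weight, time step and iteration count are illustrative choices.

```python
# Sketch of the conventional Euler-iteration GVF solver on the CPU, i.e. the
# slow baseline that the GPU multigrid solver above accelerates.
# mu, dt and the number of iterations are illustrative choices.
import numpy as np
from scipy.ndimage import laplace

def gvf(edge_map, mu=0.1, dt=0.25, iterations=200):
    """Return the GVF field (u, v) for a 2-D edge map."""
    fy, fx = np.gradient(edge_map.astype(float))
    mag2 = fx ** 2 + fy ** 2
    u, v = fx.copy(), fy.copy()
    for _ in range(iterations):
        u += dt * (mu * laplace(u) - (u - fx) * mag2)
        v += dt * (mu * laplace(v) - (v - fy) * mag2)
    return u, v

# Toy edge map: a single bright square on a dark background.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0
u, v = gvf(img)
print(u.shape, v.shape, float(np.abs(u).max()))
```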
976.
Multiview video coding (MVC) exploits mode decision, motion estimation and disparity estimation to achieve a high compression ratio, which results in extensive computational complexity. This paper presents an efficient mode decision approach for MVC using a macroblock (MB) position constraint model (MPCM). The proposed approach reduces the number of candidate modes by exploiting the mode correlation and the rate-distortion cost (RD cost) of previously encoded frames/views. Specifically, the mode correlations in both the temporal-spatial domain and the inter-view domain are modeled with the MPCM. The MPCM is then used to select the optimal prediction direction for the current encoding MB, and the inter mode is determined early in that direction. Experimental results show that the proposed method saves 86.03% of the encoding time compared with the exhaustive mode decision used in the joint multiview video coding reference software, with only a 0.077 dB loss in Bjontegaard delta peak signal-to-noise ratio (BDPSNR) and a 2.29% increase in the total Bjontegaard delta bit rate (BDBR), which is superior to state-of-the-art approaches.
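A heavily simplified sketch of the general idea, not the paper's MPCM: restrict the candidate inter modes of the current MB to those used by co-located MBs in previously encoded frames/views, falling back to the full search when their RD costs disagree. The mode names, the RD-cost threshold and the function below are hypothetical.

```python
# Heavily simplified sketch of the general idea (not the paper's MPCM): reuse
# modes of co-located MBs from previously encoded frames/views as the candidate
# set, and fall back to the full mode search when their RD costs disagree.
ALL_MODES = ["SKIP", "16x16", "16x8", "8x16", "8x8"]   # illustrative mode set

def candidate_modes(colocated, rd_cost_ratio_threshold=1.5):
    """colocated: list of (mode, rd_cost) from temporal/inter-view references."""
    if not colocated:
        return ALL_MODES
    costs = [c for _, c in colocated]
    if max(costs) > rd_cost_ratio_threshold * min(costs):
        return ALL_MODES                      # references disagree -> full search
    return sorted({m for m, _ in colocated})  # prune to the referenced modes

print(candidate_modes([("SKIP", 100.0), ("16x16", 120.0)]))  # ['16x16', 'SKIP']
print(candidate_modes([("SKIP", 100.0), ("8x8", 400.0)]))    # full mode list
```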
977.
We present a preliminary study of buffer overflow vulnerabilities in CUDA software running on GPUs. We show how an attacker can overrun a buffer to corrupt sensitive data or steer the execution flow by overwriting function pointers, e.g., by manipulating the virtual table of a C++ object. In view of the potential mass-market diffusion of GPU-accelerated software, this may be a major concern.
978.
Statistical detection of mass malware has been shown to be highly successful. However, this type of malware is less interesting to the cyber security officers of larger organizations, who are more concerned with detecting malware indicative of a targeted attack. Here we investigate the potential of statistically based approaches to detect such malware, using a malware family associated with a large number of targeted network intrusions. Our approach is complementary to the bulk of statistically based malware classifiers, which are typically based on measures of overall similarity between executable files. One problem with that approach is that a malicious executable that shares some, but limited, functionality with known malware is likely to be misclassified as benign. Here a new approach to malware classification is introduced that classifies programs based on their similarity with known malware subroutines. It is illustrated that malware and benign programs can share a substantial amount of code, implying that classification should be based on malicious subroutines that occur infrequently, or not at all, in benign programs. Various approaches to accomplishing this task are investigated, and a particularly simple one appears to be the most effective: compute the fraction of a program's subroutines that are similar to malware subroutines whose likes have not been found in a larger benign set. If this fraction exceeds around 1.5%, the corresponding program can be classified as malicious at a 1-in-1000 false alarm rate. It is further shown that combining a local and an overall similarity-based approach can lead to considerably better prediction, due to the relatively low correlation of their predictions.
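A minimal sketch of the classification rule described above, under simplifying assumptions: subroutines are represented by placeholder fingerprints and "similarity" is reduced to exact membership, whereas the paper uses a real similarity measure over disassembled code. The 1.5 % threshold follows the text.

```python
# Minimal sketch of the rule above: the fraction of a program's subroutines that
# match known malware subroutines not seen in a benign corpus, flagged when it
# exceeds roughly 1.5 %. Fingerprints here are toy placeholders.
def malicious_fraction(program_subs, malware_subs, benign_subs):
    suspicious = malware_subs - benign_subs          # malware-only subroutines
    if not program_subs:
        return 0.0
    hits = sum(1 for s in program_subs if s in suspicious)
    return hits / len(program_subs)

def classify(program_subs, malware_subs, benign_subs, threshold=0.015):
    return malicious_fraction(program_subs, malware_subs, benign_subs) >= threshold

# Toy example with hash-like fingerprints.
malware = {"a1", "b2", "c3"}
benign = {"b2", "d4", "e5"}
sample = ["a1", "d4", "e5", "f6"] * 10 + ["c3"]      # 41 subroutines, 11 suspicious
print(malicious_fraction(sample, malware, benign))   # about 0.27
print(classify(sample, malware, benign))             # True
```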
979.
The wide availability of affordable RGB-D sensors is changing the landscape of indoor scene analysis. Years of research on simultaneous localization and mapping (SLAM) have made it possible to merge multiple RGB-D images into a single point cloud and provide a 3D model of a complete indoor scene. However, these reconstructed models contain only geometric information and no semantic knowledge. Robot autonomy and the ability to carry out more complex tasks in unstructured environments can be greatly enhanced by endowing environment models with semantic knowledge. Towards this goal, we propose a novel approach for generating 3D semantic maps of an indoor scene. Our approach first creates a 3D reconstructed map from an RGB-D image sequence and then jointly infers the semantic object category and structural class for each point of the global map. Twelve object categories (e.g. walls, tables, chairs) and four structural classes (ground, structure, furniture and props) are labeled in the global map, so that both object and structure information are captured. To obtain the semantic information, we compute a semantic segmentation for each RGB-D image and merge the labeling results with a dense conditional random field (CRF). Different from previous techniques, we use temporal information and higher-order cliques to enforce label consistency across the per-image labeling results. Our experiments demonstrate that temporal information and higher-order cliques are significant for the semantic mapping procedure and can improve the precision of the semantic mapping results.
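As a much simpler baseline than the paper's dense CRF with temporal information and higher-order cliques, the sketch below fuses per-frame semantic labels for each 3D map point by majority voting; it is meant only to illustrate label fusion, not the proposed method.

```python
# Deliberately simple label-fusion baseline, not the paper's method: fuse
# per-frame semantic labels for each 3-D map point by majority vote. The paper
# instead merges labels with a dense CRF using temporal information and
# higher-order cliques, which enforces much stronger label consistency.
from collections import Counter, defaultdict

def fuse_labels(observations):
    """observations: iterable of (point_id, label) pairs from many RGB-D frames."""
    votes = defaultdict(Counter)
    for point_id, label in observations:
        votes[point_id][label] += 1
    return {pid: counter.most_common(1)[0][0] for pid, counter in votes.items()}

obs = [(0, "wall"), (0, "wall"), (0, "table"),
       (1, "chair"), (1, "chair"),
       (2, "floor")]
print(fuse_labels(obs))  # {0: 'wall', 1: 'chair', 2: 'floor'}
```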
980.
The range of applications for wireless sensor networks (WSNs) is increasing rapidly. One class of such applications is energy-aware wireless positioning systems for situation awareness. Localization deals with determining a target node's position in a WSN by analyzing the signals exchanged between nodes. The received signal strength indicator (RSSI) represents the ratio between the received signal power and a reference power, and is typically used to estimate distances between nodes. RSSI distance estimates are affected by many factors. This paper aims to enhance the accuracy of RSSI-based localization techniques in ZigBee networks by studying the status of the communication channel between two nodes. When the network nodes are exposed to high noise levels, position estimation accuracy deteriorates. A novel adaptive localization scheme is proposed: a two-state Markov model combined with a moving average is employed to detect unpredictable RSSI readings that would otherwise degrade the estimate. The proposed scheme achieves better estimation accuracy; for example, the estimation error was reduced from 11.7 m to just 3 m.
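A sketch of the basic RSSI-to-distance pipeline under common assumptions: noisy RSSI samples are smoothed with a moving average and then inverted through the standard log-distance path-loss model. The path-loss exponent and 1 m reference power are illustrative, and the paper's two-state Markov channel model for rejecting unpredictable readings is not reproduced here.

```python
# Sketch of the basic RSSI-to-distance pipeline: smooth noisy RSSI samples with
# a moving average, then invert the standard log-distance path-loss model.
# The path-loss exponent n and the 1-m reference power are illustrative values.
from collections import deque

def moving_average(samples, window=5):
    buf, out = deque(maxlen=window), []
    for s in samples:
        buf.append(s)
        out.append(sum(buf) / len(buf))
    return out

def rssi_to_distance(rssi_dbm, p_ref_dbm=-45.0, n=2.7, d0=1.0):
    """Invert RSSI = p_ref - 10*n*log10(d/d0) to get a distance estimate in metres."""
    return d0 * 10 ** ((p_ref_dbm - rssi_dbm) / (10.0 * n))

raw = [-62, -80, -63, -64, -61, -90, -62]      # dBm, with two noisy outliers
smoothed = moving_average(raw)
print([round(rssi_to_distance(r), 1) for r in smoothed])
```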