41.
Load balancing is a crucial factor in IPTV delivery networks: it aims to utilize resources efficiently, maximize throughput, and minimize the request rejection rate. The peer-service area is a recent architecture for IPTV delivery networks that overcomes the flaws of previous architectures, but it still suffers from load imbalance. This paper investigates the load-imbalance problem and augments the peer-service-area architecture to overcome it. To achieve load balancing over the proposed architecture, we suggest a new load-balancing algorithm that considers both the expected and the current load of both contents and servers. The algorithm consists of two stages: the first replicates contents according to their expected load, while the second performs content-aware request distribution. To test its effectiveness, we compared it with both the traditional Round Robin algorithm and the Cho algorithm. The experimental results show that the proposed algorithm outperforms the other two in terms of load balance, throughput, and request rejection rate.
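A minimal sketch of the second stage described above, content-aware request distribution. The function names, the replica map, and the least-relative-load policy are illustrative assumptions, not the paper's exact algorithm:

```python
# Hypothetical sketch of content-aware request distribution: each content
# item is replicated on a subset of servers (stage 1's output); a request is
# routed to the least-loaded replica holder, or rejected if all are full.

def route_request(content_id, replicas, server_load, server_capacity):
    """Return the chosen server for `content_id`, or None (rejection)."""
    candidates = [s for s in replicas[content_id]
                  if server_load[s] < server_capacity[s]]
    if not candidates:
        return None  # request rejected: all replica holders are saturated
    # Pick the server with the lowest relative load (current / capacity).
    target = min(candidates, key=lambda s: server_load[s] / server_capacity[s])
    server_load[target] += 1
    return target
```

Routing by relative rather than absolute load keeps heterogeneous servers balanced, which is one plausible way to account for "the current load of both contents and servers."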
42.
The Internet Archive's (IA) Wayback Machine is the largest and oldest public Web archive and has become a significant repository of our recent history and cultural heritage. Despite its importance, there has been little research about how it is discovered and used. Based on Web access logs, we analyze what users are looking for, why they come to IA, where they come from, and how pages link to IA. We find that users request English pages the most, followed by European languages. Most human users come to Web archives because they do not find the requested pages on the live Web: about 65% of the requested archived pages no longer exist on the live Web. We find that more than 82% of human sessions connect to the Wayback Machine via referrals from other Web sites, while only 15% of robot sessions have referrers. Most of the links (86%) from Web sites are to individual archived pages at specific points in time, and of those, 83% no longer exist on the live Web. Finally, we find that users who come from search engines browse more pages than users who come from external Web sites.
43.
This work presents four mathematical remarks drawn from an analysis of the interrelationships between the dependent and independent variables that control the measures of perimeter, floor area, wall surface area, and total surface area in regular forms of a given volume, including prismatic and pyramidal forms. The work consists of four parts, of which this first part presents the remarks for the isosceles triangular right prism. The first remark examines the effect of θ, the angle of the triangular base, on the total surface area. The second remark calculates the minimum total surface area in two cases, depending on whether the angle θ is constant or variable. The third remark calculates the walls ratio and the critical walls ratio. The last remark studies the conditions required for numerical equality in two cases: where the perimeter is equal to the area, and where the total surface area is equal to the volume.
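As a hedged illustration of the quantities the remarks analyze (the paper's exact parametrization in θ is not reproduced here), consider a right prism with base area A, base perimeter P, and height h, with the volume V held fixed:

```latex
V = A\,h, \qquad
S_{\text{walls}} = P\,h = \frac{P\,V}{A}, \qquad
S_{\text{total}} = 2A + \frac{P\,V}{A}, \qquad
\text{walls ratio} = \frac{S_{\text{walls}}}{S_{\text{total}}}.
```

For an isosceles triangular base with apex angle θ between the two equal sides of length a, we have A = ½·a²·sin θ and P = 2a + 2a·sin(θ/2), which is one way the angle θ enters the total surface area studied in the first two remarks.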
44.
Indium tin oxide‐coated thin films (200 nm) are deposited on glass substrates using the RF sputtering technique. Here, we investigate the influence of a new treatment technique, called "oil thermal annealing", on the nano‐structured indium tin oxide thin films at a fixed temperature (150 °C); the treatment improves the adhesion strength, electrical conductivity, and optical transmittance of the films. Oil thermal annealing is used to reduce inherent defects that may be introduced during thin-film preparation and cooling. The proposed technique is highly suitable for liquid crystal displays, solar cells, organic light-emitting diodes, and many other display‐related applications.
45.
The stoichiometric association constants, K, the thermodynamic association constants, KA, and other thermodynamic parameters such as ΔS°, ΔH°, and ΔG° for the association of Ca and Mg ions with benzoate, o-toluate, o-chlorobenzoate, and salicylate have been determined at 25 °C, 35 °C, and 45 °C in aqueous media. The ion-selective electrode technique was used to measure the Ca and Mg ion activities. The trend in the association behavior of the Ca and Mg aromatic salts could not be explained on the basis of the pKa of the parent organic acids, but could be explained by the trend of the Hammett function σ of these salts relative to the corresponding benzoate salt.
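The standard route from association constants at several temperatures to ΔH°, ΔS°, and ΔG° is the van't Hoff relation ln K = −ΔH°/(R·T) + ΔS°/R together with ΔG° = −R·T·ln K. A minimal sketch, with illustrative K values rather than the paper's data:

```python
import math

R = 8.314  # molar gas constant, J mol^-1 K^-1

def vant_hoff(temps_K, Ks):
    """Least-squares fit of ln K vs 1/T (van't Hoff plot).

    Returns (dH, dS): ΔH° in J/mol from the slope, ΔS° in J/(mol·K)
    from the intercept.
    """
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(K) for K in Ks]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    return -slope * R, intercept * R

def delta_G(K, T):
    """ΔG° = -R·T·ln K at absolute temperature T."""
    return -R * T * math.log(K)
```

With measurements at 25 °C, 35 °C, and 45 °C (298.15 K, 308.15 K, 318.15 K), three points determine the fit; ΔG° = ΔH° − T·ΔS° then serves as a consistency check.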
46.
Multimedia Tools and Applications - Nowadays, web users frequently explore multimedia contents to satisfy their information needs. The exploration approaches usually provide linear interaction...
47.
Security threats are crucial challenges that deter mixed reality (MR) communication in medical telepresence. This research aims to improve security by reducing the chance of various attacks occurring during real-time data transmission in surgical telepresence, while also reducing the running time of the cryptographic algorithm and preserving the quality of the media used. The proposed model consists of an enhanced RC6 algorithm combined with RC4: dynamic keys generated by RC6 are mixed with RC4 to create a dynamic S-box and permutation table, preventing various known attacks during real-time data transmission. For every new session a new key is created, so an attacker cannot exploit key reuse. The results obtained from our proposed system show better performance than the state of the art: resistance to the tested attacks is measured via the entropy, the Peak Signal-to-Noise Ratio (PSNR) of the encrypted image is lower than in the state of the art, and the structural similarity index (SSIM) is closer to zero. The execution time of the algorithm is decreased by an average of 20%. The proposed system focuses on preventing brute-force attacks during surgical-telepresence data transmission. The paper proposes a framework that enhances the security of data transmission during surgeries with acceptable performance.
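The key-dependent S-box / permutation table mentioned above is the core idea of the classic RC4 key-scheduling algorithm (KSA). A minimal sketch of that standard KSA follows; how the paper feeds RC6-derived session keys into it is an assumption not reproduced here:

```python
# Classic RC4 key-scheduling algorithm (KSA): builds a 256-entry,
# key-dependent permutation of 0..255. A fresh session key yields a fresh
# S-box, which is the property the abstract relies on against key reuse.

def rc4_ksa(key: bytes) -> list:
    """Return the RC4 S-box (a permutation of range(256)) for `key`."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]  # swap: this is what permutes the table
    return S
```

Note that raw RC4 has well-known weaknesses; the abstract's scheme uses it only as a component mixed with RC6-derived dynamic keys.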
48.
Most image watermarking (IW) schemes that hide information in the cover image's LSBs (Least Significant Bits) exhibit low robustness, while schemes that hide information in the MSBs (Most Significant Bits) have low imperceptibility, since the resulting cover-image distortion is revealed to the attacker. In this paper, a hybrid image watermarking scheme is proposed that integrates Robust Principal Component Analysis (R-PCA), the Discrete Tchebichef Transform (DTT), and Singular Value Decomposition (SVD). A grayscale watermark image is scrambled using a 2D Discrete Hyper-chaotic Encryption System (2D-DHCES) to boost robustness and security. The original cover image is decomposed into sparse components using R-PCA, the substantial component is further decomposed using the DTT, and the watermark is embedded in the cover image using SVD processing. In the DTT, a few coefficients hold most of the energy and provide an optimal sparse depiction of the substantial image edges and features, which supports proficient retrieval of the watermark image even after severe distortion-based channel attacks. The imperceptibility and robustness of the proposed method are corroborated against a variety of signal-processing channel attacks (salt-and-pepper noise, multi-directional shearing, cropping, frequency filtering, etc.). The visual and quantifiable outcomes reveal that the proposed image watermarking scheme is highly effective and delivers high tolerance against several image-processing and geometric attacks.
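A minimal sketch of the SVD embedding step only, a common pattern in SVD-based watermarking: the watermark's singular values are added to the host's with strength `alpha`. The R-PCA, DTT, and 2D-DHCES stages of the paper's full pipeline are omitted, and `alpha` and the function names are illustrative assumptions:

```python
import numpy as np

def svd_embed(host, watermark, alpha=0.05):
    """Embed `watermark` into `host` via singular values; returns
    (marked image, extraction keys)."""
    U, s, Vt = np.linalg.svd(host, full_matrices=False)
    _, sw, _ = np.linalg.svd(watermark, full_matrices=False)
    s_marked = s + alpha * sw          # perturb host singular values
    marked = U @ np.diag(s_marked) @ Vt
    return marked, (U, Vt, s)

def svd_extract(marked, keys, alpha=0.05):
    """Recover the watermark's singular values from the marked image."""
    U, Vt, s = keys
    s_marked = np.diag(U.T @ marked @ Vt.T)  # undo the host's U, Vt factors
    return (s_marked - s) / alpha
```

Singular values change little under common distortions (noise, mild cropping), which is why this family of schemes tends to be robust; the trade-off is that extraction here needs the side keys (U, Vt, s), i.e., it is non-blind.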
49.
50.
Many recent software engineering papers have examined duplicate issue reports. Thus far, duplicate reports have been considered a hindrance to developers and a drain on their resources. As a result, prior research in this area focuses on proposing automated approaches to accurately identify duplicate reports. However, no studies attempt to quantify the actual effort spent on identifying duplicate issue reports. In this paper, we empirically examine the effort needed to manually identify duplicate reports in four open source projects, i.e., Firefox, SeaMonkey, Bugzilla, and Eclipse-Platform. Our results show that: (i) more than 50% of duplicate reports are identified within half a day, and most are identified without any discussion and with the involvement of very few people; (ii) a classification model built using a set of factors extracted from duplicate issue reports classifies duplicates according to the effort needed to identify them, with a precision of 0.60 to 0.77, a recall of 0.23 to 0.96, and an ROC area of 0.68 to 0.80; and (iii) factors that capture developer awareness of a duplicate issue's peers (i.e., other duplicates of that issue) and the textual similarity of a new report to prior reports are the most influential factors in our models. Our findings highlight the need for effort-aware evaluation of approaches that identify duplicate issue reports, since identifying a considerable share of duplicate reports (over 50%) appears to be a relatively trivial task for developers. To better assist developers, research on identifying duplicate issue reports should put greater emphasis on assisting with effort-consuming duplicate issues.
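The effort threshold used in finding (i) above can be made concrete. A minimal sketch, where the half-day cutoff mirrors the abstract but the field names and labels are assumptions rather than the paper's schema:

```python
from datetime import datetime, timedelta

# Label a duplicate report by identification effort: "trivial" if the
# duplicate was flagged within half a day of being filed, otherwise
# "effort-consuming" (the abstract's over-50% finding concerns the former).

def effort_label(filed_at, identified_at, threshold=timedelta(hours=12)):
    """Classify a duplicate report's identification effort."""
    return ("trivial" if identified_at - filed_at <= threshold
            else "effort-consuming")
```

Such a label is what the abstract's classification model predicts from factors like peer awareness and textual similarity to prior reports.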
Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号