  Subscription full text   494 articles
  Free   33 articles
Electrical engineering   5 articles
Comprehensive/general   1 article
Chemical industry   124 articles
Metalworking   12 articles
Machinery and instrumentation   14 articles
Building science   18 articles
Mining engineering   3 articles
Energy and power   18 articles
Light industry   75 articles
Water resources engineering   7 articles
Petroleum and natural gas   3 articles
Radio and electronics   33 articles
General industrial technology   105 articles
Metallurgical industry   36 articles
Nuclear technology   1 article
Automation technology   72 articles
  2023   5 articles
  2022   13 articles
  2021   43 articles
  2020   21 articles
  2019   14 articles
  2018   27 articles
  2017   26 articles
  2016   19 articles
  2015   13 articles
  2014   28 articles
  2013   27 articles
  2012   40 articles
  2011   35 articles
  2010   27 articles
  2009   34 articles
  2008   19 articles
  2007   18 articles
  2006   13 articles
  2005   12 articles
  2004   12 articles
  2003   8 articles
  2002   8 articles
  2001   3 articles
  2000   3 articles
  1999   7 articles
  1998   10 articles
  1997   7 articles
  1996   8 articles
  1995   7 articles
  1994   4 articles
  1993   1 article
  1992   3 articles
  1991   4 articles
  1990   1 article
  1989   2 articles
  1986   1 article
  1982   1 article
  1979   1 article
  1977   1 article
  1974   1 article
Sort order:   527 results found (search time: 15 ms)
111.
112.
Solving vector consensus with a wormhole   (Total citations: 1; self-citations: 0; citations by others: 1)
This paper presents a solution to the vector consensus problem for Byzantine asynchronous systems augmented with wormholes. Wormholes prefigure a hybrid distributed system model, embodying the notion of an enhanced part of the system with "good" properties otherwise not guaranteed by the "normal" weak environment. A protocol built for this type of system runs in the asynchronous part, where f out of n ≥ 3f + 1 processes might be corrupted by malicious adversaries. However, sporadically, processes can rely on the services provided by the wormhole for the correct execution of simple operations. One of the nice features of this setting is that it is possible to keep the protocol completely time-free and, in addition, to circumvent the FLP impossibility result by hiding all time-related assumptions in the wormhole. Furthermore, from a performance perspective, it leads to the design of a protocol with good time complexity.
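To make the resilience bound concrete, the following minimal Python sketch (an illustration only, not the protocol from the paper; all names are hypothetical) shows how n ≥ 3f + 1 limits the number of tolerated faulty processes and what a vector-consensus decision built from at least n - f proposals looks like.

```python
# Illustrative sketch only: the Byzantine resilience bound n >= 3f + 1 and the
# shape of a vector-consensus decision, where the decided vector holds a
# proposal for at least n - f processes and a "no value" marker for the rest.

NO_VALUE = None  # stands in for the usual "bottom" entry


def max_faults(n: int) -> int:
    """Largest f such that n >= 3f + 1 still holds."""
    return (n - 1) // 3


def decide_vector(proposals: dict, n: int, f: int) -> list:
    """Assemble a decision vector from the proposals actually received.

    `proposals` maps a process id (0..n-1) to the value it proposed; ids
    missing from the dict model processes whose value never arrived.
    """
    if n < 3 * f + 1:
        raise ValueError("need n >= 3f + 1 to tolerate f Byzantine processes")
    if len(proposals) < n - f:
        raise ValueError("need proposals from at least n - f processes")
    return [proposals.get(i, NO_VALUE) for i in range(n)]


if __name__ == "__main__":
    n = 4                                  # smallest system tolerating one fault
    f = max_faults(n)                      # -> 1
    received = {0: "a", 1: "b", 2: "c"}    # process 3 stayed silent
    print(f, decide_vector(received, n, f))   # 1 ['a', 'b', 'c', None]
```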
113.
An adaptive boundary element scheme is developed using the concept of local reanalysis and h-hierarchical functions for the construction of near-optimal computational models. The use of local reanalysis in the error estimation guarantees the reliability of the modelling process, while the use of quadratic and quartic h-hierarchical elements guarantees the efficiency of the adaptive algorithm. The technique is developed for the elastic analysis of two-dimensional models. Numerical examples show the rapid convergence of the results with a few refinement steps.
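The adaptive strategy can be pictured as the usual estimate-mark-refine loop; the sketch below is a generic Python illustration under assumed callback interfaces (solve, estimate_error and refine are hypothetical), not the authors' boundary element code.

```python
# Generic adaptive-refinement loop of the kind described above (a sketch under
# assumed interfaces): estimate a local error indicator for each boundary
# element, refine/enrich the worst ones, and repeat until a tolerance is met.

def adaptive_solve(model, solve, estimate_error, refine, tol=1e-3, max_steps=10):
    """Hypothetical callbacks:
    solve(model) -> solution
    estimate_error(model, solution) -> {element_id: error_indicator}
    refine(model, element_ids) -> refined model (e.g. higher hierarchic order)
    """
    solution = solve(model)
    for step in range(max_steps):
        errors = estimate_error(model, solution)     # local reanalysis step
        worst = max(errors.values())
        if worst < tol:
            break
        # mark only the elements carrying most of the estimated error
        marked = [e for e, err in errors.items() if err > 0.5 * worst]
        model = refine(model, marked)
        solution = solve(model)
    return model, solution
```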
114.
The goal of this research is to provide systems support that allows fine grain, data parallel code to execute efficiently on much coarser grain multiprocessors. The task of writing parallel applications is simplified by allowing the programmer to assume a number of processors convenient to the algorithm being implemented. This paper describes and evaluates a runtime approach that efficiently manages thousands of virtual processors per actual processor. The limits of using user-level threads as fine grain virtual processors are identified. Key techniques used are tight integration and specialization of scheduling, communication, optimized context switching, and fine-tuned stack management. A prototype of this runtime approach is evaluated by comparing implementations of three problems, a smoothing kernel of a thin-layer Navier–Stokes code, a five point stencil problem, and a block bordered system of linear equations, on an Intel Paragon multiprocessor and on a network of DEC Alpha workstations. The additional cost relative to an efficient manually contracted code can be as low as 15% for granularities of 50 floating point operations per virtual processor and is typically 5–20% for granularities of about 100 floating point operations per virtual processor. The overhead is analyzed in detail to show the costs of scheduling, communication, context switching, reduced memory performance, and ensuring data consistency. The implementation and analysis indicate that fine grain code can be efficiently executed on a coarse grain multiprocessor using very lightweight, specialized threads.
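The reported overheads are consistent with a simple fixed-cost model of per-virtual-processor bookkeeping (scheduling, context switch, stack setup); the short Python sketch below is a back-of-envelope illustration of that model, not a result taken from the paper.

```python
# Back-of-envelope model (not from the paper): if each virtual processor incurs
# a roughly fixed number of bookkeeping operations per scheduling round, the
# relative overhead is that fixed cost divided by the useful work (granularity)
# per virtual processor.

def relative_overhead(useful_flops: float, fixed_cost_flops: float) -> float:
    """Overhead of the threaded code relative to a manually contracted code."""
    return fixed_cost_flops / useful_flops

# Fit the fixed cost to the reported ~15% overhead at 50 flops per virtual processor.
fixed = 0.15 * 50          # about 7.5 flop-equivalents of bookkeeping
for g in (50, 100, 200):
    print(g, f"{relative_overhead(g, fixed):.0%}")
# 50 -> 15%, 100 -> 8%, 200 -> 4%: consistent with the quoted 5-20% range for
# granularities of about 100 floating point operations.
```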
115.
This paper presents the design and prototype implementation of the SELFNET fifth-generation (5G) mobile edge infrastructure. In line with the current and emerging 5G architectural principles, visions, and standards, the proposed infrastructure is established primarily based on a mobile edge computing paradigm. It leverages cloud computing, software-defined networking, and network function virtualization as core enabling technologies. Several technical solutions and options have been analyzed. As a result, a novel portable 5G infrastructure testbed has been prototyped to enable the preliminary testing of the integrated key technologies and to provide a realistic execution platform for further investigating and evaluating software-defined networking– and network function virtualization–based application scenarios in 5G networks.
116.
Recent research efforts have shown that wireless networks can benefit from network coding (NC) technology in terms of bandwidth, robustness to packet losses, delay and energy consumption. However, NC-enabled wireless networks are susceptible to a severe security threat, known as the data pollution attack, where a malicious node injects into the network polluted packets that prevent the destination nodes from decoding correctly. Because intermediate nodes recode packets, as the core principle of NC dictates, polluted packets propagate quickly into other packets and corrupt large numbers of legitimate packets, wasting network resources. Hence, considerable research effort has been devoted to schemes against data pollution attacks. Homomorphic MAC-based schemes are a promising solution against data pollution attacks; however, most of them are susceptible to a new type of pollution attack, called the tag pollution attack, where an adversary node randomly modifies the tags appended to the end of the transmitted packets. Therefore, in this paper, we propose an efficient homomorphic message authentication code-based scheme, called HMAC, providing resistance against both data pollution attacks and tag pollution attacks in NC-enabled wireless networks. Our proposed scheme makes use of three types of homomorphic tags (i.e., MACs, D-MACs and one signature), which are appended to the end of the coded packet. Our results show that the proposed HMAC scheme is more efficient than other competitive tag-pollution-immune schemes in terms of complexity, communication overhead and key storage overhead.
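The defence hinges on the linearity of the tags. The toy Python example below illustrates the homomorphic property for a simple inner-product MAC over a prime field; it is not the HMAC construction proposed in the paper, and all parameters are invented for illustration.

```python
# Toy illustration of the homomorphic property that MAC-based pollution defences
# rely on: with a shared key k and prime p, tag(v) = sum(k_i * v_i) mod p is
# linear, so the tag of any coded (linear) combination of packets equals the
# same combination of their tags, while a polluted packet fails the check.

import random

p = 2_147_483_647                              # prime field modulus (2**31 - 1)
k = [random.randrange(1, p) for _ in range(4)] # shared secret key, one element per symbol

def tag(v):
    return sum(ki * vi for ki, vi in zip(k, v)) % p

def recode(a, x, b, y):
    """What an intermediate node does: form a*x + b*y symbol-wise over GF(p)."""
    return [(a * xi + b * yi) % p for xi, yi in zip(x, y)]

x, y = [3, 1, 4, 1], [2, 7, 1, 8]
a, b = 5, 9
coded = recode(a, x, b, y)

# The recoded packet can be checked using only the tags of the original packets:
assert tag(coded) == (a * tag(x) + b * tag(y)) % p

polluted = coded[:]
polluted[0] = (polluted[0] + 1) % p            # adversary tampers with one symbol
assert tag(polluted) != (a * tag(x) + b * tag(y)) % p   # pollution detected
```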
117.
118.
Murine models of osteoarthritis (OA) are increasingly important for understanding pathogenesis and for testing new therapeutic approaches. Their translational potential is, however, limited by the small size of mouse limbs, whose articular cartilage requires much higher resolution to evaluate than clinical imaging tools provide. In experimental models, this tissue has been predominantly assessed by time-consuming histopathology using standardized semi-quantitative scoring systems. This study aimed to develop a novel imaging method for 3-dimensional (3D) histology of mouse articular cartilage, using a robotic system, termed here the "3D histocutter", which automatically sections tissue samples and serially acquires fluorescence microscopy images of each section. Tibiae dissected from C57Bl/6 mice, either naïve or OA-induced by surgical destabilization of the medial meniscus (DMM), were imaged using the 3D histocutter by exploiting tissue autofluorescence. The accuracy of 3D imaging was validated against ex vivo contrast-enhanced micro-CT, and sensitivity to lesion detection was compared with conventional histology. Reconstructions of tibiae obtained from 3D histocutter serial sections showed excellent agreement with contrast-enhanced micro-CT reconstructions. Furthermore, osteoarthritic features, including articular cartilage loss and osteophytes, were also visualized. In-house developed software automatically evaluated articular cartilage morphology, eliminating the subjectivity associated with semi-quantitative scoring and considerably increasing analysis throughput. The novelty of this methodology lies not only in the increased throughput of imaging and evaluating mouse articular cartilage morphology starting from conventionally embedded samples, but also in the ability to add the third dimension to conventional histomorphometry, which might improve disease assessment in the model.
119.
120.
It is well known that the absorption coefficient of diamond in the two-phonon region is constant; for example, at 2000 cm⁻¹ the absorption coefficient is 12.3 cm⁻¹. This means that the infrared absorbance in the two-phonon region is proportional to the thickness of the sample, which is generally used as a standard to normalize the infrared absorption spectra of diamond samples according to their thickness. This holds for natural and HPHT synthetic single-crystal diamond. However, for polycrystalline or nanocrystalline CVD diamond films we found that the situation may be different. For high-quality thick CVD diamond films (thickness > 150 μm), the infrared absorbance in the two-phonon region is proportional to thickness, whereas CVD diamond films of equal thickness but different quality show variable thickness-normalized absorbance in the two-phonon region. Our investigation of this observation primarily indicates that the grain size of CVD diamond films influences the two-phonon absorption. In this work, we present this new result and discuss the mechanism of this phenomenon in the light of the growth mechanism of CVD diamond.
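As an illustration of the thickness normalization mentioned above, the short Python sketch below converts a measured two-phonon absorbance into an effective thickness using α = 12.3 cm⁻¹ at 2000 cm⁻¹. It assumes decadic absorbance (A = αt / ln 10); the convention may differ between instruments, so treat it as a sketch rather than a prescription.

```python
# Sketch of thickness normalisation via the two-phonon absorption of diamond,
# assuming decadic absorbance A = alpha * t / ln(10).

import math

ALPHA_2000 = 12.3          # cm^-1, two-phonon absorption coefficient at 2000 cm^-1

def effective_thickness_cm(absorbance_2000: float) -> float:
    """Thickness implied by the absorbance measured at 2000 cm^-1."""
    return absorbance_2000 * math.log(10) / ALPHA_2000

def normalise(spectrum_absorbance, absorbance_2000):
    """Rescale a whole absorbance spectrum to absorption coefficients (cm^-1)."""
    t = effective_thickness_cm(absorbance_2000)
    return [a * math.log(10) / t for a in spectrum_absorbance]

# An ideal 300 um (0.03 cm) sample would show A ~ 12.3 * 0.03 / ln(10) ~ 0.16,
# so an absorbance of 0.16 at 2000 cm^-1 maps back to roughly 300 um:
print(f"{effective_thickness_cm(0.16) * 1e4:.0f} um")
```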