795 search results (search time: 31 ms)
1.
We explore a truncation error criterion to steer adaptive step length refinement and coarsening in incremental-iterative path following procedures, applied to problems in large-deformation structural mechanics. Elaborating on ideas proposed by Bergan and collaborators in the 1970s, we first describe an easily computable scalar stiffness parameter whose sign and rate of change provide reliable information on the local behavior and complexity of the equilibrium path. We then derive a simple scaling law that adaptively adjusts the length of the next step based on the rate of change of the stiffness parameter at previous points on the path. We show that this scaling is equivalent to keeping a local truncation error constant in each step. We demonstrate with numerical examples that our adaptive method follows a path with a significantly reduced number of points compared to an analysis with uniform step length of the same fidelity level. A comparison with Abaqus illustrates that the truncation error criterion effectively concentrates points around the smallest-scale features of the path, which is generally not possible with automatic incrementation solely based on local convergence properties.
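The abstract gives the scaling law only in words; the sketch below is a minimal illustration of the idea, updating the next step length from the rate of change of a scalar stiffness parameter so that the estimated truncation error stays roughly constant. The names `kappa_ref`, `dl_min`, and `dl_max` are illustrative tuning values, not from the paper.

```python
def next_step_length(dl_prev, s_curr, s_prev, kappa_ref=0.05,
                     dl_min=1e-4, dl_max=1.0):
    """Scale the next arc-length increment so the local truncation error
    (taken as proportional to the rate of change of the stiffness
    parameter) stays roughly constant between steps."""
    kappa = abs(s_curr - s_prev) / dl_prev   # rate of change of stiffness
    if kappa < 1e-12:                        # path is locally flat/linear
        return dl_max                        # coarsen as much as allowed
    dl_next = dl_prev * (kappa_ref / kappa) ** 0.5
    return min(max(dl_next, dl_min), dl_max)
```

A rapidly changing stiffness parameter (a sharp feature of the path) shrinks the step; a flat stretch lets it grow back toward `dl_max`.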
2.
The smooth fractionator
A modification of the general fractionator sampling technique called the smooth fractionator is presented. It may be used in almost every situation in which sampling is performed from distinct, uniquely defined items; often these are physically separated items or clusters such as pieces, blocks, slabs, sections, etc. To each item is associated a 'guesstimate', an associated variable with a more-or-less close – and possibly biased – relationship to the content of the item. The smooth fractionator is systematic sampling among the items arranged according to the guesstimates in a unique, symmetric sequence with one peak and minimal jumps. The smooth fractionator is both very simple to implement and so efficient that it should probably always be used unless the natural sequence of the sampling items is equally smooth. So far, there is no theory for predicting the efficiency of smooth fractionator designs in general, so their properties are illustrated with a range of real and simulated examples. At the cost of a slightly more elaborate sampling scheme, it is, however, always possible to obtain an unbiased estimate of the real precision and of some of the variance components. The only real practical problem for obtaining a high precision with the smooth fractionator is specimen inhomogeneity, but that is detectable at almost no extra cost. With careful designs and sample sizes of about 10, the sampling variation of the primary, smooth fractionator sampling step may in practice often be small enough to be ignored.
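The 'smooth' arrangement described above can be sketched as follows: sort the items by their guesstimates, ascend through every other item, then descend through the rest, giving a unimodal sequence with one peak and minimal jumps, from which a systematic sample is taken.

```python
def smooth_arrangement(guesstimates):
    """Return item indices in a unimodal ('smooth') order: ascend through
    every other sorted value, reach the peak, then descend through the rest."""
    order = sorted(range(len(guesstimates)), key=lambda i: guesstimates[i])
    up = order[0::2]          # 1st, 3rd, 5th smallest ...
    down = order[1::2][::-1]  # ... then back down through the rest
    return up + down

def systematic_sample(items, period, start):
    """Systematic sampling: every `period`-th item beginning at `start`."""
    return items[start::period]
```

For guesstimates `[5, 1, 4, 2, 3]` the arranged values read `1, 3, 5, 4, 2`: one peak, small jumps between neighbours.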
3.
A method is proposed for jointly computing the velocity during the engine cutoff phase of a trajectory from telemetered overload and pitch-angle data combined with optically measured coordinates. The accuracy is analyzed with respect to the influence of the residual error from three aspects: systematic errors in the telemetry results, random errors, and the truncation error of the numerical integration; an algorithm for the velocity accuracy is also given.
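As a generic illustration of the numerical-integration truncation error mentioned above, the sketch below integrates sampled acceleration to velocity with the trapezoidal rule, whose per-step truncation error is O(dt^3); the test profile and all names are illustrative, not from the paper.

```python
import math

def integrate_velocity(accel, dt, v0=0.0):
    """Trapezoidal integration of uniformly sampled acceleration to
    velocity.  The global truncation error is O(dt^2), so halving dt
    gives a quick empirical error check."""
    v = v0
    for a0, a1 in zip(accel[:-1], accel[1:]):
        v += 0.5 * (a0 + a1) * dt
    return v

# check on a known profile a(t) = sin(t) over [0, 1]; exact integral: 1 - cos(1)
dt = 0.01
a = [math.sin(i * dt) for i in range(101)]
v = integrate_velocity(a, dt)
```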
4.
ROI-based Watermarking Scheme for JPEG 2000
A new region of interest (ROI)-based watermarking method for JPEG 2000 is presented. The watermark is embedded into the host image based on the characteristics of the ROI to protect rights to the images. The scheme integrates the watermarking process with the JPEG 2000 compression procedure. Experimental results demonstrate that the proposed watermarking technique successfully survives JPEG 2000 compression, progressive transmission, and principal attacks.
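The abstract does not specify the embedding rule, so the sketch below uses generic quantization index modulation (QIM) on a single transform coefficient to illustrate how one bit can be embedded and blindly extracted; the quantization step `q` and the choice of QIM are assumptions, not the paper's scheme.

```python
def embed_bit(coeff, bit, q=8.0):
    """Embed one watermark bit in a transform coefficient by shifting it
    a quarter-step above (bit 1) or below (bit 0) the nearest lattice
    point -- generic QIM, for illustration only."""
    base = q * round(coeff / q)
    return base + (q / 4.0 if bit else -q / 4.0)

def extract_bit(coeff, q=8.0):
    """Blind extraction: an offset above the lattice point decodes as 1."""
    return 1 if (coeff - q * round(coeff / q)) > 0 else 0
```

The quarter-step offset leaves a margin of q/4, so the bit survives perturbations smaller than that, which is the kind of robustness a compression-surviving watermark needs.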
5.
As one of the well-known block-based image coding schemes, block truncation coding (BTC) has also been applied in digital watermarking. Previous BTC-based watermarking or hiding schemes usually embed secret data by modifying the BTC encoding stage or the BTC-compressed data, yielding a watermarked image of poorer quality than the BTC-compressed version. This paper presents a new oblivious image watermarking scheme that exploits BTC bitmaps. Unlike traditional schemes, our approach does not actually perform BTC compression on images during the embedding process; instead, it uses the parity of the number of horizontal edge transitions in each BTC bitmap to guide the watermark embedding and extraction processes. The embedding process starts by partitioning the original cover image into non-overlapping 4×4 blocks and performing BTC on each block to obtain its bitmap. One watermark bit is embedded in each block by modifying at most three pixel values so that the parity of the number of horizontal edge transitions in the bitmap of the modified block equals the embedded watermark bit. In the extraction stage, the suspicious image is first partitioned into non-overlapping 4×4 blocks and BTC is performed on each block to obtain its bitmap. Then, by checking the parity of the number of horizontal edge transitions in the bitmap, one watermark bit is extracted from each block. Experimental results demonstrate that the proposed watermarking scheme is fragile to various image processing operations while preserving transparency very well.
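The bitmap-parity mechanism can be sketched directly from the description above; the pixel-modification step used for embedding is omitted for brevity, and the simple mean threshold for the bitmap is an assumption.

```python
def btc_bitmap(block):
    """BTC bitmap of a 4x4 block: 1 where the pixel is at or above the
    block mean (a common BTC thresholding choice, assumed here)."""
    mean = sum(sum(row) for row in block) / 16.0
    return [[1 if p >= mean else 0 for p in row] for row in block]

def h_transitions(bitmap):
    """Count horizontal edge transitions: adjacent bits in a row differing."""
    return sum(a != b for row in bitmap for a, b in zip(row, row[1:]))

def extract_bit(block):
    """The watermark bit carried by a block is the parity of its
    horizontal-transition count."""
    return h_transitions(btc_bitmap(block)) % 2
```

Embedding would flip at most three pixels of the block until `extract_bit` returns the desired bit; any later processing that disturbs the bitmap flips the parity, which is what makes the scheme fragile.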
6.
Due to the tremendous increase in internet traffic, backbone routers must be able to forward massive numbers of incoming packets at several gigabits per second. IP address lookup is one of the most challenging tasks in high-speed packet forwarding. Some high-end routers have been implemented with hardware parallelism using ternary content addressable memory (TCAM). However, TCAM is much more expensive in terms of circuit complexity as well as power consumption. Therefore, efficient algorithmic solutions implemented on network processors are essential as low-cost alternatives.
Among the state-of-the-art algorithms for IP address lookup, a binary search based on a balanced tree is effective in providing a low-cost solution. In order to construct a balanced search tree, prefixes with nesting relationships must be converted into completely disjoint prefixes. A leaf-pushing technique is very useful for eliminating the nesting relationship among prefixes [V. Srinivasan, G. Varghese, Fast address lookups using controlled prefix expansion, ACM Transactions on Computer Systems 17 (1) (1999) 1-40]. However, it creates duplicate prefixes, thus expanding the search tree.
This paper proposes an efficient IP address lookup algorithm based on a small balanced tree with entry reduction. The leaf-pushing technique is used to create completely disjoint entries. Among the leaf-pushed prefixes there are numerous pairs of adjacent prefixes with similar prefix strings and identical output ports. The number of entries can be significantly reduced by a new entry-reduction method that merges such pairs. After sorting the reduced disjoint entries, a small balanced tree is constructed with a very small node size. On this small balanced tree, a plain binary search can be applied effectively to the address lookup problem. In addition, we propose a new multi-way search algorithm that improves on binary search for IPv4 address lookup.
As a result, the proposed algorithms offer excellent lookup performance along with reduced memory requirements. They also scale well to large routing tables and to the address migration toward IPv6. Using a variety of IPv4 and IPv6 routing data, the performance evaluation results demonstrate that the proposed algorithms outperform other binary-search-based algorithms in terms of lookup speed, memory requirement, and scalability with the growth of entries and with IPv6.
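Once the prefixes have been leaf-pushed into completely disjoint entries, each entry covers a contiguous address range, so a plain binary search applies. The sketch below shows this idea for IPv4 with a handful of hypothetical prefixes; it illustrates only the disjoint-range search, not the paper's entry reduction or multi-way variant.

```python
import bisect

def build_table(prefixes):
    """prefixes: (value, length, port) triples, assumed already
    leaf-pushed into disjoint prefixes.  A prefix of value v and
    length l covers addresses [v << (32-l), ((v+1) << (32-l)) - 1]."""
    table = sorted((v << (32 - l), ((v + 1) << (32 - l)) - 1, port)
                   for v, l, port in prefixes)
    starts = [lo for lo, _, _ in table]   # search keys
    return starts, table

def lookup(starts, table, addr):
    """Binary-search the disjoint ranges for the one containing addr."""
    i = bisect.bisect_right(starts, addr) - 1
    if i >= 0 and table[i][0] <= addr <= table[i][1]:
        return table[i][2]
    return None
```

Because the ranges are disjoint, at most one candidate needs checking after the binary search, which is exactly what makes the balanced-tree approach work.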
7.
The purpose of this paper is twofold. First, the concept of the mean potentiality approach (MPA) is developed, and an algorithm based on this new approach is proposed to obtain a balanced solution of a fuzzy-soft-set-based decision-making problem. Second, a parameter reduction procedure based on relational algebra, aided by the balanced algorithm of the mean potentiality approach, is used to reduce the choice parameter set within fuzzy soft set theory, and it is validated on the medical problem of diagnosing a disease from a myriad of symptoms. Moreover, the feasibility of the proposed method is demonstrated by comparison with the Analytical Hierarchy Process (AHP), the naive Bayes classification method, and Feng's method.
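The abstract does not define the mean potentiality approach in detail. The sketch below shows only the standard fuzzy-soft-set choice-value computation, and reads the 'mean potentiality' as the mean of all membership grades; that reading is an assumption, and the paper's exact definition may differ.

```python
def choice_values(fss):
    """fss: dict mapping object -> {parameter: membership grade in [0,1]}.
    The choice value of an object is the sum of its grades (standard
    fuzzy-soft-set decision making)."""
    return {obj: sum(grades.values()) for obj, grades in fss.items()}

def mean_potentiality(fss):
    """Taken here as the mean of all membership grades -- an
    illustrative reading, not necessarily the paper's definition."""
    grades = [g for row in fss.values() for g in row.values()]
    return sum(grades) / len(grades)
```

A 'balanced' choice, in this reading, would be an object whose grades sit closest to the mean potentiality rather than the one with the maximal choice value.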
8.
In this paper, three new Gramians are introduced: limited-time interval impulse response Gramians (LTIRG), generalized limited-time Gramians (GLTG), and generalized limited-time impulse response Gramians (GLTIRG). GLTG and GLTIRG are applicable both to unstable systems and to systems that have eigenvalues of opposite polarity and equal magnitude. These Gramians are used to develop model reduction algorithms for linear time-invariant continuous-time single-input single-output (SISO) systems. In GLTIRG- and GLTG-based model reduction, the standard time-limited Gramians are generalized to unstable systems by transforming the original system into a new system, which requires the solution of two Riccati equations. Two numerical examples are included to illustrate the proposed methods, and the results are compared with standard techniques.
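For intuition, the time-limited controllability Gramian underlying these variants is W(T) = ∫_0^T e^{At} B B^T e^{A^T t} dt. The sketch below evaluates the scalar case by quadrature, which can be checked against the closed form b^2 (e^{2aT} - 1) / (2a); it illustrates only the basic definition, not the paper's generalized or impulse-response variants.

```python
import math

def limited_time_gramian_scalar(a, b, T, n=2000):
    """Time-limited controllability Gramian of the scalar system
    dx/dt = a*x + b*u over [0, T]:  W(T) = integral of b^2 * e^(2at) dt,
    computed with the trapezoidal rule (n subintervals)."""
    dt = T / n
    vals = [b * b * math.exp(2 * a * i * dt) for i in range(n + 1)]
    return dt * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
```

For matrix-valued A and B the same integral is evaluated with matrix exponentials or by solving a differential Lyapunov equation, which is where the limited-time machinery of the paper comes in.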
9.
Combined with a digital borehole photography system and in-situ statistics on the joints and fissures of both the ore body and the surrounding rock, a 2D discrete model was constructed using UDEC. The changes in the stress field and displacement field of different sublevel stoping systems were also studied. Changes in the settlement pattern of the overlying rock strata have been analyzed and validated against in-situ monitoring data. The results show that in the caving process there is an obvious delay and jump in the displacement of the overlying rock strata over time, and a stable arch can form during caving, which leads to hidden goafs. Disturbed by the mining activity, a stress increase occurred in both the hanging wall and the footwall, showing a hump-shaped distribution pattern. The comparison between simulation results and in-situ monitoring shows that land subsidence follows a slow-development, sudden-failure, slow-development cycle that eventually reaches a stable state; this pattern confirms the existence of the balanced arch and hidden goafs.
10.
Various dimension-reduction methods have been proposed for the efficient solution of large dynamical systems. Following the solution strategy of the nonlinear Galerkin method, we decompose a large dynamical system into three subsystems: a "slow" subsystem, an "intermediate-speed" subsystem, and a "fast" subsystem. On this basis, an improved nonlinear Galerkin method is proposed in which the contribution of the intermediate-speed subsystem is fed into the slow subsystem during numerical integration. The effectiveness of the new method is then illustrated with a 5-degree-of-freedom forced vibration system with cubic nonlinearity.
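The splitting idea can be illustrated with a toy two-mode system in which the fast mode is slaved quasi-statically and its contribution is fed into the slow equation, in the spirit of nonlinear Galerkin reduction. The equations and parameters below are illustrative, not the paper's 5-DOF example.

```python
def step_full(u, v, eps=0.1, lam=0.01, dt=1e-4):
    """One Euler step of a toy slow/fast pair:
    u' = -u + eps*v   (slow mode),
    lam*v' = -v + u^2 (fast mode, relaxation time lam)."""
    du = -u + eps * v
    dv = (-v + u * u) / lam
    return u + dt * du, v + dt * dv

def step_reduced(u, eps=0.1, dt=1e-4):
    """Reduced model: the fast mode is slaved quasi-statically
    (v ~ u^2) and its contribution is fed into the slow equation."""
    return u + dt * (-u + eps * u * u)
```

Integrating both from the same initial state, the reduced model tracks the slow variable of the full system to within O(lam), while integrating only the slow subsystem.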
Copyright©北京勤云科技发展有限公司  京ICP备09084417号