By access type:
  Fee-based full text   8450 articles
  Free   1628 articles
  Free (domestic)   1314 articles
By subject area:
  Electrical engineering   663 articles
  General   1396 articles
  Chemical industry   531 articles
  Metalworking   59 articles
  Machinery and instrumentation   435 articles
  Building science   439 articles
  Mining engineering   118 articles
  Energy and power   216 articles
  Light industry   682 articles
  Water conservancy engineering   150 articles
  Petroleum and natural gas   143 articles
  Weapons industry   89 articles
  Radio   990 articles
  General industrial technology   731 articles
  Metallurgical industry   159 articles
  Atomic energy technology   69 articles
  Automation technology   4522 articles
By publication year:
  2024   83 articles
  2023   166 articles
  2022   331 articles
  2021   314 articles
  2020   369 articles
  2019   376 articles
  2018   365 articles
  2017   426 articles
  2016   450 articles
  2015   468 articles
  2014   589 articles
  2013   703 articles
  2012   760 articles
  2011   795 articles
  2010   577 articles
  2009   562 articles
  2008   592 articles
  2007   640 articles
  2006   557 articles
  2005   433 articles
  2004   384 articles
  2003   260 articles
  2002   236 articles
  2001   182 articles
  2000   149 articles
  1999   110 articles
  1998   99 articles
  1997   84 articles
  1996   71 articles
  1995   49 articles
  1994   50 articles
  1993   35 articles
  1992   30 articles
  1991   26 articles
  1990   13 articles
  1989   12 articles
  1988   8 articles
  1987   5 articles
  1986   10 articles
  1985   5 articles
  1984   5 articles
  1983   3 articles
  1981   2 articles
  1980   1 article
  1976   1 article
  1974   1 article
  1963   2 articles
  1959   2 articles
  1951   1 article
A total of 10,000 query results (search time: 0 ms).
11.
Based on the principle of thermal superposition, this paper studies the thermal coupling effect of multiple discretely distributed concentrated heat sources in a solid-state power module. It is shown that, under forced convection, the temperature field computed with the superposition principle agrees closely with a full-field numerical simulation of the system, so the principle is effective for analysing thermal coupling effects.
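For readers unfamiliar with the idea, the superposition approximation referred to above can be written compactly; the notation below is mine, not the paper's:

```latex
% Thermal superposition (illustrative notation, not taken from the paper):
% with N concentrated sources, the temperature at location (x, y) is approximated by
T(x, y) \approx T_{\infty} + \sum_{i=1}^{N} \Delta T_i(x, y),
% where T_{\infty} is the coolant/ambient temperature under forced convection and
% \Delta T_i(x, y) is the temperature rise produced by source i acting alone.
```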
12.
As a representative deep learning network, the Convolutional Neural Network (CNN) has been used extensively in bearing fault diagnosis, and many good results have been reported. In the Prognostics and Health Management (PHM) field, the CNN's input is usually a 1D vector or a 2D square matrix, and the convolution kernel is likewise square (e.g. 3 × 3 or 5 × 5), conventions adopted directly from image recognition. Although satisfactory results can be obtained, a CNN with such parameter choices is neither optimal nor efficient. To this end, this paper uses the physical characteristics of bearing acceleration signals to guide the CNN design. First, the fault period under different fault types and the shaft rotation frequency are used to determine the size of the CNN's input. Next, an exponential function is fitted to the envelope of the decaying acceleration signal within each fault period, and the signal length within different decay ratios defines the CNN's kernel size. Finally, the designed CNN is validated on the Case Western Reserve University and Paderborn University bearing datasets. The results confirm that the physics-guided CNN (PGCNN), with a rectangular input shape and rectangular convolution kernels, outperforms the baseline CNN, with higher accuracy and smaller uncertainty. The feasibility of setting CNN parameters with physics-guided rules derived from bearing fault signal analysis is thereby verified.
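The two sizing rules described above can be sketched in code. The function names, the 10% decay threshold, and the sample numbers are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def input_length(fs, fault_freq):
    """Samples covering one fault period: fs / f_fault (illustrative rule)."""
    return int(np.ceil(fs / fault_freq))

def kernel_length(fs, decay_rate, decay_ratio=0.1):
    """Samples until an exponential envelope exp(-decay_rate * t) falls to
    `decay_ratio` of its initial value (illustrative rule)."""
    t = -np.log(decay_ratio) / decay_rate          # time to reach the ratio
    return int(np.ceil(t * fs))

# Hypothetical numbers: 12 kHz sampling, 105 Hz fault characteristic frequency,
# envelope decaying at 800 1/s.
fs = 12_000
print(input_length(fs, fault_freq=105))            # ~115 samples per fault period
print(kernel_length(fs, decay_rate=800.0))         # ~35 samples within the 10% decay
```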
13.
We study ordinal embedding relaxations in the realm of parameterized complexity. We prove the existence of a quadratic kernel for the Betweenness problem parameterized above its tight lower bound, which is stated as follows. For a set V of variables and a set C of constraints "v_i is between v_j and v_k", decide whether there is a bijection from V to the set {1,…,|V|} satisfying at least |C|/3 + κ of the constraints in C. Our result solves an open problem attributed to Benny Chor in Niedermeier's monograph "Invitation to Fixed-Parameter Algorithms". The Betweenness problem is of interest in molecular biology. The approach developed in this paper can be used to determine the parameterized complexity of a number of other optimization problems on permutations parameterized above or below tight bounds.
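To make the decision question concrete, a brute-force checker (exponential, purely illustrative; not the quadratic kernelization of the paper) might look like this:

```python
from itertools import permutations

def satisfied(order, constraints):
    """Count constraints (a, b, c), meaning 'a lies between b and c', that a
    given linear order satisfies."""
    pos = {v: i for i, v in enumerate(order)}
    return sum(min(pos[b], pos[c]) < pos[a] < max(pos[b], pos[c])
               for a, b, c in constraints)

def betweenness_above_tight_bound(variables, constraints, kappa):
    """Brute force over all bijections (for illustration only): is there an
    ordering satisfying at least |C|/3 + kappa constraints?"""
    target = len(constraints) / 3 + kappa
    return any(satisfied(order, constraints) >= target
               for order in permutations(variables))

# Toy instance: v2 must lie between v1 and v3.
print(betweenness_above_tight_bound(
    ["v1", "v2", "v3"], [("v2", "v1", "v3")], kappa=0))   # True
```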
14.
This paper studies the problem of stabilizing a linear system with delayed and saturating feedback. It is known that the eigenstructure assignment-based low-gain feedback law (globally) stabilizes a linear system in the presence of arbitrarily large delay in its input, and semi-globally stabilizes it when the input is also subject to saturation, as long as all its open-loop poles are located in the closed left-half plane. Based on a recently developed parametric Lyapunov equation-based low-gain feedback design method, this paper presents alternative, but simpler and more elegant, feedback laws that solve these problems. The advantages of this new approach include its simplicity, the capability of giving explicit conditions to guarantee the stability of the closed-loop system, and the ease of scheduling the low-gain parameter online to achieve global stabilization in the presence of actuator saturation. Copyright © 2009 John Wiley & Sons, Ltd.
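For concreteness, the setting can be written as follows; the notation and the particular parametric Lyapunov equation shown follow the standard low-gain feedback literature and are not quoted from this paper:

```latex
% Input-delayed, input-saturated plant with low-gain state feedback (illustrative notation):
\dot{x}(t) = A x(t) + B\,\mathrm{sat}\bigl(u(t-\tau)\bigr), \qquad
u(t) = -B^{\top} P(\gamma)\, x(t),
% where, in the parametric Lyapunov equation approach, P(\gamma) > 0 solves
A^{\top} P + P A - P B B^{\top} P = -\gamma P, \qquad \gamma > 0,
% and P(\gamma) \to 0 as \gamma \to 0^{+}, so the feedback gain can be made
% arbitrarily small (the "low gain") to cope with the delay \tau and the saturation.
```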
15.
In this paper we propose a heuristic approach for the problem of packing equal rectangles within a convex region. The approach is based on an Iterated Local Search scheme, in which the key step is the perturbation move. Different perturbation moves, both combinatorial and continuous, are proposed and compared through extensive computational experiments on a set of test instances. The overall results are quite encouraging.
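A generic Iterated Local Search skeleton of the kind referred to above is sketched below; the routine names, acceptance rule, and toy objective are placeholders rather than the authors' actual moves:

```python
import random

def iterated_local_search(initial, local_search, perturb, cost, iters=1000):
    """Generic ILS: repeatedly perturb the incumbent, re-optimise locally,
    and keep the better of the two solutions (illustrative skeleton)."""
    best = local_search(initial)
    for _ in range(iters):
        candidate = local_search(perturb(best))
        if cost(candidate) < cost(best):       # accept only improvements
            best = candidate
    return best

# Toy usage: minimise (x - 3)^2 over integers with trivial moves.
print(iterated_local_search(
    initial=10,
    local_search=lambda x: min(x - 1, x, x + 1, key=lambda y: (y - 3) ** 2),
    perturb=lambda x: x + random.randint(-5, 5),
    cost=lambda x: (x - 3) ** 2,
    iters=200))
```

In the packing setting, `perturb` would correspond to one of the combinatorial or continuous moves compared in the paper, and `local_search` to whatever local improvement step is used on the rectangle placements.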
16.
Feature-Based Document Image Retrieval   (total citations: 1, self-citations: 0, citations by others: 1)
张田  王希常  尘昌华 《计算机工程》2009,35(22):176-178
A document image retrieval method is proposed that jointly exploits the paragraph features and the local pixel-distribution relative-difference features of document images. The definitions and extraction procedures of the two features are given, together with a retrieval method based on their combined use. The paragraph feature, a global feature, and the local pixel-distribution relative-difference feature, a local feature, complement each other in characterizing and discriminating document images, and the retrieval method that fully combines them achieves good results.
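A minimal sketch of the "combine a global and a local feature" retrieval idea follows; the cosine similarity, the weighted-sum fusion, and the random features are assumptions for illustration, not the paper's definitions:

```python
import numpy as np

def retrieve(query_feats, db_feats, weight=0.5):
    """Rank database document images by a weighted combination of a global
    (paragraph-layout) similarity and a local (pixel-distribution) similarity.
    Each feats entry is a pair (global_vec, local_vec); cosine similarity is
    an assumption, not the paper's measure."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    qg, ql = query_feats
    scores = [weight * cos(qg, g) + (1 - weight) * cos(ql, l) for g, l in db_feats]
    return np.argsort(scores)[::-1]                # best matches first

# Toy example with random 8-dimensional features for 5 database images.
rng = np.random.default_rng(0)
db = [(rng.random(8), rng.random(8)) for _ in range(5)]
print(retrieve(db[2], db))                          # image 2 should rank first
```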
17.
Research on a P2P Reputation Model Incorporating Uncertainty Theory   (total citations: 1, self-citations: 0, citations by others: 1)
席菁  胡平  陆建德 《计算机工程》2009,35(8):158-160
The anonymity and dynamism of P2P systems make them a breeding ground for all kinds of selfish and malicious behaviour. To address the node-deception problem caused by the highly dynamic nature of P2P networks, a P2P security trust model, BGTR, is constructed on the basis of probabilistic models and the basic ideas of fuzzy mathematics. Experiments show that with the BGTR model a P2P network attains higher security and can better cope with impersonation, collusive cheating, and free-riding.
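The abstract does not give BGTR's formulas, so as a purely illustrative stand-in for the probabilistic side of such models, the sketch below shows the widely used Beta-reputation update; the class and peer names are hypothetical and this is not the BGTR model itself:

```python
class BetaReputation:
    """Standard Beta-reputation bookkeeping: count positive and negative
    interactions per peer and report the expected trust value.
    This is a generic illustration, not the BGTR model."""
    def __init__(self):
        self.pos = {}
        self.neg = {}

    def record(self, peer, success):
        table = self.pos if success else self.neg
        table[peer] = table.get(peer, 0) + 1

    def trust(self, peer):
        a = self.pos.get(peer, 0) + 1              # Beta(1, 1) uniform prior
        b = self.neg.get(peer, 0) + 1
        return a / (a + b)                         # posterior mean

rep = BetaReputation()
for ok in [True, True, False, True]:
    rep.record("peer_42", ok)
print(round(rep.trust("peer_42"), 3))              # 0.667 after 3 good / 1 bad
```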
18.
The centroid-based classifier is both effective and efficient for document classification. However, it suffers from over-fitting and linear-inseparability problems caused by its fundamental assumptions. To address these problems, we propose a kernel-based hypothesis margin centroid classifier (KHCC). First, KHCC optimises the class centroids by minimising the hypothesis margin under the structural risk minimisation principle; second, KHCC uses the kernel method to relieve the problem of linear inseparability in the original feature space. Given the radial basis function, we further discuss a guideline for tuning the value of its parameter. Experimental results on four well-known datasets indicate that the KHCC algorithm outperforms state-of-the-art algorithms, especially for unbalanced datasets.
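As background for the centroid-plus-kernel idea, the sketch below implements a plain kernel nearest-centroid classifier with an RBF kernel; it omits KHCC's hypothesis-margin optimisation, and all parameter names and values are assumptions:

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    """RBF (Gaussian) kernel matrix between row-sample matrices X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_nearest_centroid(X_train, y_train, X_test, gamma=1.0):
    """Assign each test point to the class whose feature-space centroid is
    nearest, using ||phi(x) - c_k||^2 = k(x,x) - 2*mean_i k(x, x_i) + mean_ij k(x_i, x_j)."""
    classes = np.unique(y_train)
    dists = []
    for c in classes:
        Xc = X_train[y_train == c]
        cross = rbf(X_test, Xc, gamma).mean(axis=1)          # mean_i k(x, x_i)
        within = rbf(Xc, Xc, gamma).mean()                   # mean_ij k(x_i, x_j)
        self_k = np.ones(len(X_test))                        # k(x, x) = 1 for RBF
        dists.append(self_k - 2 * cross + within)
    return classes[np.argmin(np.stack(dists, axis=1), axis=1)]

# Toy two-class example.
X = np.array([[0.0, 0.0], [0.1, 0.0], [3.0, 3.0], [3.1, 2.9]])
y = np.array([0, 0, 1, 1])
print(kernel_nearest_centroid(X, y, np.array([[0.05, 0.05], [2.9, 3.0]])))  # [0 1]
```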
19.
20.
This paper proposes a probabilistic variant of the SOM-kMER (Self-Organising Map, kernel-based Maximum Entropy learning Rule) model for data classification. The classifier, known as pSOM-kMER (probabilistic SOM-kMER), is able to operate in a probabilistic environment and to implement the principles of statistical decision theory when undertaking classification problems. A distinctive feature of pSOM-kMER is its ability to reveal the underlying structure of data. In addition, the Receptive Field (RF) regions it generates can be used for variable-kernel and non-parametric density estimation. Empirical evaluation on benchmark datasets shows that pSOM-kMER achieves good performance compared with a number of machine learning systems. The applicability of the proposed model as a useful data classifier is also demonstrated on a real-world medical data classification problem.
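The abstract links receptive fields to non-parametric density estimation; as a much simplified illustration of density-based classification (not the pSOM-kMER algorithm itself, and with an assumed fixed bandwidth rather than a variable kernel), a Parzen-window Bayes classifier looks like this:

```python
import numpy as np

def parzen_classify(X_train, y_train, X_test, bandwidth=0.5):
    """Classify by arg-max over classes of prior * Gaussian kernel density
    estimate, i.e. a plain Bayes decision with Parzen densities."""
    classes = np.unique(y_train)
    n = len(X_train)
    scores = []
    for c in classes:
        Xc = X_train[y_train == c]
        d2 = ((X_test[:, None, :] - Xc[None, :, :]) ** 2).sum(-1)
        density = np.exp(-d2 / (2 * bandwidth ** 2)).mean(axis=1)
        scores.append(density * (len(Xc) / n))     # class prior times density
    return classes[np.argmax(np.stack(scores, axis=1), axis=1)]

# Toy one-dimensional example.
X = np.array([[0.0], [0.2], [2.0], [2.2]])
y = np.array([0, 0, 1, 1])
print(parzen_classify(X, y, np.array([[0.1], [2.1]])))   # [0 1]
```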