Article Search
  Paid full text   21140 articles
  Free   1620 articles
  Domestic free   41 articles
Electrical engineering   136 articles
General   116 articles
Chemical industry   5652 articles
Metalworking   457 articles
Machinery and instrumentation   377 articles
Building science   737 articles
Mining engineering   59 articles
Energy and power   656 articles
Light industry   5066 articles
Hydraulic engineering   200 articles
Petroleum and natural gas   171 articles
Weapons industry   1 article
Radio electronics   939 articles
General industrial technology   3298 articles
Metallurgical industry   1987 articles
Atomic energy technology   115 articles
Automation technology   2834 articles
  2024   38 articles
  2023   169 articles
  2022   535 articles
  2021   907 articles
  2020   566 articles
  2019   646 articles
  2018   811 articles
  2017   833 articles
  2016   861 articles
  2015   655 articles
  2014   972 articles
  2013   1795 articles
  2012   1444 articles
  2011   1570 articles
  2010   1192 articles
  2009   1126 articles
  2008   1028 articles
  2007   906 articles
  2006   733 articles
  2005   559 articles
  2004   502 articles
  2003   526 articles
  2002   436 articles
  2001   351 articles
  2000   283 articles
  1999   285 articles
  1998   779 articles
  1997   509 articles
  1996   360 articles
  1995   245 articles
  1994   202 articles
  1993   148 articles
  1992   79 articles
  1991   56 articles
  1990   49 articles
  1989   54 articles
  1988   63 articles
  1987   50 articles
  1986   33 articles
  1985   55 articles
  1984   44 articles
  1983   43 articles
  1982   37 articles
  1981   25 articles
  1980   42 articles
  1979   21 articles
  1978   20 articles
  1977   30 articles
  1976   54 articles
  1973   17 articles
Sort order: 10000 query results in total; search took 593 ms
181.
The Journal of Supercomputing - Left ventricular non-compaction (LVNC) is a rare cardiomyopathy characterized by abnormal trabeculations in the left ventricle cavity. Although traditional computer...
182.
The Journal of Supercomputing - DNA methylation analysis has become an important topic in the study of human health. DNA methylation analysis requires not only a specific treatment of DNA samples...
183.
Distributed and Parallel Databases - Given two datasets of points (called Query and Training), the Group K Nearest-Neighbor (GKNN) query retrieves K points of the Training with the smallest sum...
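The abstract above is truncated, but assuming the usual GKNN aggregate (the sum of Euclidean distances from a candidate Training point to every Query point), a minimal brute-force sketch of the query looks as follows. The function name and parameters are illustrative, not taken from the paper.

```python
import numpy as np

def gknn_brute_force(query, training, k):
    """Brute-force Group K Nearest-Neighbor (GKNN) sketch.

    Assumes the common GKNN definition: return the indices of the k
    Training points whose summed Euclidean distance to all Query points
    is smallest.  This is an illustration, not the paper's algorithm.
    """
    query = np.asarray(query, dtype=float)        # shape (m, d)
    training = np.asarray(training, dtype=float)  # shape (n, d)
    # Pairwise distances between every Training and every Query point: shape (n, m)
    dists = np.linalg.norm(training[:, None, :] - query[None, :, :], axis=2)
    scores = dists.sum(axis=1)                    # aggregate distance per Training point
    return np.argsort(scores)[:k]                 # indices of the k best Training points
```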
184.
Neural Computing and Applications - In this paper, we present the novel Deep-MEG approach in which image-based representations of magnetoencephalography (MEG) data are combined with ensemble...
185.
Software Quality Journal - Energy consumption of software has become increasingly significant, since it can vary according to how the software has been developed. In recent years, developers...
186.
Conventional constant false alarm rate (CFAR) methods use a fixed number of cells to estimate the background variance. For homogeneous environments, it is desirable to increase the number of cells, at the cost of increased computation and memory requirements, in order to improve the estimation performance. For nonhomogeneous environments, it is desirable to use fewer cells in order to reduce the number of false alarms around clutter edges. In this work, we present a solution with two exponential smoothers (first-order IIR filters) having different time constants to balance the conflicting requirements of homogeneous and nonhomogeneous environments. The system is designed to use the filter with the large time constant in homogeneous environments and to switch promptly to the filter with the small time constant once a clutter edge is encountered. The main advantages of the proposed Switching IIR CFAR method are its computational simplicity, its small memory requirement (in comparison to window-based methods), its good performance in homogeneous environments (due to the large-time-constant smoother), and its rapid adaptation to clutter edges (due to the small-time-constant smoother).
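As a rough illustration of the idea described above, the sketch below runs two first-order IIR smoothers side by side and switches between them. The specific switching rule, smoothing factors, and threshold scale are assumptions made for the example, not the paper's actual design.

```python
import numpy as np

def switching_iir_cfar(x, alpha_slow=0.01, alpha_fast=0.2,
                       edge_ratio=3.0, scale=4.0):
    """Illustrative switching-IIR CFAR sketch (all parameters are assumed).

    A slow smoother (large time constant) estimates the background in
    homogeneous regions; a fast smoother (small time constant) takes over
    when a suspected clutter edge is seen, then hands back once the two
    estimates reconverge.
    """
    slow = fast = float(x[0])
    use_fast = False
    detections = np.zeros(len(x), dtype=bool)
    for i, sample in enumerate(x):
        background = fast if use_fast else slow
        detections[i] = sample > scale * background           # CFAR threshold test
        # First-order IIR smoothers: y[n] = (1 - a) * y[n-1] + a * x[n]
        slow = (1.0 - alpha_slow) * slow + alpha_slow * sample
        fast = (1.0 - alpha_fast) * fast + alpha_fast * sample
        if sample > edge_ratio * slow:
            use_fast = True                                    # likely clutter edge: adapt quickly
        elif abs(fast - slow) < 0.1 * max(slow, 1e-12):
            use_fast = False                                   # estimates agree again: prefer the slow filter
    return detections
```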
187.
The issue of trust is a research problem in emerging open environments, such as ubiquitous networks. Such environments are highly dynamic and contain a diverse range of services and autonomous entities. Entities in open environments have different security needs from services. Trust computations related to the security systems of services require information that meets the needs of each entity, and obtaining such information is a challenging issue for entities. In this paper, we propose a model for extracting trust information from the security system of a service based on the needs of an entity. We formally represent security policies and security systems to extract trust information according to the needs of an entity. The formal representation enables an entity to extract trust information about a single security property of a service as well as about the service's security system as a whole. The proposed model is applied to a Dental Clinic Patient Service as a case study with two scenarios, which are analyzed experimentally through simulations. The experimental evaluation shows that the proposed model provides trust information related to the security system of a service based on the needs of an entity and that it is applicable in emerging open environments.
188.
We introduce two-dimensional neural maps for exploring connectivity in the brain. For this, we create standard streamtube models from diffusion-weighted brain imaging data sets along with neural paths hierarchically projected into the plane. These planar neural maps combine desirable properties of low-dimensional representations, such as visual clarity and ease of tract-of-interest selection, with the anatomical familiarity of 3D brain models and planar sectional views. We distribute this type of visualization both in a traditional stand-alone interactive application and as a novel, lightweight web-accessible system. The web interface integrates precomputed neural-path representations into a geographical digital-maps framework with associated labels, metrics, statistics, and linkouts. Anecdotal and quantitative comparisons of the present method with a recently proposed 2D point representation suggest that our representation is more intuitive and easier to use and learn. Similarly, users are faster and more accurate in selecting bundles using the 2D path representation than the 2D point representation. Finally, expert feedback on the web interface suggests that it can be useful for collaboration as well as quick exploration of data.
189.
We focus on two aspects of face recognition: feature extraction and classification. We propose a two-component system, introducing Lattice Independent Component Analysis (LICA) for feature extraction and Extreme Learning Machines (ELM) for classification. In previous works we have proposed LICA for a variety of image processing tasks. The first step of LICA is to identify strong lattice independent components from the data. In the second step, the set of strong lattice independent vectors is used for linear unmixing of the data, yielding a vector of abundance coefficients. The resulting abundance values are used as features for classification, specifically for face recognition. Extreme Learning Machines are accurate, fast-learning classification methods based on the random generation of the input-to-hidden-unit weights followed by the solution of a linear system to obtain the hidden-to-output weights. The LICA-ELM system has been tested against state-of-the-art feature extraction methods and classifiers, outperforming them when performing cross-validation on four large unbalanced face databases.
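To make the classification stage concrete, here is a minimal ELM sketch along the lines described above: hidden-layer weights are generated at random and kept fixed, and only the output weights are obtained from a linear least-squares solve. The class name, its hyperparameters, and the tanh activation are assumptions for illustration; in the LICA-ELM pipeline the input rows would be the abundance-coefficient vectors produced by LICA.

```python
import numpy as np

class ELMClassifier:
    """Minimal Extreme Learning Machine sketch (not the authors' implementation)."""

    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        X = np.asarray(X, dtype=float)
        y = np.asarray(y, dtype=int)
        T = np.eye(y.max() + 1)[y]                       # one-hot target matrix
        # Randomly generated, fixed input-to-hidden weights and biases
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                 # hidden-layer activations
        # Hidden-to-output weights from a linear least-squares solve
        self.beta, *_ = np.linalg.lstsq(H, T, rcond=None)
        return self

    def predict(self, X):
        H = np.tanh(np.asarray(X, dtype=float) @ self.W + self.b)
        return np.argmax(H @ self.beta, axis=1)          # most likely class per row
```

Because training reduces to a single linear solve rather than iterative backpropagation, fitting is fast, which is the main appeal of ELM noted in the abstract.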
190.
Membrane Computing is a discipline that aims to abstract formal computing models, called membrane systems or P systems, from the structure and functioning of living cells as well as from the cooperation of cells in tissues, organs, and other higher-order structures. This framework provides polynomial-time solutions to NP-complete problems by trading space for time, and its efficient simulation poses challenges in three different respects: the intrinsic massive parallelism of P systems, the exponential computational workspace, and a workload that is not floating-point intensive. In this paper, we analyze the simulation of a family of recognizer P systems with active membranes that solves the Satisfiability problem in linear time on different instances of Graphics Processing Units (GPUs). For efficient handling of the exponential workspace created by the P system computation, we enable different data policies to increase memory bandwidth and exploit data locality through tiling and dynamic queues. The parallelism inherent to the target P system is also managed to demonstrate that GPUs offer a valid alternative for high-performance computing at a considerably lower cost. Furthermore, scalability is demonstrated up to the largest problem size we were able to run, and, considering Nvidia's new Fermi hardware generation, we obtain a total speed-up exceeding four orders of magnitude when running our simulations on the Tesla S2050 server.