101.
Deep learning systems aim to use hierarchical models to learn high-level features from low-level features, and the field has progressed greatly in recent years. The robustness of learning systems with deep architectures, however, has rarely been studied and needs further investigation. In particular, the mean square error (MSE), a commonly used optimization cost function in deep learning, is rather sensitive to outliers (or impulsive noise). Robust methods are needed to improve learning performance and to suppress the harmful influence of outliers, which are pervasive in real-world data. In this paper, we propose an efficient and robust deep learning model based on stacked auto-encoders and the correntropy-induced loss function (CLF), called CLF-based stacked auto-encoders (CSAE). CLF, as a nonlinear measure of similarity, is robust to outliers and can approximate different norms (from \(l_0\) to \(l_2\)) of the data. Essentially, CLF is an MSE in a reproducing kernel Hilbert space. Unlike conventional stacked auto-encoders, which generally use the MSE as the reconstruction loss and KL divergence as the sparsity penalty term, in CSAE both the reconstruction loss and the sparsity penalty term are built with CLF. The fine-tuning procedure in CSAE is also based on CLF, which further enhances learning performance. The excellent and robust performance of the proposed model is confirmed by simulation experiments on the MNIST benchmark dataset.
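A minimal NumPy sketch of a correntropy-induced loss, assuming a Gaussian kernel of bandwidth sigma and the common normalization that maps a unit error to unit loss; the name clf_loss and the constant beta are illustrative, not taken from the paper:

```python
import numpy as np

def clf_loss(x, y, sigma=1.0):
    """Correntropy-induced loss between reconstruction x and target y.

    Uses the Gaussian kernel k(e) = exp(-e^2 / (2 sigma^2)); the loss
    beta * (1 - mean kernel value) behaves like MSE for small errors
    and saturates for large (outlier) errors, which is the source of
    its robustness.
    """
    err = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    kernel = np.exp(-err**2 / (2.0 * sigma**2))
    # Normalizing constant so that a unit error gives unit loss.
    beta = 1.0 / (1.0 - np.exp(-1.0 / (2.0 * sigma**2)))
    return beta * (1.0 - kernel.mean())
```

Small sigma makes the loss approach an \(l_0\)-like measure; large sigma recovers MSE-like (\(l_2\)) behavior, consistent with the approximation range the abstract describes.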
102.
This paper treats the problem of how to determine weights in a ranking that will cause a selected entity to attain the highest possible position. We establish that there are two types of entities in a ranking scheme: those that can be ranked number one and those that cannot. The two types can be identified using the “ranking hull” of the data, a polyhedral set that envelops the data. Only entities whose data points lie on the boundary of this hull can attain the number one position; no weights will ever make an entity whose data point lies in the interior of the hull number one. We treat these two types of entities separately. In the first case, we propose an approach for finding a set of weights that, under special conditions, will place a selected entity at the top of the ranking without ties and without ignoring any of the attributes. For the second category of entities, we devise a procedure that guarantees these entities will attain their highest possible position in the ranking. The first case requires solving a linear program (LP) with interior point methods; the second involves a binary mixed-integer formulation. Both mathematical programs were tested on data from a well-known university ranking.
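A minimal sketch (not the authors' exact formulation) of the first case as a maximin LP: find strictly positive weights under which entity k's score beats every other entity's. The helper top_rank_weights and the eps floor (so no attribute is ignored) are assumptions of mine; note SciPy's linprog uses HiGHS rather than an interior point method by default.

```python
import numpy as np
from scipy.optimize import linprog

def top_rank_weights(A, k, eps=1e-3):
    """Search for weights w (w_j >= eps, sum w = 1) maximizing the
    minimum score gap t subject to A[k] @ w - A[i] @ w >= t for all
    i != k.  If the optimal t is negative, entity k lies in the
    interior of the ranking hull and no weights can make it number one.
    """
    n, m = A.shape
    # Variables: w (m weights) and t (the gap); linprog minimizes -t.
    c = np.zeros(m + 1)
    c[-1] = -1.0
    # (A[i] - A[k]) @ w + t <= 0 for every competitor i.
    rows = [np.append(A[i] - A[k], 1.0) for i in range(n) if i != k]
    A_ub, b_ub = np.array(rows), np.zeros(n - 1)
    # Weights sum to one; t is free.
    A_eq = np.append(np.ones(m), 0.0).reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(eps, None)] * m + [(None, None)])
    return (res.x[:m], res.x[-1]) if res.success else (None, None)
```

A strictly positive optimal gap t corresponds to the abstract's "top of the ranking without ties"; the boundary/interior distinction shows up as t >= 0 versus t < 0.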
103.
104.
A few issues still need to be addressed regarding security in the Grid. One of them is authorization: good solutions exist for defining, managing, and enforcing authorization policies in Grid scenarios, but they usually do not give Grid administrators semantic-aware components that are closer to the particular Grid domain and that ease administration tasks such as conflict detection and resolution. This paper presents a Semantic Web based proposal for defining, managing, and enforcing security policies in a Grid scenario. Policies are defined by means of semantic-aware rules, which help the administrator create higher-level, more expressive definitions. These rules also permit added-value tasks such as conflict detection and resolution, which are of interest in medium- and large-scale scenarios where different administrators define the authorization rules that must be satisfied before a Grid resource is accessed. The proposed solution has also been tested and yields reasonable response times in the authorization decision process.
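The paper's rules are ontology-based; as a much-simplified illustration of the kind of modality conflict such a layer would report, here is a sketch (the Rule fields and detect_conflicts are hypothetical names, not the paper's API):

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass(frozen=True)
class Rule:
    subject: str    # e.g. a role or VO membership class
    resource: str   # Grid resource identifier
    action: str
    effect: str     # "permit" or "deny"

def detect_conflicts(rules):
    """Flag pairs of rules that permit and deny the same
    (subject, resource, action) triple -- the simplest conflict
    a semantic-aware policy layer could surface to administrators
    who author rules independently."""
    return [(a, b) for a, b in combinations(rules, 2)
            if (a.subject, a.resource, a.action) ==
               (b.subject, b.resource, b.action)
            and a.effect != b.effect]
```

A semantic approach generalizes this beyond exact triple matches: subsumption in the ontology (e.g., a role that is a subclass of another) lets the reasoner catch conflicts that string comparison would miss.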
105.
Superpipelined high-performance optical-flow computation architecture
Optical-flow computation is a well-known technique, and there are important fields in which applying this visual modality commands high interest. Nevertheless, most real-world applications require real-time processing, an issue which has only recently been addressed. Most real-time systems described to date use basic models that limit their applicability to generic tasks, especially when fast motion is present or when subpixel motion resolution is required. Therefore, instead of implementing a more complex optical-flow model, we describe here a very high-frame-rate optical-flow processing system. Recent advances in image-sensor technology make it possible to use high-frame-rate sensors to sample fast motion properly (i.e., as a low-motion scene), which makes a gradient-based approach one of the best options in terms of accuracy and resource consumption for any real-time implementation. Taking advantage of the regular data flow of this kind of algorithm, our approach implements a novel superpipelined, fully parallelized architecture for optical-flow processing. The system is fully working and is organized into more than 70 pipeline stages, achieving a throughput of one pixel per clock cycle. This computing scheme is well suited to FPGA technology and VLSI implementation. The customized DSP architecture is capable of processing up to 170 frames per second at a resolution of 800 × 600 pixels. We discuss the advantages of high-frame-rate processing and justify the optical-flow model chosen for the implementation. We analyze the architecture, measure the system's resource requirements on FPGA devices, and finally evaluate its performance against other approaches described in the literature.
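The paper's contribution is the hardware pipeline, not software; still, a minimal NumPy sketch of the gradient-based (Lucas-Kanade-style) model such an architecture accelerates may help. The function name and window size are illustrative assumptions:

```python
import numpy as np

def lucas_kanade(I0, I1, y, x, win=7):
    """Gradient-based optical flow at pixel (y, x) between two frames.

    Solves the least-squares system [Ix Iy] [u v]^T = -It over a small
    window -- valid only when inter-frame motion is small, which is
    exactly what a high frame rate guarantees.
    """
    Iy, Ix = np.gradient(I0.astype(float))      # spatial derivatives
    It = I1.astype(float) - I0.astype(float)    # temporal derivative
    h = win // 2
    sl = (slice(y - h, y + h + 1), slice(x - h, x + h + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    return flow  # (u, v) in pixels per frame
```

In the hardware version, each arithmetic step of this computation maps to its own pipeline stage, which is how the architecture sustains one pixel per clock cycle.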
106.
Intravascular clotting remains a major health problem in the United States, the most prominent forms being deep vein thrombosis, pulmonary embolism, and thromboembolic stroke. Previous reports on the use of pyridine derivatives in cardiovascular drug development encouraged us to pursue new types of compounds based on a pyridine scaffold. Eleven pyridine derivatives (oximes, semicarbazones, N-oxides) previously synthesized in our laboratories were tested as anticoagulants on pooled normal plasma using the prothrombin time (PT) protocol. The best anticoagulant within the oxime series was compound AF4; within the oxime N-oxide series, compound AF4-N-oxide; and within the semicarbazone series, compound MD1-30Y. We also used a molecular modeling approach to guide our efforts and found good correlation between the coagulation data and computational energy scores. Molecular docking against the active site of thrombin was performed with the DOCK v5.2 package. The modeling results indicate that improvements in anticoagulant activity can be expected from functionalization at the three-position of the pyridine ring and from N-oxide formation. The results reported here demonstrate the suitability of DOCK in the lead optimization process.
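The correlation the abstract reports can be checked with a one-liner; the numbers below are purely hypothetical placeholders for illustration, not the paper's data:

```python
import numpy as np

# Hypothetical values for illustration only -- not the paper's data.
dock_energy = np.array([-8.2, -7.5, -6.9, -6.1, -5.4])  # DOCK energy scores
pt_seconds  = np.array([42.0, 38.5, 33.0, 29.0, 25.5])  # prothrombin times

# Pearson correlation between docking score and anticoagulant activity;
# a strong negative r would support ranking leads by their DOCK scores.
r = np.corrcoef(dock_energy, pt_seconds)[0, 1]
print(f"Pearson r = {r:.2f}")
```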
107.
New viruses spread faster than ever, and current signature-based detection does not protect against these unknown viruses. Behavior-based detection is the currently preferred defense against unknown viruses; its drawback is that it can detect only specific classes of viruses, succeeds only under certain conditions, and produces false positives. This paper presents a characterization of virus replication, the only virus characteristic guaranteed to be consistently present in all viruses. Two detection models based on virus replication are developed, one using operation-sequence matching and the other using frequency measures, and regression analyses were generated for both. A safe list is used to minimize false positives. In our testing of operation-sequence matching, over 250 viruses were detected with 43 subsequences, with minimal false negatives; the replication sequence of just one virus detected 130 viruses, 45% of all tested viruses. Our testing of frequency measures detected all test viruses with no false negatives. The paper shows that virus replication can be identified and used to detect both known and unknown viruses.
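A minimal sketch of operation-sequence matching with a safe list, assuming a replication signature is an ordered tuple of operations; the operation names and SIGNATURES list are hypothetical, not drawn from the paper:

```python
def contains_subsequence(trace, signature):
    """True if `signature` occurs as an in-order (not necessarily
    contiguous) subsequence of the monitored operation trace."""
    it = iter(trace)
    return all(op in it for op in signature)

# Hypothetical replication signature: the read-self/copy-to-target
# pattern the paper argues every virus must exhibit in some form.
SIGNATURES = [("open_self", "read_self", "find_target", "write_target")]

def is_suspicious(process_name, ops, safe_list=frozenset()):
    """Flag a process whose trace matches any replication signature,
    unless it is on the safe list (used to minimize false positives)."""
    if process_name in safe_list:
        return False
    return any(contains_subsequence(ops, sig) for sig in SIGNATURES)
```

Matching on subsequences rather than contiguous runs is what lets one signature cover many variants, consistent with the abstract's report that a single replication sequence detected 130 viruses.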
108.
This article proposes an analytical algorithm for predicting errors in lock-in amplifiers (LIAs) operating with a time-varying reference frequency, together with a simple method for correcting such errors. The reference frequency can be swept in order to measure the frequency response of a system over a given spectrum. The continuous variation of the reference frequency produces a measurement error that depends on three factors: the sweep speed, the LIA low-pass filters, and the frequency response of the system under measurement. The proposed error-prediction algorithm is based on the final value theorem of the Laplace transform, and the correction method uses a double-sweep measurement. A mathematical analysis is presented and validated with computational simulations and experimental measurements.
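The abstract does not give the formula, but the flavor of a final-value-theorem prediction can be sketched under two assumptions of mine (not the paper's exact model): the sweep makes the tracked quantity a ramp of slope \(k\) at the filter input, and the LIA output filter is first-order, \(H(s) = 1/(1+\tau s)\). The steady-state tracking error is then

\[
e_{ss} \;=\; \lim_{s \to 0} s\,\bigl(1 - H(s)\bigr)\,\frac{k}{s^{2}}
\;=\; \lim_{s \to 0} \frac{\tau\,k}{1 + \tau s}
\;=\; k\,\tau ,
\]

i.e., the error grows linearly with both the sweep speed and the filter time constant, consistent with the factors the abstract lists.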
109.
The prospects for improving the success of ab initio zeolite structure investigations with electron diffraction data are evaluated. First, the quality of intensities obtained by precession electron diffraction at small hollow-cone illumination angles is assessed for seven representative materials: ITQ-1, ITQ-7, ITQ-29, ZSM-5, ZSM-10, mordenite, and MCM-68. It is clear that, for most examples, an appreciable fraction of the secondary-scattering perturbation is removed by precession at small angles. In one case, ZSM-10, it can also be argued that precession diffraction produces a dramatically improved 'kinematical' data set. There seems to be no real support for applying a Lorentz correction to these data, and there is no reason to expect, for any of these samples, that a two-beam dynamical-scattering relationship between structure-factor amplitude and observed intensity should hold. Removal of secondary scattering by the precession mode appears to facilitate ab initio structure analysis. Most zeolite structures investigated could be solved by maximum entropy and likelihood phasing via error-correcting codes when precession data were used; examples include the projected structure of mordenite, which could not be determined from selected-area data alone. One anomaly is ZSM-5, where the best structure determination in projection is made from selected-area diffraction data. In a control study, the zonal structure of SSZ-48 could be determined from selected-area diffraction data by either maximum entropy and likelihood or traditional direct methods. While the maximum entropy and likelihood approach enjoys some advantages over traditional direct methods (no dependence on predicted phase-invariant sums), some effort must still be made to improve the figures of merit used to identify potential structure solutions.
110.