91.
Programming by demonstration techniques facilitate the programming of robots. Some of them allow tasks to be generalized through parameters, although they require new training when trajectories different from those used to estimate the model need to be added. One way to re-train a robot is incremental learning, which supplies additional information about the task without requiring the whole task to be taught again. The present study proposes three techniques for adding trajectories to a previously estimated task-parameterized Gaussian mixture model. The first technique estimates a new model by accumulating the new trajectory together with the set of trajectories generated using the previous model. The second adds the parameters obtained for the new trajectories to those of the existing model. The third updates the model parameters by running a modified version of the Expectation-Maximization algorithm with the information from the new trajectories. The techniques were evaluated on a simulated task and a real one, and they showed better performance than the existing model.
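The third technique's modified EM update can be sketched as follows. This is a minimal illustration with an ordinary one-dimensional GMM, not the paper's task-parameterized model: the old model is frozen into pseudo sufficient statistics (weighted by an assumed old sample count `n_old`), so the original trajectories never need to be stored or replayed, and EM iterates only over the new samples.

```python
import numpy as np

def incremental_gmm_update(means, variances, weights, n_old, new_data, n_iter=10):
    """Update a 1-D Gaussian mixture with new samples, without the old data.

    The previous model contributes fixed pseudo-statistics (scaled by n_old),
    so the M-step behaves like EM over the union of old and new data.
    Simplified sketch: 1-D, diagonal case, not task-parameterized.
    """
    means = np.asarray(means, float).copy()
    variances = np.asarray(variances, float).copy()
    weights = np.asarray(weights, float).copy()
    x = np.asarray(new_data, float)
    # Freeze the old model's sufficient statistics as pseudo-counts.
    N0 = n_old * weights
    S0 = N0 * means
    Q0 = N0 * (variances + means ** 2)
    for _ in range(n_iter):
        # E-step on the new samples only.
        resp = weights * np.exp(-0.5 * (x[:, None] - means) ** 2 / variances) \
               / np.sqrt(2 * np.pi * variances)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step over old pseudo-statistics plus the new data.
        Nk = N0 + resp.sum(axis=0)
        weights = Nk / Nk.sum()
        means = (S0 + resp.T @ x) / Nk
        variances = (Q0 + resp.T @ (x ** 2)) / Nk - means ** 2
    return means, variances, weights
```

Adding twenty samples near one component shifts that component's mixing weight toward its new effective share while leaving the other component untouched.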
92.
Deep learning systems aim to use hierarchical models to learn high-level features from low-level ones. Progress in deep learning has been substantial in recent years, but the robustness of learning systems with deep architectures is rarely studied and needs further investigation. In particular, the mean square error (MSE), a commonly used optimization cost function in deep learning, is rather sensitive to outliers (or impulsive noise). Robust methods are needed to improve learning performance and to immunize against the harmful influence of outliers, which are pervasive in real-world data. In this paper, we propose an efficient and robust deep learning model based on stacked auto-encoders and the Correntropy-induced loss function (CLF), called CLF-based stacked auto-encoders (CSAE). CLF, as a nonlinear measure of similarity, is robust to outliers and can approximate different norms (from \(l_0\) to \(l_2\)) of the data. Essentially, CLF is an MSE in a reproducing kernel Hilbert space. Unlike conventional stacked auto-encoders, which generally use the MSE as the reconstruction loss and the KL divergence as the sparsity penalty term, both the reconstruction loss and the sparsity penalty term in CSAE are built with CLF. The fine-tuning procedure in CSAE is also based on CLF, which further enhances learning performance. The excellent and robust performance of the proposed model is confirmed by simulation experiments on the MNIST benchmark dataset.
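The key property of the Correntropy-induced loss, its boundedness, can be seen in a few lines. This is a sketch of the standard correntropy loss with a Gaussian kernel (kernel width `sigma` is an assumed free parameter), not the paper's full CSAE training objective:

```python
import numpy as np

def clf_loss(pred, target, sigma=1.0):
    """Correntropy-induced loss with a Gaussian kernel.

    Behaves like a scaled MSE for small errors (e^2 / 2 sigma^2) but
    saturates at 1 for large errors, so a single outlier cannot
    dominate the objective the way it does under MSE.
    """
    e = np.asarray(pred, float) - np.asarray(target, float)
    return np.mean(1.0 - np.exp(-e ** 2 / (2.0 * sigma ** 2)))
```

With nine small errors and one huge outlier, the MSE explodes while the CLF stays near its small-error value, which is why CLF is the robust drop-in for the reconstruction and sparsity terms.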
93.
This paper treats the problem of how to determine weights in a ranking that will cause a selected entity to attain the highest possible position. We establish that there are two types of entities in a ranking scheme: those which can be ranked as number one and those which cannot. These two types can be identified using the "ranking hull" of the data, a polyhedral set that envelops the data. Only entities with data points on the boundary of this hull can attain the number one position; no weights will make an entity whose data point lies in the interior of the hull attain it. We deal with these two types of entities separately. In the first case, we propose an approach for finding a set of weights that, under special conditions, results in the selected entity reaching the top of the ranking without ties and without ignoring any of the attributes. For the second category of entities, we devise a procedure that guarantees these entities will attain their highest possible position in the ranking. The first case requires solving a linear program (LP) with interior point methods; the second involves a binary mixed integer formulation. Both mathematical programs were tested on data from a well-known university ranking.
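The first case can be posed as a small LP: maximize the smallest score margin of the selected entity over every rival, with all weights bounded away from zero so no attribute is ignored. The sketch below uses `scipy.optimize.linprog` with a default simplex/HiGHS solver rather than the interior point method the paper calls for, and the floor `w_min` is an assumed parameter; a `None` return corresponds to an entity interior to the ranking hull.

```python
import numpy as np
from scipy.optimize import linprog

def top_ranking_weights(X, selected, w_min=0.01):
    """Find weights placing entity `selected` strictly first, if possible.

    Maximizes the minimum margin w . (x_sel - x_j) over all rivals j,
    subject to sum(w) = 1 and w_i >= w_min. Returns (weights, margin),
    or None when no positive margin exists (interior point of the hull).
    """
    X = np.asarray(X, float)
    m = X.shape[1]
    diffs = X[selected] - np.delete(X, selected, axis=0)   # one row per rival
    # Variables: m weights followed by the margin eps; maximize eps.
    c = np.zeros(m + 1)
    c[-1] = -1.0
    A_ub = np.hstack([-diffs, np.ones((diffs.shape[0], 1))])  # eps <= w . diff
    b_ub = np.zeros(diffs.shape[0])
    A_eq = np.array([[1.0] * m + [0.0]])                      # weights sum to 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(w_min, None)] * m + [(0, None)])
    if not res.success or res.x[-1] <= 1e-9:
        return None
    return res.x[:m], res.x[-1]
```

For three entities scored on two attributes, a boundary entity gets a positive-margin weight vector while an interior entity (whose score is a fixed convex combination of the others) gets `None`.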
94.
95.
The computerized design of advanced straight and skew bevel gears produced by precision forging is proposed. Modifications of the tooth surfaces of one member of the gear set are proposed in order to localize the bearing contact and predesign a favorable function of transmission errors. The proposed modifications are computed using a modified imaginary crown-gear and applied in manufacturing through the proper die geometry. The die geometry for each member of the gear set is obtained from its theoretical geometry, considering its generation by the corresponding imaginary crown-gear. Two types of surface modification, whole and partial crowning, are investigated to determine the more effective way of modifying the surfaces of skew and straight bevel gears. A favorable function of transmission errors is predesigned to allow low levels of noise and vibration in the gear drive. Numerical examples of the design of both skew and straight bevel gear drives illustrate the advantages of the proposed geometry.
96.
A few issues still need to be addressed regarding security in the Grid area. One of them is authorization: good solutions exist to define, manage and enforce authorization policies in Grid scenarios, but they usually do not provide Grid administrators with semantic-aware components closer to the particular Grid domain that ease administration tasks such as conflict detection and resolution. This paper defines a proposal based on the Semantic Web to define, manage and enforce security policies in a Grid scenario. These policies are defined by means of semantic-aware rules, which help the administrator create higher-level, more expressive definitions. These rules also permit added-value tasks such as conflict detection and resolution, which are of interest in medium- and large-scale scenarios where different administrators define the authorization rules to be followed before a resource in the Grid is accessed. The proposed solution has also been tested, showing reasonable response times in the authorization decision process.
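The conflict-detection task the abstract mentions reduces, in its simplest form, to spotting rules that cover the same request with opposite effects. The toy sketch below uses plain tuples; the rule shape `(subject, action, resource, effect)` and all names are illustrative, far removed from the paper's Semantic Web rule machinery:

```python
def find_conflicts(rules):
    """Detect permit/deny conflicts among authorization rules.

    A rule is (subject, action, resource, effect). Two rules conflict
    when they cover the same (subject, action, resource) triple with
    opposite effects, e.g. when different administrators disagree.
    """
    seen = {}
    conflicts = []
    for rule in rules:
        key, effect = rule[:3], rule[3]
        if key in seen and seen[key] != effect:
            conflicts.append(key)
        seen[key] = effect
    return conflicts
```

A semantic-aware engine generalizes the exact-match `key` here to subsumption between concepts (e.g. a rule for all students conflicting with a rule for one student), which is where the added expressiveness pays off.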
97.
A SAT Solver Using Reconfigurable Hardware and Virtual Logic
In this paper, we present the architecture of a new SAT solver using reconfigurable logic and a virtual logic scheme. Our main contributions include new forms of massive fine-grain parallelism; structured design techniques based on iterative logic arrays, which reduce compilation times from hours to minutes; and a decomposition technique that creates independent subproblems that may be solved concurrently by unconnected FPGAs. The decomposition technique is the basis of the virtual logic scheme, since it allows solving problems that exceed the hardware capacity. Our architecture is easily scalable. Our results show speedups of several orders of magnitude compared with a state-of-the-art software implementation, and also with respect to prior SAT solvers using reconfigurable hardware.
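The core idea behind decomposing a SAT instance into independent subproblems can be shown with a single variable split. This is a minimal sketch (CNF as lists of signed-integer literals, one splitting variable), not the paper's hardware decomposition: assigning the variable both ways yields two simplified formulas that unconnected devices can solve in parallel, and the original is satisfiable iff either branch is.

```python
def split_on_variable(cnf, var):
    """Split a CNF formula (list of clauses of signed ints) on one variable.

    Returns the pair of simplified formulas for var=True and var=False.
    A branch simplifies to None when it produces an empty clause,
    i.e. that branch is immediately unsatisfiable.
    """
    def assign(formula, lit):
        out = []
        for clause in formula:
            if lit in clause:
                continue                      # clause satisfied, drop it
            reduced = [l for l in clause if l != -lit]
            if not reduced:
                return None                   # empty clause: contradiction
            out.append(reduced)
        return out
    return assign(cnf, var), assign(cnf, -var)
```

Applying the split recursively to the hardest remaining variable is what lets a fixed-capacity FPGA work through a formula larger than the device, one subproblem at a time.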
98.
Superpipelined high-performance optical-flow computation architecture
Optical-flow computation is a well-known technique, and there are important fields in which the application of this visual modality commands high interest. Nevertheless, most real-world applications require real-time processing, an issue that has only recently been addressed. Most real-time systems described to date use basic models that limit their applicability to generic tasks, especially when fast motion is present or when subpixel motion resolution is required. Therefore, instead of implementing a complex optical-flow approach, we describe here a very high-frame-rate optical-flow processing system. Recent advances in image sensor technology make it possible to use high-frame-rate sensors to properly sample fast motion (i.e. as a low-motion scene), which makes a gradient-based approach one of the best options in terms of accuracy and resource consumption for any real-time implementation. Taking advantage of the regular data flow of this kind of algorithm, our approach implements a novel superpipelined, fully parallelized architecture for optical-flow processing. The system is fully working and is organized into more than 70 pipeline stages, which achieve a throughput of one pixel per clock cycle. This computing scheme is well suited to FPGA technology and VLSI implementation. The customized DSP architecture developed is capable of processing up to 170 frames per second at a resolution of 800 × 600 pixels. We discuss the advantages of high-frame-rate processing and justify the optical-flow model chosen for the implementation. We analyze this architecture, measure the system's resource requirements on FPGA devices, and evaluate its performance against other approaches described in the literature.
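The gradient-based model the abstract argues for is, at its core, the textbook Lucas-Kanade step: solve a small least-squares system built from spatial and temporal image derivatives over a window. The sketch below is a plain NumPy version for one pixel (window half-size `win` is an assumed parameter), not the superpipelined hardware datapath:

```python
import numpy as np

def lucas_kanade_flow(frame1, frame2, x, y, win=2):
    """Estimate optical flow (vx, vy) at pixel (x, y) from two frames.

    Builds the overdetermined system A v = b from spatial gradients of
    frame1 and the temporal difference, over a (2*win+1)^2 window, and
    solves it by least squares: Ix*vx + Iy*vy = -It.
    """
    f1 = np.asarray(frame1, float)
    f2 = np.asarray(frame2, float)
    Ix = np.gradient(f1, axis=1)          # horizontal spatial derivative
    Iy = np.gradient(f1, axis=0)          # vertical spatial derivative
    It = f2 - f1                          # temporal derivative
    sl = np.s_[y - win:y + win + 1, x - win:x + win + 1]
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v
```

The gradient constraint only holds for small displacements, which is exactly why the paper's high-frame-rate sampling (making fast motion look like slow motion between consecutive frames) suits this approach so well.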
99.
Intravascular clotting remains a major health problem in the United States, the most prominent forms being deep vein thrombosis, pulmonary embolism and thromboembolic stroke. Previous reports on the use of pyridine derivatives in cardiovascular drug development encouraged us to pursue new types of compounds based on a pyridine scaffold. Eleven pyridine derivatives (oximes, semicarbazones, N-oxides) previously synthesized in our laboratories were tested as anticoagulants on pooled normal plasma using the prothrombin time (PT) protocol. The best anticoagulant within the oxime series was compound AF4, within the oxime N-oxide series compound AF4-N-oxide, and within the semicarbazone series compound MD1-30Y. We also used a molecular modeling approach to guide our efforts and found good correlation between the coagulation data and computational energy scores. Molecular docking against the active site of thrombin was performed with the DOCK v5.2 package. The modeling results indicate that improved anticoagulant activity can be expected from functionalization at the three-position of the pyridine ring and from N-oxide formation. The results reported here demonstrate the suitability of DOCK in the lead optimization process.
100.
New viruses spread faster than ever, and current signature-based detection does not protect against these unknown viruses. Behavior-based detection is the currently preferred defense against unknown viruses; its drawbacks are that it detects only specific classes of viruses or succeeds only under certain conditions, and that it produces false positives. This paper presents a characterization of virus replication, the only virus characteristic guaranteed to be consistently present in all viruses. Two detection models based on virus replication are developed: one using operation-sequence matching and the other using frequency measures. Regression analyses were generated for both models, and a safe list is used to minimize false positives. In our testing using operation-sequence matching, over 250 viruses were detected with 43 subsequences, with minimal false negatives. The replication sequence of just one virus detected 130 viruses, 45% of all tested viruses. Our testing using frequency measures detected all test viruses with no false negatives. The paper shows that virus replication can be identified and used to detect both known and unknown viruses.
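Operation-sequence matching, as described above, amounts to checking whether a monitored operation trace embeds a known replication subsequence in order, though not necessarily contiguously. A minimal sketch follows; the operation names in the test are illustrative placeholders, not the paper's actual 43 signature subsequences:

```python
def contains_subsequence(trace, signature):
    """Return True if `signature` occurs in `trace` as an
    order-preserving (not necessarily contiguous) subsequence.

    Consumes the trace with a single forward pass: each signature
    operation must be found after the previous one's position.
    """
    it = iter(trace)
    return all(op in it for op in signature)
```

Because interleaved, unrelated operations are skipped, a virus cannot evade the match simply by padding its replication behavior with noise, which is what makes subsequence (rather than substring) matching the right primitive here.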