Search returned 7,177 results in 31 ms.
142.
Finding security vulnerabilities requires a different mindset than finding general faults in software—thinking like an attacker. Therefore, security engineers looking to prioritize security inspection and testing efforts may be better served by a prediction model that indicates security vulnerabilities rather than faults. At the same time, faults and vulnerabilities have commonalities that may allow development teams to use traditional fault prediction models and metrics for vulnerability prediction. The goal of our study is to determine whether fault prediction models can be used for vulnerability prediction or if specialized vulnerability prediction models should be developed when both models are built with traditional metrics of complexity, code churn, and fault history. We have performed an empirical study on a widely-used, large open source project, the Mozilla Firefox web browser, where 21% of the source code files have faults and only 3% of the files have vulnerabilities. Both the fault prediction model and the vulnerability prediction model provide similar ability in vulnerability prediction across a wide range of classification thresholds. For example, the fault prediction model provided recall of 83% and precision of 11% at classification threshold 0.6 and the vulnerability prediction model provided recall of 83% and precision of 12% at classification threshold 0.5. Our results suggest that fault prediction models based upon traditional metrics can substitute for specialized vulnerability prediction models. However, both fault prediction and vulnerability prediction models require significant improvement to reduce false positives while providing high recall.
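The recall and precision figures quoted above come from applying a classification threshold to the model's predicted probabilities. A minimal sketch of that computation (generic code, not the authors' implementation; the scores and labels below are made up for illustration):

```python
def precision_recall(scores, labels, threshold):
    """Classify files whose predicted probability meets the threshold,
    then compute precision and recall against the true labels."""
    predicted = [s >= threshold for s in scores]
    tp = sum(p and l for p, l in zip(predicted, labels))
    fp = sum(p and not l for p, l in zip(predicted, labels))
    fn = sum((not p) and l for p, l in zip(predicted, labels))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical predicted vulnerability probabilities and true labels
scores = [0.9, 0.7, 0.55, 0.4, 0.8, 0.3]
labels = [True, False, True, False, False, False]
print(precision_recall(scores, labels, 0.5))  # → (0.5, 1.0)
```

Sweeping `threshold` from 0 to 1 traces out the recall/precision trade-off the abstract describes.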
143.
As social media services such as Twitter and Facebook are gaining popularity, the amount of information published from those services is explosively growing. Most of them use feeds to facilitate distribution of the huge volume of content they publish. In this context, many users subscribe to feeds to acquire up-to-date information through feed aggregation services, and recent real-time search engines also increasingly utilize feeds to promptly find recent web content when it is produced. Accordingly, it is necessary for such services to effectively fetch feeds to minimize fetching delay while at the same time maximizing the number of fetched entries. Fetching delay is the time lag between entry publication and retrieval, which is primarily incurred by the finiteness of fetching resources. In this paper, we consider a polling-based approach among the methods applicable to fetching feeds, which is based on a specific schedule for visiting feeds. While the existing polling-based approaches have focused on the allocation of fetching resources to feeds in order to either reduce the fetching delay or increase the number of fetched entries, we propose a resource allocation policy that can optimize both objectives. Extensive experiments have been carried out to evaluate the proposed model in comparison with the existing alternative methods.
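One simple polling policy in this spirit can be sketched as follows (an illustrative sketch, not the paper's algorithm: it greedily splits a fixed per-cycle fetch budget across feeds by each feed's estimated posting rate, which is how polling schedulers typically trade delay against fetched entries):

```python
def allocate_fetches(post_rates, budget):
    """Split a per-cycle fetch budget across feeds in proportion to
    each feed's estimated posting rate (entries per hour)."""
    # Give every feed at least one visit, then distribute the rest greedily.
    alloc = [1] * len(post_rates)
    for _ in range(budget - len(post_rates)):
        # Assign the next fetch to the feed with the largest expected
        # number of new entries per visit under its current allocation.
        gains = [r / (a + 1) for r, a in zip(post_rates, alloc)]
        alloc[gains.index(max(gains))] += 1
    return alloc

# Hypothetical posting rates for three feeds, eight fetches per cycle
print(allocate_fetches([10.0, 3.0, 1.0], 8))  # → [6, 1, 1]
```

The fast-posting feed gets most of the visits, while slow feeds are still polled once per cycle so their entries are not delayed indefinitely.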
144.
In microscopic image processing for analyzing biological objects, structural characteristics of objects such as symmetry and orientation can be used as prior knowledge to improve the results. In this study, we incorporated the filamentous local structures of neurons into a statistical model of image patches and then devised an image processing method based on tensor factorization with image patch rotation. Tensor factorization enabled us to incorporate the correlation structure between neighboring pixels, and patch rotation helped us obtain image bases that reproduce the filamentous structures of neurons well. We applied the proposed model to a microscopic image and found significant improvement in image restoration performance over existing methods, even with a smaller number of bases.
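The patch-rotation idea can be illustrated independently of the tensor factorization itself: rotating each patch lets a single learned basis account for filamentous structures at several orientations. A toy pure-Python sketch (not the authors' implementation):

```python
def rotate90(patch):
    """Rotate a square image patch 90 degrees counter-clockwise."""
    return [list(row) for row in zip(*patch)][::-1]

def all_rotations(patch):
    """Return the patch at 0, 90, 180, and 270 degrees, so one basis
    patch can match a filament at any of four orientations."""
    rots = [patch]
    for _ in range(3):
        rots.append(rotate90(rots[-1]))
    return rots

patch = [[0, 1], [0, 1]]  # a tiny vertical "filament"
print(all_rotations(patch))
```

In a factorization pipeline, each observed patch would be compared against every rotation of each basis before assigning coefficients.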
145.
ABSTRACT

This study focuses on the decisive role played by the digital design environment in the cognitive design process and design thinking. To analyse the cognitive role of digital design tools, we carried out a protocol analysis of conventional design sketching and a 3D sculpture tool. Cognitive evaluation was a differentiating factor when considering the contextual role of the 3D sculpture tool in subsequent evaluations, non-sequential evaluations for conversion, and passive approaches within the design process. Cognitive evaluation played the following roles: validation, extension, navigation, exploration, and confirmation. The navigation, exploration, and extension roles played by non-sequential evaluation were mainly related to inductive design thinking. Finally, the types of cognitive evaluation and their roles when using the 3D sculpture tool differed according to the design thinking type. This study explored the multidimensional roles of cognitive evaluation using a 3D sculpture tool and its relationship with design thinking types.
146.
Abstract: Pedestrian detection techniques are important and challenging, especially for complex real-world scenes. They can be used for ensuring pedestrian safety, ADASs (advanced driver assistance systems), and safety surveillance systems. In this paper, we propose a novel approach for multi-person tracking-by-detection using deformable part models in a Kalman filtering framework. The Kalman filter is used to keep track of each person, and a unique label is assigned to each tracked individual. Under this approach, people can enter and leave the scene at any time. We test and demonstrate our results on the Caltech Pedestrian benchmark, which is two orders of magnitude larger than any other existing dataset and consists of pedestrians varying widely in appearance, pose, and scale. Complex situations such as people occluded by each other are handled gracefully, and individual persons can be tracked correctly after a group of people splits. Experiments confirm the real-time performance and robustness of our system in complex scenes. Our tracking model achieves a tracking accuracy of 72.8% and a tracking precision of 82.3%. We can further reduce false positives by 2.8% using Kalman filtering.
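The core of such a tracking-by-detection pipeline is a Kalman filter per tracked person. A one-dimensional constant-velocity version can be sketched as follows (illustrative only; the paper tracks 2-D positions of each detected pedestrian, and the noise parameters below are made-up):

```python
def kalman_1d(zs, q=1e-3, r=0.25):
    """Minimal 1-D constant-velocity Kalman filter.
    State: [position, velocity]; measurement: noisy position.
    q = process noise variance, r = measurement noise variance."""
    x = [zs[0], 0.0]                    # initial state estimate
    P = [[1.0, 0.0], [0.0, 1.0]]        # state covariance
    out = []
    for z in zs[1:]:
        # Predict (dt = 1): x = F x, P = F P F' + Q, with F = [[1,1],[0,1]]
        x = [x[0] + x[1], x[1]]
        P = [[P[0][0] + P[0][1] + P[1][0] + P[1][1] + q, P[0][1] + P[1][1]],
             [P[1][0] + P[1][1], P[1][1] + q]]
        # Update with measurement z (H = [1, 0])
        S = P[0][0] + r                 # innovation variance
        K = [P[0][0] / S, P[1][0] / S]  # Kalman gain
        y = z - x[0]                    # innovation
        x = [x[0] + K[0] * y, x[1] + K[1] * y]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
        out.append(x[0])
    return out

# Noisy detections of a pedestrian walking at roughly constant speed
print(kalman_1d([0.0, 1.1, 1.9, 3.2, 3.9, 5.1]))
```

The filter's predicted position is also what allows a detection in the next frame to be matched back to an existing track, which is how labels stay stable through occlusions.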
147.
This paper investigates an algorithm for robust fault diagnosis (FD) in uncertain robotic systems using a neural sliding mode (NSM) based observer strategy. A step-by-step design procedure is discussed to determine the accuracy of fault estimation. First, an uncertainty observer is designed to estimate the uncertainties based on a first neural network (NN1). Then, based on the estimated uncertainties, a fault diagnosis scheme is designed using an NSM observer consisting of a second neural network (NN2) and a second-order sliding mode (SOSM), connected serially. This type of observer scheme can reduce the chattering of the sliding mode (SM) and guarantee finite-time convergence of the neural network (NN). The obtained fault estimates are used for fault isolation as well as fault accommodation to self-correct the faulty system. Computer simulation results for a PUMA560 robot are shown to verify the effectiveness of the proposed strategy.
148.
For the Visible Korean Human (VKH), a male cadaver was serially ground off to acquire the serially sectioned images (SSIs) of a whole human body. Thereafter, more than 700 structures in the SSIs were outlined to produce detailed segmented images; the SSIs and segmented images were volume- and surface-reconstructed to create three-dimensional models. For outlining and reconstruction, popular software (Photoshop, MRIcro, Maya, AutoCAD, 3ds max, and Rhino) was mainly used; the technique can be reproduced by other investigators for creating their own images. For refining the segmentation and volume reconstruction, the VOXEL-MAN system was used. The continuously upgraded technique was applied to a female cadaver's pelvis to produce SSIs with 0.1 mm intervals and 0.1 mm × 0.1 mm pixels. The VKH data, distributed worldwide, encouraged researchers to develop virtual dissection, virtual endoscopy, and virtual lumbar puncture, contributing to medical education and clinical practice. In the future, a virtual image library including all the Visible Human Project data, Chinese Visible Human data, and VKH data will hopefully be established, from which users will be able to download data sets for medical applications.
149.
Probe design is one of the most important tasks in successful deoxyribonucleic acid microarray experiments. We propose a multiobjective evolutionary optimization method for oligonucleotide probe design based on the multiobjective nature of the probe design problem. The proposed multiobjective evolutionary approach has several distinguished features, compared with previous methods. First, the evolutionary approach can find better probe sets than existing simple filtering methods with fixed threshold values. Second, the multiobjective approach can easily incorporate the user's custom criteria or change the existing criteria. Third, our approach tries to optimize the combination of probes for the given set of genes, in contrast to other tools that independently search each gene for qualifying probes. Lastly, the multiobjective optimization method provides various sets of probe combinations, among which the user can choose, depending on the target application. The proposed method is implemented as a platform called EvoOligo and is available for service on the Web. We test the performance of EvoOligo by designing probe sets for 19 types of Human Papillomavirus and 52 genes in the Arabidopsis Calmodulin multigene family. The design results from EvoOligo are proven to be superior to those from well-known existing probe design tools, such as OligoArray and OligoWiz.
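A multiobjective optimizer like the one described returns not one answer but a set of non-dominated (Pareto-optimal) probe combinations for the user to choose among. A minimal sketch of that filtering step (generic code, not EvoOligo; the objective names below are hypothetical):

```python
def pareto_front(solutions):
    """Return the non-dominated solutions of a minimization problem.
    Each solution is a tuple of objective values (lower is better).
    A solution is dominated if another is no worse in every objective
    and strictly better in at least one."""
    front = []
    for s in solutions:
        dominated = any(
            other != s and all(o <= so for o, so in zip(other, s))
            for other in solutions
        )
        if not dominated:
            front.append(s)
    return front

# Hypothetical probe-set scores: (cross-hybridization risk, melting-temp deviation)
scores = [(0.1, 0.9), (0.4, 0.4), (0.9, 0.1), (0.5, 0.5), (0.8, 0.8)]
print(pareto_front(scores))  # → [(0.1, 0.9), (0.4, 0.4), (0.9, 0.1)]
```

The evolutionary search evolves a population toward this front; the final front is what gets presented to the user as alternative probe sets.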
150.
For more efficient and economical management of substations under SCADA (Supervisory Control and Data Acquisition) systems, the concept of a smart substation has been introduced using intelligent and ubiquitous IT (Information Technology) techniques. A multi-functional platform is needed to perform the more intelligent and ubiquitous functions of a smart substation effectively. In this paper, we propose such a multi-functional platform for implementing smart substations. Prototype hardware, functions, and communication interfaces on an embedded platform are introduced. We also suggest an operating system architecture for smart substations.