981.
Inductive power transfer (IPT) systems facilitate contactless power transfer between two sides, across an air gap, through weak magnetic coupling. However, IPT systems constitute a high-order resonant circuit and, as such, are difficult to design and control. To address the control problems of bidirectional IPT systems, this paper proposes a neural-network-based proportional-integral-derivative (PID) control strategy. In the proposed neural PID method, the PID gains \(K_{P}\), \(K_{I}\) and \(K_{D}\) are treated as the weights of a Gaussian potential function network (GPFN) and are adjusted using an online learning algorithm. In this manner, the neural PID controller has more flexibility and capability than a conventional PID controller with fixed gains. The convergence of the GPFN weight learning is guaranteed using the Lyapunov method. Simulations are used to verify the performance of the proposed controller.
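A minimal sketch of the gain-scheduling idea follows: the three PID gains are produced by a small Gaussian (RBF) network evaluated at the current tracking error, and the network weights are nudged by a gradient-style online update. The network size, update rule and learning rate here are illustrative assumptions; the paper derives its own Lyapunov-based adaptation law.

```python
import numpy as np

class NeuralPID:
    """Sketch: PID controller whose gains Kp, Ki, Kd are the outputs
    of a Gaussian potential function (RBF) network evaluated at the
    current tracking error. Hypothetical, not the paper's exact law."""

    def __init__(self, centers, width=0.5, lr=0.01):
        self.centers = np.asarray(centers)   # Gaussian kernel centres
        self.width = width                   # shared kernel width
        self.lr = lr
        # One weight row per gain: rows map to (Kp, Ki, Kd)
        self.W = np.random.uniform(0.0, 0.1, (3, len(self.centers)))
        self.integral = 0.0
        self.prev_err = 0.0

    def _phi(self, e):
        # Gaussian potential functions of the current error
        return np.exp(-((e - self.centers) ** 2) / (2 * self.width ** 2))

    def step(self, err, dt):
        phi = self._phi(err)
        kp, ki, kd = self.W @ phi            # gains from the network
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        u = kp * err + ki * self.integral + kd * deriv
        # Gradient-style online update (assumes positive plant gain):
        # push each gain's weights to reduce the squared tracking error.
        grad = err * np.array([err, self.integral, deriv])
        self.W += self.lr * np.outer(grad, phi)
        self.prev_err = err
        return u
```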
982.
With the advent of low-cost 3D sensors and 3D printers, 3D surface reconstruction of scenes and objects has become an important research topic in recent years. In this work, we propose an automatic (unsupervised) method for 3D surface reconstruction from raw, unorganized point clouds acquired using low-cost 3D sensors. We have modified the growing neural gas network, which is a suitable model because of its flexibility, rapid adaptation and excellent quality of representation, to perform 3D surface reconstruction of different real-world objects and scenes. Improvements over the original algorithm include considering colour and surface-normal information of the input data during the learning stage and creating complete triangular meshes instead of basic wire-frame representations. The proposed method is able to successfully create 3D faces online, whereas existing 3D reconstruction methods based on self-organizing maps require post-processing steps to close the gaps and holes produced during reconstruction. A set of quantitative and qualitative experiments was carried out to validate the proposed method. The method has been implemented and tested on real data and found to be effective at reconstructing noisy point clouds obtained using low-cost 3D sensors.
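The core of growing neural gas is a competitive-Hebbian adaptation step; a bare-bones sketch is shown below. It covers only the classic winner/neighbour update and edge ageing, omitting periodic node insertion, error accumulation and the paper's colour/normal extensions, so the parameter values are illustrative assumptions.

```python
import numpy as np

def gng_step(points, nodes, edges, eps_w=0.05, eps_n=0.006, max_age=50):
    """One adaptation step of a minimal Growing Neural Gas sketch.
    points: (m, 3) input cloud; nodes: (n, 3) unit positions;
    edges: dict {(i, j): age} with i < j."""
    x = points[np.random.randint(len(points))]    # random input sample
    d = np.linalg.norm(nodes - x, axis=1)
    s1, s2 = np.argsort(d)[:2]                    # two nearest units
    nodes[s1] += eps_w * (x - nodes[s1])          # move winner towards x
    for (i, j) in list(edges):
        if s1 in (i, j):
            other = j if i == s1 else i
            nodes[other] += eps_n * (x - nodes[other])  # drag neighbours
            edges[(i, j)] += 1                    # age incident edges
            if edges[(i, j)] > max_age:
                del edges[(i, j)]                 # prune stale edges
    edges[tuple(sorted((s1, s2)))] = 0            # (re)connect winners
    return nodes, edges
```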
983.
With the rapid development of visual digital media, the demand for better quality of service has increased the pressure on broadcasters to automate their error detection and restoration activities in order to preserve their archives. Digital dropout is one of the defects that affect archived visual material, and it tends to occur on a block-by-block basis (blocks of size 8 × 8). It is well established that the human visual system (HVS) is highly adapted to the statistics of its natural visual environment. Consequently, in this paper we formulate digital dropout detection as a classification problem that predicts a block's label based on statistical features. These statistical features are indicative of perceptual quality relevant to human visual perception and allow pristine images to be distinguished from distorted ones. The idea is to extract discriminative block statistical features based on discrete cosine transform (DCT) coefficients and to determine an optimal neighborhood sampling strategy that enhances the discriminative ability of the block representation. Since this spatial, frame-based approach has no dependency on motion computation, it works well in the presence of fast-moving objects. Experiments are performed on video archives to evaluate the efficacy of the proposed technique.
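As a concrete illustration, the sketch below computes simple per-block DCT statistics of a grayscale frame; any classifier can then be trained on the resulting feature rows. The specific statistics chosen here (DC term, AC energy, a kurtosis proxy) are assumptions for illustration, not the paper's exact descriptor.

```python
import numpy as np
from scipy.fft import dctn

def block_dct_features(frame, block=8):
    """Sketch: per-block DCT statistics for dropout classification.
    frame: 2-D grayscale array; returns one feature row per 8x8 block."""
    h, w = frame.shape
    feats = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            coeffs = dctn(frame[y:y + block, x:x + block], norm='ortho')
            ac = coeffs.ravel()[1:]              # drop the DC coefficient
            ac_energy = float(np.sum(ac ** 2))
            kurt_proxy = float(np.mean(ac ** 4) /
                               (np.mean(ac ** 2) ** 2 + 1e-12))
            feats.append([coeffs[0, 0],          # DC (block mean level)
                          ac_energy,             # AC energy
                          kurt_proxy])           # peakedness of AC terms
    return np.asarray(feats)
```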
984.
Audio fingerprinting allows us to label an unidentified music fragment using a previously generated database. The use of spectral landmarks aims to provide robustness, allowing a certain level of noise to be present in the audio query. This family of audio identification algorithms has several configuration parameters whose values are usually chosen based on the researcher's knowledge, previously published experimentation, or simply trial and error. In this paper we describe the complete optimisation process of a landmark-based music recognition system using genetic algorithms. We encode the structure of the algorithm as a chromosome by transforming its most relevant parameters into genes and building an appropriate fitness evaluation method. The optimised parameters are used to set up a complete system that is compared with a non-optimised one under an unbiased evaluation model.
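The chromosome encoding and evolution loop can be sketched as follows. The gene names, parameter ranges and truncation-selection scheme are illustrative assumptions, not the paper's set; the fitness function would measure the recognition rate on a noisy query set.

```python
import random

# Hypothetical parameter ranges for a landmark fingerprinter:
# each parameter becomes one gene of the chromosome.
GENES = {
    'fft_size':        [512, 1024, 2048, 4096],
    'peaks_per_frame': list(range(1, 11)),
    'fan_out':         list(range(1, 16)),
    'target_dt_max':   list(range(16, 129)),
}

def random_chromosome():
    return {g: random.choice(values) for g, values in GENES.items()}

def crossover(a, b):
    # Uniform crossover: each gene comes from either parent
    return {g: random.choice((a[g], b[g])) for g in GENES}

def mutate(c, rate=0.1):
    return {g: (random.choice(GENES[g]) if random.random() < rate else c[g])
            for g in GENES}

def evolve(fitness, pop_size=20, generations=30):
    """fitness(chromosome) -> recognition rate on a noisy query set."""
    pop = [random_chromosome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 2]              # truncation selection
        pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)
```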
985.
Many recent software engineering papers have examined duplicate issue reports. Thus far, duplicate reports have been considered a hindrance to developers and a drain on their resources; as a result, prior research in this area focuses on proposing automated approaches to accurately identify duplicate reports. However, no prior study attempts to quantify the actual effort that is spent on identifying duplicate issue reports. In this paper, we empirically examine the effort needed to manually identify duplicate reports in four open source projects: Firefox, SeaMonkey, Bugzilla and Eclipse-Platform. Our results show that: (i) more than 50 % of duplicate reports are identified within half a day, and most are identified without any discussion and with the involvement of very few people; (ii) a classification model built from a set of factors extracted from duplicate issue reports classifies duplicates according to the effort needed to identify them with a precision of 0.60 to 0.77, a recall of 0.23 to 0.96, and an ROC area of 0.68 to 0.80; and (iii) factors that capture developer awareness of a duplicate issue's peers (i.e., other duplicates of that issue) and the textual similarity of a new report to prior reports are the most influential factors in our models. Our findings highlight the need for effort-aware evaluation of approaches that identify duplicate issue reports, since identifying a considerable proportion of duplicate reports (over 50 %) appears to be a relatively trivial task for developers. To better assist developers, research on identifying duplicate issue reports should put greater emphasis on assisting developers with effort-consuming duplicate issues.
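A toy version of such an effort classifier is sketched below: each duplicate report becomes a feature row, a random forest is fitted, and feature importances hint at the most influential factors. The feature names echo the factor families named in the abstract (peer awareness, textual similarity), but the exact factors, labels and model choice are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical feature rows, one per duplicate report
feature_names = ['known_peers', 'max_textual_sim', 'desc_length', 'n_comments']
X = np.array([
    [3, 0.92, 140, 0],   # quickly identified duplicate
    [0, 0.31, 820, 7],   # effort-consuming duplicate
    [1, 0.88, 200, 1],
    [0, 0.12, 450, 5],
])
y = np.array([0, 1, 0, 1])   # 0 = identified fast, 1 = effort-consuming

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
# Feature importances suggest which factors drive identification effort
for name, importance in zip(feature_names, clf.feature_importances_):
    print(f'{name}: {importance:.2f}')
```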
986.
Software engineering activities are information-intensive. Research proposes information retrieval (IR) techniques to support engineers in their daily tasks, such as establishing and maintaining traceability links, fault identification, and software maintenance. We describe an engineering task, test case selection, and illustrate our problem analysis and solution discovery process. The objective of the study is to understand to what extent IR techniques (one potential solution) can be applied to test case selection and provide decision support in a large-scale industrial setting. We analyse, in the context of the studied company, how test case selection is performed and design a series of experiments evaluating the performance of different IR techniques. Each experiment provides lessons learned from implementation, execution, and results, feeding into its successor. The three experiments led to the following observations: (1) there is a lack of research on scalable parameter optimization of IR techniques for software engineering problems; (2) scaling IR techniques to industrial data is challenging, in particular for latent semantic analysis; (3) the IR context places constraints on the empirical evaluation of IR techniques, requiring more research on developing valid statistical approaches. We believe that our experiences in conducting a series of IR experiments with industry-grade data are valuable for peer researchers, so that they can avoid the pitfalls we encountered. Furthermore, we identify challenges that need to be addressed in order to bridge the gap between laboratory IR experiments and real applications of IR in industry.
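To make the latent-semantic-analysis pipeline concrete, the sketch below ranks test-case descriptions against a change request using TF-IDF followed by truncated SVD. The corpus, query and SVD rank are illustrative assumptions; at industrial scale, the rank k is exactly the kind of parameter whose scalable optimization the study found under-researched.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus of test-case descriptions plus one change request
test_cases = [
    'verify login rejects expired passwords',
    'measure throughput of the message broker under load',
    'check password reset email is sent within 60 seconds',
]
query = ['change request: harden password expiry handling']

tfidf = TfidfVectorizer(stop_words='english')
X = tfidf.fit_transform(test_cases + query)
lsa = TruncatedSVD(n_components=2, random_state=0)  # rank k << vocabulary
Z = lsa.fit_transform(X)

# Rank test cases by cosine similarity to the query in LSA space
sims = cosine_similarity(Z[-1:], Z[:-1]).ravel()
for score, tc in sorted(zip(sims, test_cases), reverse=True):
    print(f'{score:.2f}  {tc}')
```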
987.
Frameworks are widely used in modern software development to reduce development costs. They are accessed through their Application Programming Interfaces (APIs), which specify the contracts with client programs. When frameworks evolve, API backward compatibility cannot always be guaranteed, and client programs must upgrade to use the new releases. Because framework upgrades are not cost-free, observing API changes and usages together at a fine-grained level is necessary to help developers understand, assess, and forecast the cost of each framework upgrade. Whereas previous work studied API changes in frameworks and API usages in client programs separately, we analyse and classify API changes and usages together in 22 framework releases from the Apache and Eclipse ecosystems and their client programs. We find that (1) missing classes and methods occur more often in frameworks, and affect client programs more often, than the other API change types; (2) missing interfaces occur rarely in frameworks but affect client programs often; (3) framework APIs are used, on average, in 35 % of client classes and interfaces; (4) most such usages could be encapsulated locally and reduced in number; and (5) about 11 % of API usages could cause ripple effects in client programs when these APIs change. Based on these findings, we provide suggestions for developers and researchers to reduce the impact of API evolution through language mechanisms and design strategies.
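The "missing class" and "missing method" change types can be detected by diffing the public signature sets of two releases, as in the sketch below. How the signature sets are extracted (from bytecode, source, or documentation) is assumed to happen upstream, and the example API is hypothetical.

```python
# Hypothetical public API of two framework releases:
# class name -> set of public method names
old_api = {
    'util.Cache': {'get', 'put', 'evict'},
    'util.Clock': {'now'},
}
new_api = {
    'util.Cache': {'get', 'put'},     # 'evict' removed in the new release
                                      # 'util.Clock' removed entirely
}

def diff_api(old, new):
    """Return (missing classes, missing methods) between two releases."""
    missing_classes = sorted(set(old) - set(new))
    missing_methods = sorted(
        (cls, method)
        for cls in set(old) & set(new)     # classes present in both
        for method in old[cls] - new[cls]  # methods dropped in new
    )
    return missing_classes, missing_methods

classes, methods = diff_api(old_api, new_api)
print('missing classes:', classes)    # ['util.Clock']
print('missing methods:', methods)    # [('util.Cache', 'evict')]
```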
988.
This paper proposes a novel gait generation method for reliably enforcing a constraint on the impact posture in limit cycle walking. First, we introduce an underactuated rimless wheel model without ankle-joint actuation and, through input–output linearization, formulate a state-space realization of the control output using the stance-leg angle as a time parameter. Second, we determine a control input that moves the control output to a terminal value at a target stance-leg angle during the single-support phase. Third, we conduct numerical simulations to observe the fundamental gait properties and discuss the relationship between gait symmetry and mechanical energy restoration. Furthermore, we mathematically prove the asymptotic stability of the generated walking gait by analytically deriving the restored mechanical energy.
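The second step, driving the output to a terminal value exactly at a target stance-leg angle, can be pictured with a phase-parameterized reference like the sketch below, where the stance-leg angle plays the role of time. The quintic smooth-step profile is an illustrative assumption, not the control law derived in the paper.

```python
import numpy as np

def settling_output(theta, theta0, thetaf, y0, yf):
    """Sketch: desired control output as a function of the stance-leg
    angle theta (the 'time' parameter). A quintic smooth-step drives
    the output from y0 to yf with zero first and second derivatives at
    both ends, so the terminal value yf is reached exactly when theta
    hits the target angle thetaf."""
    s = np.clip((theta - theta0) / (thetaf - theta0), 0.0, 1.0)
    blend = 10 * s**3 - 15 * s**4 + 6 * s**5   # smooth-step polynomial
    return y0 + (yf - y0) * blend
```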
990.
Finding dense subgraphs is an important problem in graph mining with many practical applications. At the same time, while large real-world networks are known to have many communities that are not well separated, the majority of existing work focuses on finding a single densest subgraph. Hence, it is natural to consider the question of finding the top-k densest subgraphs. One major challenge in addressing this question is how to handle overlaps: eliminating overlaps completely is one option, but it may lead to extracting subgraphs that are not as dense as would be possible if a limited amount of overlap were allowed. Furthermore, overlaps are desirable, as in most real-world graphs there are vertices that belong to more than one community and, thus, to more than one densest subgraph. In this paper we study the problem of finding the top-k overlapping densest subgraphs, and we present a new approach that improves over existing techniques, both in theory and in practice. First, we reformulate the problem definition in a way that allows us to obtain an algorithm with a constant-factor approximation guarantee. Our approach relies on techniques for solving the max-sum diversification problem, which we extend to make them applicable to our setting. Second, we evaluate our algorithm on a collection of benchmark datasets and show that it convincingly outperforms previous methods, both in quality and in efficiency.
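The single-densest-subgraph building block can be approximated by the classic greedy peeling procedure sketched below, using the average-degree objective |E|/|V|. The paper's contribution layers a max-sum-diversification step over such extractions to obtain overlapping top-k subgraphs with a constant-factor guarantee; that step is not implemented here.

```python
import networkx as nx

def peel_densest(G):
    """Greedy peeling (Charikar-style 1/2-approximation): repeatedly
    remove a minimum-degree vertex and keep the intermediate subgraph
    with the highest density |E|/|V|."""
    H = G.copy()
    best_density = H.number_of_edges() / H.number_of_nodes()
    best_nodes = set(H)
    while H.number_of_nodes() > 1:
        v = min(H, key=H.degree)          # peel a minimum-degree vertex
        H.remove_node(v)
        density = H.number_of_edges() / H.number_of_nodes()
        if density > best_density:
            best_density, best_nodes = density, set(H)
    return best_density, best_nodes

# Usage on a standard benchmark graph
G = nx.karate_club_graph()
density, nodes = peel_densest(G)
print(f'density {density:.2f} on {len(nodes)} vertices')
```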