971.
Md. Monirul Hoque, Gihun Song, Kiok Ahn, Byungyong Ryu, Md. Tauhid Bin Iqbal, Oksam Chae. Multimedia Tools and Applications, 2016, 75(8): 4259-4283
With the rapid development of visual digital media, the demand for better quality of service has increased the pressure on broadcasters to automate the error detection and restoration activities that preserve their archives. Digital dropout is one of the defects that affect archived visual material and tends to occur on a block-by-block basis (blocks of size 8 × 8). It is well established that the human visual system (HVS) is highly adapted to the statistics of its natural visual environment. Consequently, in this paper, we formulate digital dropout detection as a classification problem that predicts a block's label from statistical features. These statistical features are indicative of perceptual quality as judged by human vision, and allow pristine images to be distinguished from distorted ones. The idea is to extract discriminant block statistical features based on discrete cosine transform (DCT) coefficients and to determine an optimal neighborhood sampling strategy that enhances the discriminative ability of the block representation. Since this spatial, frame-based approach does not depend on any motion computation, it remains effective in the presence of fast-moving objects. Experiments on video archives evaluate the efficacy of the proposed technique.
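As an illustration of this style of feature extraction (not the authors' exact feature set), a minimal sketch of DCT-based block statistics might look like the following; the four descriptors are hypothetical choices:

```python
import numpy as np

def dct2(block):
    """2-D type-II DCT of an n x n block, built from the orthonormal 1-D DCT matrix."""
    n = block.shape[0]
    k = np.arange(n)
    # C[k, j] = sqrt(2/n) * cos(pi * (2j + 1) * k / (2n)), row 0 rescaled to 1/sqrt(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C @ block @ C.T

def block_features(block):
    """Illustrative statistical descriptors of the AC coefficients of one block."""
    coeffs = dct2(block.astype(float))
    ac = coeffs.flatten()[1:]              # drop the DC term
    return np.array([ac.mean(),            # mean AC level
                     ac.std(),             # spread of AC energy
                     np.abs(ac).max(),     # strongest single coefficient
                     np.mean(np.abs(ac) < 1e-6)])  # fraction of near-zero ACs
```

For a smooth, pristine block nearly all AC energy vanishes, while a block hit by digital dropout typically shows an abnormal AC distribution; a classifier trained on such descriptors exploits exactly this difference.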
972.
Audio fingerprinting allows us to label an unidentified music fragment by matching it against a previously generated database. The use of spectral landmarks aims at a robustness that tolerates a certain level of noise in the audio query. This family of audio identification algorithms has several configuration parameters whose values are usually chosen based on the researcher's experience, previously published experiments, or simple trial and error. In this paper we describe the full optimisation process of a landmark-based music recognition system using genetic algorithms. We encode the structure of the algorithm as a chromosome by mapping its most relevant parameters onto genes, and we build an appropriate fitness evaluation method. The optimised parameters are then used to set up a complete system, which we compare with a non-optimised one using an unbiased evaluation model.
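A minimal sketch of such an optimisation loop is shown below. The three genes (a peak threshold, a fan-out count, and a target-zone length) are hypothetical landmark parameters, kept continuous for simplicity; in practice the fitness function would run the full recognition system and measure identification accuracy:

```python
import random

# Hypothetical landmark-extraction parameters encoded as genes:
# (peak_threshold_dB, fan_out, target_zone_frames) -- names are illustrative.
BOUNDS = [(10.0, 60.0), (1, 15), (10, 200)]

def random_chromosome():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def crossover(a, b):
    """Single-point crossover between two parent chromosomes."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(c, rate=0.2):
    """Resample each gene uniformly within its bounds with probability `rate`."""
    return [random.uniform(lo, hi) if random.random() < rate else g
            for g, (lo, hi) in zip(c, BOUNDS)]

def evolve(fitness, pop_size=20, generations=30):
    """Elitist GA: keep the top half, refill with mutated offspring of elites."""
    pop = [random_chromosome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]
        pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)
```

Because the elite always survives, the best chromosome found so far is never lost between generations.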
973.
974.
975.
Many recent software engineering papers have examined duplicate issue reports. Thus far, duplicate reports have been considered a hindrance to developers and a drain on their resources. As a result, prior research in this area focuses on proposing automated approaches to accurately identify duplicate reports. However, no study has attempted to quantify the actual effort that is spent on identifying duplicate issue reports. In this paper, we empirically examine the effort needed to manually identify duplicate reports in four open source projects: Firefox, SeaMonkey, Bugzilla and Eclipse-Platform. Our results show that: (i) more than 50% of the duplicate reports are identified within half a day, and most are identified without any discussion and with the involvement of very few people; (ii) a classification model built using a set of factors extracted from duplicate issue reports classifies duplicates according to the effort needed to identify them, with a precision of 0.60 to 0.77, a recall of 0.23 to 0.96, and an ROC area of 0.68 to 0.80; and (iii) factors that capture developer awareness of a duplicate issue's peers (i.e., other duplicates of that issue) and the textual similarity of a new report to prior reports are the most influential factors in our models. Our findings highlight the need for effort-aware evaluation of approaches that identify duplicate issue reports, since identifying a considerable share of duplicate reports (over 50%) appears to be a relatively trivial task for developers. To better assist developers, research on identifying duplicate issue reports should put greater emphasis on the effort-consuming duplicates.
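The textual-similarity factor mentioned above can be sketched with a simple bag-of-words cosine similarity; this is an illustrative stand-in, not the paper's exact feature definition:

```python
import math
from collections import Counter

def cosine_sim(a, b):
    """Cosine similarity between two whitespace-tokenized bags of words."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def max_similarity_feature(new_report, prior_reports):
    """One candidate factor: similarity of a new report to its closest prior report."""
    return max((cosine_sim(new_report, p) for p in prior_reports), default=0.0)
```

A near-duplicate scores close to 1.0 against its peer and is presumably cheap to flag, which is the intuition behind using similarity as an effort predictor.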
976.
Michael Unterkalmsteiner, Tony Gorschek, Robert Feldt, Niklas Lavesson. Empirical Software Engineering, 2016, 21(6): 2324-2365
Software engineering activities are information-intensive. Research proposes Information Retrieval (IR) techniques to support engineers in daily tasks such as establishing and maintaining traceability links, fault identification, and software maintenance. We describe one engineering task, test case selection, and illustrate our problem analysis and solution discovery process. The objective of the study is to understand to what extent IR techniques (one potential solution) can be applied to test case selection and provide decision support in a large-scale industrial setting. We analyze, in the context of the studied company, how test case selection is performed, and we design a series of experiments evaluating the performance of different IR techniques. Each experiment provides lessons learned from implementation, execution, and results, feeding into its successor. The three experiments led to the following observations: 1) there is a lack of research on scalable parameter optimization of IR techniques for software engineering problems; 2) scaling IR techniques to industrial data is challenging, in particular for latent semantic analysis; 3) the IR context poses constraints on the empirical evaluation of IR techniques, requiring more research on valid statistical approaches. We believe our experience in conducting a series of IR experiments with industry-grade data is valuable to peer researchers, helping them avoid the pitfalls we encountered. Furthermore, we identify challenges that must be addressed to bridge the gap between laboratory IR experiments and real industrial applications of IR.
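As a concrete example of the simplest IR technique in such a pipeline, the sketch below ranks test-case descriptions against a change description using TF-IDF and cosine similarity (illustrative only; the study also evaluates more elaborate techniques such as latent semantic analysis):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build smoothed TF-IDF vectors for a small corpus of text documents."""
    tokenized = [d.lower().split() for d in docs]
    df = Counter(t for doc in tokenized for t in set(doc))  # document frequency
    n = len(docs)
    vecs = []
    for doc in tokenized:
        tf = Counter(doc)
        vecs.append({t: tf[t] * math.log((1 + n) / (1 + df[t])) for t in tf})
    return vecs

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_test_cases(change_description, test_case_docs):
    """Return test-case indices ordered by decreasing similarity to the change."""
    vecs = tfidf_vectors([change_description] + test_case_docs)
    query, tests = vecs[0], vecs[1:]
    return sorted(range(len(tests)), key=lambda i: cosine(query, tests[i]),
                  reverse=True)
```

The scaling observations above bite precisely here: a vocabulary of millions of terms and tens of thousands of documents makes the vector construction and the parameter choices (weighting, smoothing) far less trivial than in this toy version.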
977.
Fumihiko Asano. Multibody System Dynamics, 2016, 37(2): 227-244
This paper proposes a novel gait generation method for reliably enforcing a constraint on the impact posture in limit-cycle walking. First, we introduce an underactuated rimless wheel model without ankle-joint actuation and, through input–output linearization, formulate a state-space realization of the control output using the stance-leg angle as a time parameter. Second, we determine a control input that drives the control output to a terminal value at a target stance-leg angle during the single-support phase. Third, we conduct numerical simulations to observe the fundamental gait properties and discuss the relationship between gait symmetry and mechanical energy restoration. Furthermore, we mathematically prove the asymptotic stability of the generated walking gait by analytically deriving the restored mechanical energy.
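For intuition on the energy bookkeeping, the classical passive rimless wheel (a point mass at the hub, a simpler relative of the underactuated model above) has a closed-form impact law: each spoke collision scales the angular velocity by cos 2α, where α is the half inter-spoke angle. A sketch under those point-mass assumptions:

```python
import math

def post_impact_velocity(omega_minus, alpha):
    """Classical rimless-wheel impact law for a hub point mass: the angular
    velocity is reduced by a factor cos(2*alpha) at each spoke collision."""
    return omega_minus * math.cos(2 * alpha)

def energy_retention(alpha):
    """Fraction of rotational kinetic energy retained across one impact."""
    return math.cos(2 * alpha) ** 2
```

A steady gait exists when the energy lost at impact, 1 − cos²(2α) of the pre-impact kinetic energy, exactly balances the energy gained during the step; the paper's analysis of restored mechanical energy plays the analogous role for the actuated model.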
978.
Esther Galbrun, Aristides Gionis, Nikolaj Tatti. Data Mining and Knowledge Discovery, 2016, 30(5): 1134-1165
Finding dense subgraphs is an important problem in graph mining with many practical applications. At the same time, although large real-world networks are known to have many communities that are not well separated, the majority of existing work focuses on finding a single densest subgraph. Hence, it is natural to consider the problem of finding the top-k densest subgraphs. One major challenge is how to handle overlaps: eliminating overlaps completely is one option, but it may yield subgraphs that are not as dense as would be possible if a limited amount of overlap were allowed. Moreover, overlaps are desirable, since in most real-world graphs some vertices belong to more than one community, and thus to more than one densest subgraph. In this paper we study the problem of finding top-k overlapping densest subgraphs, and we present a new approach that improves on existing techniques, both in theory and in practice. First, we reformulate the problem definition so that we can obtain an algorithm with a constant-factor approximation guarantee. Our approach relies on techniques for solving the max-sum diversification problem, which we extend to make them applicable to our setting. Second, we evaluate our algorithm on a collection of benchmark datasets and show that it convincingly outperforms previous methods, in terms of both quality and efficiency.
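A standard building block for the single-densest-subgraph problem (density |E|/|V|) is Charikar's greedy peeling, which yields a 1/2-approximation; the paper's constant-factor algorithm for the overlapping top-k variant is more involved, but peeling conveys the flavor:

```python
import heapq

def densest_subgraph_peel(adj):
    """Charikar's greedy peeling: repeatedly delete a minimum-degree vertex and
    remember the intermediate subgraph with the highest density |E|/|V|.
    `adj` maps each vertex to its set of neighbors (undirected, no self-loops)."""
    adj = {v: set(ns) for v, ns in adj.items()}      # private mutable copy
    edges = sum(len(ns) for ns in adj.values()) // 2
    best_density, best_set = edges / max(len(adj), 1), set(adj)
    heap = [(len(ns), v) for v, ns in adj.items()]
    heapq.heapify(heap)
    while len(adj) > 1:
        d, v = heapq.heappop(heap)
        if v not in adj or len(adj[v]) != d:
            continue                                  # stale heap entry
        edges -= len(adj[v])
        for u in adj.pop(v):
            adj[u].discard(v)
            heapq.heappush(heap, (len(adj[u]), u))    # refresh neighbor's key
        density = edges / len(adj)
        if density > best_density:
            best_density, best_set = density, set(adj)
    return best_set, best_density
```

Running k rounds of peeling with some removal or penalty on already-reported vertices is the naive route to top-k subgraphs; the overlap control studied in the paper is exactly what that naive route lacks.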
979.
One issue in the dynamic simulation of flexible multibody systems is poor computational efficiency, caused by the high-frequency components that a deformable body introduces into the solution. Standard explicit numerical methods must take very small time steps to satisfy the absolute-stability condition for these high-frequency components, and the computational efficiency deteriorates accordingly. In this study, a hybrid integration scheme is applied to solve the equations of motion of a flexible multibody system with better computational efficiency. Computation times and simulation results are compared between the hybrid scheme and conventional methods. The results demonstrate that the efficiency of a flexible multibody simulation can be improved by using the hybrid scheme.
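The stability issue can be seen on the scalar test equation y' = λy, with a large negative λ standing in for a stiff high-frequency mode: explicit Euler is stable only when |1 + hλ| ≤ 1, while implicit Euler is stable for any step size. A minimal sketch (not the paper's hybrid scheme, which combines such methods):

```python
def explicit_euler(lam, h, steps, y0=1.0):
    """Explicit Euler on y' = lam*y; each step multiplies y by (1 + h*lam),
    so the iteration diverges whenever |1 + h*lam| > 1."""
    y = y0
    for _ in range(steps):
        y += h * lam * y
    return y

def implicit_euler(lam, h, steps, y0=1.0):
    """Implicit Euler on the same test equation; each step multiplies y by
    1/(1 - h*lam), which has magnitude < 1 for any h > 0 when lam < 0."""
    y = y0
    for _ in range(steps):
        y = y / (1 - h * lam)
    return y
```

With λ = −1000 and h = 0.01, explicit Euler blows up while implicit Euler decays; forcing the explicit method to behave requires h ≤ 0.002 here, which is exactly the small-step penalty the hybrid scheme is designed to avoid.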
980.
Bingbing Nie, Taewung Kim, Yan Wang, Varun Bollapragada, Tom Daniel, Jeff R. Crandall. Multibody System Dynamics, 2016, 38(3): 297-316
Dimensional scaling approaches are widely used to develop multibody human models in injury biomechanics research. Given the limited experimental data for any particular anthropometry, a validated model can be scaled to different sizes to reflect the biological variance of the population and to characterize the human response. This paper compares two scaling approaches at the whole-body level: the conventional mass-based scaling approach, which assumes geometric similarity, and the structure-based approach, which additionally assumes structural similarity by using idealized mechanical models to account for the specific anatomy and expected loading conditions. Given the use of exterior body dimensions and a uniform Young's modulus, the two approaches produced close values of the scaling factors for most body regions, with an average difference of 1.5 % in the force scaling factors and 13.5 % in the moment scaling factors. One exception was the thoracic model, with a 19.3 % difference in the deflection scaling factor. As an application example, two 6-year-old child models were generated from a baseline adult model and evaluated using recent biomechanical data from cadaveric pediatric experiments. The scaled models predicted similar impact responses of the thorax and lower extremity, which fell within the experimental corridors, and suggested further consideration of age-specific structural changes of the pelvis. Towards improved scaling methods for biofidelic human models, this comparative analysis suggests further investigation of interior anatomical geometry and detailed biological material properties across the demographic range of the population.
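Under the conventional mass-based approach (geometric similarity, equal tissue density, uniform Young's modulus), every scaling factor follows from the characteristic length ratio λ alone; the exponents below are the standard ones under those assumptions, not values taken from this paper:

```python
def geometric_scaling_factors(length_ratio):
    """Conventional mass-based (geometric-similarity) scaling factors, assuming
    equal tissue density and a uniform Young's modulus between the subject and
    the reference. length_ratio = subject characteristic length / reference."""
    lam = length_ratio
    return {
        "length": lam,
        "mass": lam ** 3,          # volume scales as lam^3 at equal density
        "time": lam,
        "force": lam ** 2,         # equal stress over lam^2 cross-section
        "moment": lam ** 3,        # force (lam^2) times lever arm (lam)
        "deflection": lam,
        "acceleration": 1.0 / lam,
    }
```

The structure-based approach discussed above departs from this table exactly where anatomy breaks geometric similarity, which is why the thoracic deflection factor shows the largest disagreement.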