3,317 matching results (search time: 10 ms)
41.
Emad Shihab, Akinori Ihara, Yasutaka Kamei, Walid M. Ibrahim, Masao Ohira, Bram Adams, Ahmed E. Hassan, Ken-ichi Matsumoto. Empirical Software Engineering, 2013, 18(5): 1005-1042
Bug fixing accounts for a large share of software maintenance resources. Generally, bugs are reported, fixed, verified and closed. In some cases, however, bugs have to be re-opened. Re-opened bugs increase maintenance costs, degrade the overall user-perceived quality of the software and lead to unnecessary rework by busy practitioners. In this paper, we study and predict re-opened bugs through a case study on three large open source projects: Eclipse, Apache and OpenOffice. We structure our study along four dimensions: (1) the work habits dimension (e.g., the weekday on which the bug was initially closed), (2) the bug report dimension (e.g., the component in which the bug was found), (3) the bug fix dimension (e.g., the amount of time it took to perform the initial fix) and (4) the team dimension (e.g., the experience of the bug fixer). We build decision trees using the aforementioned factors to predict re-opened bugs, and we perform top node analysis to determine which factors are the most important indicators of whether or not a bug will be re-opened. Our study shows that the comment text and the last status of the bug when it is initially closed are the most important such factors. Using a combination of these dimensions, we can build explainable prediction models that achieve a precision between 52.1% and 78.6% and a recall between 70.5% and 94.1% when predicting whether a bug will be re-opened. We find that the factors that best indicate which bugs might be re-opened vary by project: the comment text is the most important factor for the Eclipse and OpenOffice projects, while the last status is the most important one for Apache. These factors should be closely examined in order to reduce the maintenance cost due to re-opened bugs.
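The top node analysis described above amounts to ranking factors by how well they would split re-opened from non-re-opened bugs at the root of a decision tree. A minimal sketch of that ranking using information gain on toy data (the factor names, status values and records below are hypothetical illustrations, not data from the study):

```python
from math import log2

def entropy(labels):
    """Shannon entropy of a binary label list."""
    n = len(labels)
    result = 0.0
    for v in set(labels):
        p = labels.count(v) / n
        result -= p * log2(p)
    return result

def information_gain(records, labels, factor):
    """Entropy reduction obtained by splitting on one categorical factor."""
    gain = entropy(labels)
    n = len(labels)
    for value in set(r[factor] for r in records):
        subset = [lab for r, lab in zip(records, labels) if r[factor] == value]
        gain -= len(subset) / n * entropy(subset)
    return gain

# Toy bug reports: label 1 = re-opened, 0 = stayed closed (hypothetical).
records = [
    {"weekday_closed": "Fri", "last_status": "WORKSFORME"},
    {"weekday_closed": "Fri", "last_status": "FIXED"},
    {"weekday_closed": "Mon", "last_status": "WORKSFORME"},
    {"weekday_closed": "Tue", "last_status": "FIXED"},
]
labels = [1, 0, 1, 0]

# Rank factors as a root-node split would: highest information gain first.
ranking = sorted(records[0],
                 key=lambda f: information_gain(records, labels, f),
                 reverse=True)
```

On this toy data the last status separates the classes perfectly (gain 1.0), while the weekday only partially does (gain 0.5), so `last_status` ranks first, mirroring the kind of conclusion top node analysis produces.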
42.
Adil Fahad, Zahir Tari, Ibrahim Khalil, Ibrahim Habib, Hussein Alnuweiri. Computer Networks, 2013, 57(9): 2040-2057
There is significant interest in the network management and industrial security communities in identifying the “best” and most relevant features of network traffic in order to properly characterize user behaviour and predict future traffic. Eliminating redundant features is an important Machine Learning (ML) task because it helps to identify the features that improve classification accuracy and reduces the computational complexity of constructing the classifier. In practice, feature selection (FS) techniques can be used both as a preprocessing step to eliminate irrelevant features and as a knowledge discovery tool to reveal the “best” features in many soft computing applications. In this paper, we investigate the advantages and disadvantages of such FS techniques using newly proposed metrics (namely goodness, stability and similarity), and we continue our efforts toward an integrated FS technique built on the key strengths of existing FS techniques. We propose a novel way to identify the “best” features efficiently and accurately: first combine the results of several well-known FS techniques to find consistent features, then use the proposed concept of support to select the smallest feature set that optimally covers the data. An empirical study over ten high-dimensional network traffic data sets demonstrates a significant gain in accuracy and improved run-time performance of a classifier compared to the individual results produced by well-known FS techniques.
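The combination step can be pictured as a simple consensus vote: each individual FS technique returns a feature subset, the support of a feature is the fraction of techniques that kept it, and only well-supported features survive. A minimal sketch under that reading (the selector outputs, feature names and threshold are illustrative, not the paper's actual metrics):

```python
def consensus_features(selections, min_support=0.5):
    """Keep features whose support (fraction of FS techniques that
    selected them) meets the threshold, ranked by support."""
    counts = {}
    for selected in selections:
        for feat in selected:
            counts[feat] = counts.get(feat, 0) + 1
    n = len(selections)
    support = {f: c / n for f, c in counts.items()}
    kept = [f for f, s in support.items() if s >= min_support]
    # Sort by descending support, breaking ties alphabetically.
    return sorted(kept, key=lambda f: (-support[f], f)), support

# Feature subsets chosen by three hypothetical FS techniques on flow data.
selections = [
    {"pkt_size", "duration", "src_port"},
    {"pkt_size", "duration", "proto"},
    {"pkt_size", "flags"},
]
kept, support = consensus_features(selections, min_support=2 / 3)
```

Here only `pkt_size` (kept by all three techniques) and `duration` (kept by two) meet the two-thirds support threshold, so the consensus set is small and consistent, which is the intuition behind the paper's combined approach.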
43.
44.
Khaled M. Alalayah, Fatma S. Alrayes, Jaber S. Alzahrani, Khadija M. Alaidarous, Ibrahim M. Alwayle, Heba Mohsen, Ibrahim Abdulrab Ahmed, Mesfer Al Duhayyim. Computer Systems Science and Engineering, 2023, 46(3): 3121-3139
With the increasing advancement of smart industries, cybersecurity has become a vital growth factor in the success of industrial transformation. The Industrial Internet of Things (IIoT), or Industry 4.0, has revolutionized the concepts of manufacturing and production altogether. In Industry 4.0, powerful Intrusion Detection Systems (IDS) play a significant role in ensuring network security. Although various intrusion detection techniques have been developed, protecting the intricate data of such networks remains challenging, because conventional Machine Learning (ML) approaches are inadequate for the demands of dynamic IIoT networks, whereas Deep Learning (DL) can be employed to identify unknown intrusions. Therefore, the current study proposes a Hunger Games Search Optimization with Deep Learning-Driven Intrusion Detection (HGSODL-ID) model for the IIoT environment. The presented HGSODL-ID model applies linear normalization to transform the input data into a useful format. The HGSO algorithm is employed for Feature Selection (HGSO-FS) to reduce the curse of dimensionality. Moreover, Sparrow Search Optimization (SSO) is utilized with a Graph Convolutional Network (GCN) to classify and identify intrusions in the network. Finally, the SSO technique is exploited to fine-tune the hyperparameters of the GCN model. The proposed HGSODL-ID model was experimentally validated on a benchmark dataset, and the results confirmed the superiority of the proposed HGSODL-ID method over recent approaches.
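The linear normalization step mentioned above is commonly a min-max rescaling of each feature into [0, 1] before feature selection and classification; the sketch below assumes that reading (the column names and records are hypothetical):

```python
def min_max_normalize(rows):
    """Rescale each column of a numeric table to the [0, 1] range."""
    cols = list(zip(*rows))
    scaled_cols = []
    for col in cols:
        lo, hi = min(col), max(col)
        span = hi - lo or 1.0  # guard against constant columns
        scaled_cols.append([(v - lo) / span for v in col])
    return [list(r) for r in zip(*scaled_cols)]

# Toy IIoT flow records: [packet_count, mean_size, duration_s].
rows = [[10, 600.0, 0.50],
        [40, 200.0, 2.00],
        [25, 1000.0, 1.25]]
normalized = min_max_normalize(rows)
```

After normalization every column spans exactly [0, 1], so no single feature dominates distance- or gradient-based learning simply because of its units.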
45.
Skin lesions have become a critical illness worldwide, and earlier identification of skin lesions from dermoscopic images can raise the survival rate. Classifying skin lesions from those dermoscopic images is a tedious task, and classification accuracy is improved by the use of deep learning models. Recently, convolutional neural networks (CNN) have become established in this domain; their techniques are well established for feature extraction, leading to enhanced classification. With this motivation, this study focuses on the design of artificial intelligence (AI) based solutions, particularly deep learning (DL) algorithms, to distinguish malignant skin lesions from benign lesions in dermoscopic images. This study presents an automated skin lesion detection and classification technique that combines an optimized stacked sparse autoencoder (OSSAE) based feature extractor with a backpropagation neural network (BPNN), named the OSSAE-BPNN technique. The proposed technique contains a multi-level thresholding based segmentation technique for detecting the affected lesion region. In addition, the OSSAE based feature extractor and BPNN based classifier are employed for skin lesion diagnosis, and the parameter tuning of the SSAE model is carried out with the sea gull optimization (SGO) algorithm. To showcase the enhanced outcomes of the OSSAE-BPNN model, a comprehensive experimental analysis was performed on a benchmark dataset. The experimental findings demonstrate that the OSSAE-BPNN approach outperforms other current strategies on several assessment metrics.
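The multi-level thresholding segmentation step can be illustrated with a two-threshold Otsu-style search that maximizes between-class variance over a grayscale histogram; this is a generic sketch of the idea on an 8-level toy image, not the paper's exact formulation:

```python
def two_level_otsu(pixels, levels=8):
    """Exhaustive two-threshold Otsu: pick (t1, t2) maximizing the
    between-class variance of the three resulting pixel classes."""
    def class_stats(vals):
        weight = len(vals) / len(pixels)
        mean = sum(vals) / len(vals)
        return weight, mean

    mean_all = sum(pixels) / len(pixels)
    best, best_var = None, -1.0
    for t1 in range(1, levels - 1):
        for t2 in range(t1 + 1, levels):
            classes = ([p for p in pixels if p < t1],
                       [p for p in pixels if t1 <= p < t2],
                       [p for p in pixels if p >= t2])
            if any(len(c) == 0 for c in classes):
                continue
            var = 0.0
            for c in classes:
                w, mu = class_stats(c)
                var += w * (mu - mean_all) ** 2
            if var > best_var:
                best, best_var = (t1, t2), var
    return best

# Toy 8-level image: background (0-1), healthy skin (3-4), lesion (6-7).
pixels = [0, 0, 1, 1, 3, 3, 4, 4, 6, 7, 7, 6]
t1, t2 = two_level_otsu(pixels)
```

The search recovers thresholds that cleanly separate the three intensity clusters, so pixels at or above `t2` form the candidate lesion region that the feature extractor would then process.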
46.
47.
Timonacic acid (TCA) was successfully labeled with 99mTc. The influence exerted on the reaction by the substrate and reducing agent concentrations, the pH of the reaction mixture, and the reaction time was examined, and the in vitro stability of 99mTc-TCA was evaluated. The maximum labeling yield was 98.5 ± 0.6%. The complex was stable throughout the working period (6 h). A study of in vivo biodistribution in mice showed that the maximum uptake of 99mTc-TCA in the liver was 22.3 ± 0.3% of the injected activity per gram of tissue or organ (% ID/g) at 30 min post injection. Clearance from the mice appeared to proceed via the circulation, mainly through the kidneys and urine (approximately 56% of the injected dose at 1 h after injection). The liver uptake of 99mTc-TCA is higher than that of 99mTc-UDCA (ursodeoxycholic acid); therefore, 99mTc-TCA shows more promise for liver SPECT.
48.
In this paper, the effect of mass diffusion in a thermoelastic nanoscale beam is studied in the context of the Lord-Shulman theory. The analytical solution in the Laplace domain is obtained for the lateral deflection, temperature, displacement, concentration, stress and chemical potential. Both ends of the nanoscale beam are simply supported. The basic equations are written in the form of a vector-matrix differential equation in the Laplace transform domain, which is then solved by an eigenvalue approach. The results obtained are presented graphically to display the effect of time and mass diffusion and to illustrate the physical meaning of the phenomena.
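The vector-matrix reduction mentioned in the abstract can be sketched generically: after Laplace transforming the governing equations, the field variables are collected into a state vector whose spatial evolution is linear, and the eigenvalue approach diagonalizes that system. A schematic form of this structure (the symbols are generic placeholders, not the paper's exact notation):

```latex
% State vector V(x,s) of transformed field quantities
% (deflection, temperature, concentration, ...)
\frac{dV(x,s)}{dx} = A(s)\, V(x,s)
% Eigenvalue approach: if A(s) X_i = \lambda_i X_i, the solution is
V(x,s) = \sum_i c_i \, X_i \, e^{\lambda_i x}
% with the constants c_i fixed by the simply supported boundary
% conditions, and time-domain fields recovered by Laplace inversion.
```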
49.
Ahmed Abdu Alattab, Mohammed Eid Ibrahim, Reyazur Rashid Irshad, Anwar Ali Yahya, Amin A. Al-Awady. Computers, Materials & Continua, 2023, 74(2): 2397-2412
This research proposes a machine learning approach using fuzzy logic to build an information retrieval system for the next crop rotation. In case-based reasoning systems, case representation is critical, and researchers have thoroughly investigated textual, attribute-value pair, and ontological representations. Because big databases result in slow case retrieval, this research suggests a fast case retrieval strategy based on an associated representation in which cases are interlinked whether they are similar or dissimilar. As soon as a new case is recorded, it is compared to prior data to find a relative match. The proposed method is evaluated on the number of cases and on retrieval accuracy, comparing the related case representation with conventional approaches. Hierarchical Long Short-Term Memory (HLSTM) is used to evaluate the efficiency and similarity of the models, and fuzzy rules are applied to predict the environmental condition and soil quality during a particular time of the year. The results show that the proposed approach allows rapid case retrieval with high accuracy.
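A minimal sketch of the retrieval idea: score a new case against stored cases by attribute-value similarity and return the best relative match, with a triangular fuzzy membership standing in for the soil/environment rules (all attribute names, cases and membership points below are hypothetical):

```python
def similarity(case_a, case_b, ranges):
    """Mean per-attribute similarity; numeric attributes are compared
    relative to their known value range."""
    total = 0.0
    for attr, span in ranges.items():
        total += 1.0 - abs(case_a[attr] - case_b[attr]) / span
    return total / len(ranges)

def retrieve(new_case, case_base, ranges):
    """Return the stored case most similar to the new one."""
    return max(case_base, key=lambda c: similarity(new_case, c, ranges))

def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 at a and c, peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical crop-rotation case base.
ranges = {"soil_ph": 14.0, "rainfall_mm": 500.0}
case_base = [
    {"soil_ph": 6.5, "rainfall_mm": 300.0, "crop": "maize"},
    {"soil_ph": 8.0, "rainfall_mm": 120.0, "crop": "barley"},
]
best = retrieve({"soil_ph": 6.8, "rainfall_mm": 280.0}, case_base, ranges)
good_ph = triangular(6.8, a=5.5, b=6.5, c=7.5)  # suitability of pH 6.8
```

The new case lands closest to the maize case, and the fuzzy membership grades how suitable its pH is rather than giving a hard yes/no, which is the role fuzzy rules play in the proposed system.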
50.
The variational method (VM) is employed to derive the co-state equations, the boundary (transversality) conditions, and the functional sensitivity derivatives. The converged solutions of the state equations, together with the steady-state solution of the co-state equations, are integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The application of the variational method to aerodynamic shape optimization is demonstrated on internal flow problems in the supersonic Mach number range of 1.5. Optimization results for flows with and without shock phenomena are presented. The study shows that, while maintaining the accuracy of the aerodynamic objective function and constraint within a reasonable range for engineering prediction purposes, the variational method provides a substantial gain in computational efficiency, i.e., computer time and memory, compared with finite difference computations.
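The structure behind such functional sensitivity derivatives can be stated schematically: with the state equations R(Q, α) = 0 as constraints, the co-state (adjoint) variable is chosen to cancel the dependence on the state perturbation, so one co-state solve yields the gradient with respect to the whole design function. A generic statement of that derivation (the notation is illustrative, not the paper's):

```latex
% Augmented objective with co-state \Lambda enforcing the state equations
I(Q,\alpha) = J(Q,\alpha) + \Lambda^{T} R(Q,\alpha)
% Choosing \Lambda so that the terms in \delta Q cancel gives the
% co-state equation
\left(\frac{\partial R}{\partial Q}\right)^{T} \Lambda
  = -\left(\frac{\partial J}{\partial Q}\right)^{T}
% leaving the sensitivity with respect to the design function \alpha
\frac{dJ}{d\alpha} = \frac{\partial J}{\partial \alpha}
  + \Lambda^{T} \frac{\partial R}{\partial \alpha}
```

This is why the abstract contrasts the method with finite differences: the finite difference approach needs one state solve per design variable, while the co-state formulation needs a single extra solve regardless of how many design parameters there are.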