Sort order: 3,527 results found (search time: 15 ms)
91.
This research proposes a machine learning approach using fuzzy logic to build an information retrieval system for next-crop-rotation planning. In case-based reasoning systems, case representation is critical, and researchers have therefore thoroughly investigated textual, attribute-value pair, and ontological representations. Because large databases make case retrieval slow, this research proposes a fast retrieval strategy based on an associative representation in which cases are interlinked as either similar or dissimilar. When a new case is recorded, it is compared against prior data to find a relative match. The proposed method is evaluated on the number of cases and on retrieval accuracy against conventional approaches. A Hierarchical Long Short-Term Memory (HLSTM) network is used to evaluate the efficiency and similarity of the models, and fuzzy rules are applied to predict the environmental conditions and soil quality at a particular time of year. The results show that the proposed approach allows rapid case retrieval with high accuracy.
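As a rough illustration of the associative retrieval idea described above, the sketch below links each stored case to similar/dissimilar neighbors and walks the similarity links instead of scanning the whole case base. It is a minimal sketch: the `Case` fields, the inverse-distance similarity, and the greedy link-following walk are illustrative assumptions, not the paper's actual representation or its HLSTM evaluation.

```python
from dataclasses import dataclass, field

@dataclass
class Case:
    """A stored crop-rotation case linked to related cases (hypothetical fields)."""
    features: tuple                                 # e.g. (soil_ph,)
    crop: str
    similar: list = field(default_factory=list)     # indices of similar cases
    dissimilar: list = field(default_factory=list)  # indices of dissimilar cases

def similarity(a, b):
    """Inverse-distance similarity between two feature tuples."""
    dist = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return 1.0 / (1.0 + dist)

def retrieve(cases, query, threshold=0.5):
    """Greedy associative retrieval: start at any case and follow 'similar'
    links while they improve the match, instead of scanning every case."""
    best, improved = 0, True
    while improved:
        improved = False
        for idx in cases[best].similar:
            if similarity(cases[idx].features, query) > similarity(cases[best].features, query):
                best, improved = idx, True
    return best if similarity(cases[best].features, query) >= threshold else None
```

A query then hops along similarity links toward the best-matching stored case, returning `None` when even the closest case falls below the match threshold.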
92.
The number of mobile devices accessing wireless networks is skyrocketing due to the rapid advancement of sensors and wireless communication technology, and mobile data traffic is anticipated to rise even further in the coming years. The development of a new cellular network paradigm is being driven by the Internet of Things, smart homes, and more sophisticated applications with greater data-rate and latency requirements. Resources are being used up quickly due to the steady growth of smartphone devices and multimedia apps. Computation offloading, whether to several distant clouds or to nearby mobile devices, has consistently improved the performance of mobile devices. Computation latency can also be decreased by offloading computing tasks to edge servers with a certain level of computing power. Device-to-device (D2D) collaboration can help process small-scale, time-sensitive tasks to further reduce task delays. However, task offloading performance is drastically reduced by the variation in performance capabilities across edge nodes. This paper addresses this problem and proposes a new method for D2D communication in which time delay is reduced by enabling edge nodes to exchange data samples. Simulation results show that the proposed algorithm performs better than the traditional algorithm.
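The delay trade-off that motivates choosing among heterogeneous edge nodes can be illustrated with a toy model in which total delay is transmission time plus computation time. The two-term model and the function names are illustrative assumptions, not the paper's algorithm.

```python
def offload_delay(task_bits, cpu_cycles, rate_bps, node_hz):
    """Total offloading delay = transmission delay + computation delay."""
    return task_bits / rate_bps + cpu_cycles / node_hz

def best_node(task_bits, cpu_cycles, nodes):
    """Pick the (rate_bps, cpu_hz) node that minimizes total task delay."""
    return min(range(len(nodes)),
               key=lambda i: offload_delay(task_bits, cpu_cycles, *nodes[i]))
```

For example, a fast link to a slow node can lose to a slower link with more compute, which is exactly the capability variation the paper targets.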
93.
Emerging technologies such as edge computing, the Internet of Things (IoT), 5G networks, big data, Artificial Intelligence (AI), and Unmanned Aerial Vehicles (UAVs) empower Industry 4.0 with a progressive production methodology that attends to the interaction between machines and human beings. In the literature, various authors have focused on resolving security problems in UAV communication to provide safety for vital applications. The current research article presents a Circle Search Optimization with Deep Learning Enabled Secure UAV Classification (CSODL-SUAVC) model for the Industry 4.0 environment. The suggested CSODL-SUAVC methodology is aimed at accomplishing two core objectives: secure communication via image steganography, and image classification. First, the proposed CSODL-SUAVC method involves Multi-Level Discrete Wavelet Transformation (ML-DWT), CSO-related Optimal Pixel Selection (CSO-OPS), and signcryption-based encryption. The model deploys the CSO-OPS technique to select the optimal pixel points in cover images. The secret images, encrypted by the signcryption technique, are embedded into the cover images. Besides, the image classification process includes three components: Super-Resolution using Convolutional Neural Network (SRCNN), the Adam optimizer, and a softmax classifier. The integration of the CSO-OPS algorithm and the Adam optimizer helps achieve maximum performance in UAV communication. The proposed CSODL-SUAVC model was experimentally validated using benchmark datasets, and the outcomes were evaluated under distinct aspects. The simulation outcomes established the superior performance of the CSODL-SUAVC model over recent approaches.
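The embedding step can be sketched with plain least-significant-bit insertion into chosen pixels. This is a simplifying assumption for illustration: the paper's pipeline selects the positions with CSO-OPS and encrypts the secret image via signcryption, whereas here the `positions` list and the raw-bit payload are given directly.

```python
def embed_bits(pixels, positions, bits):
    """Embed secret bits into the least-significant bit of selected pixels.
    (In the paper, the positions come from CSO-OPS; here they are given.)"""
    out = list(pixels)
    for pos, bit in zip(positions, bits):
        out[pos] = (out[pos] & ~1) | bit
    return out

def extract_bits(pixels, positions):
    """Recover the embedded bits from the same pixel positions."""
    return [pixels[i] & 1 for i in positions]
```

Each touched pixel changes by at most one gray level, which is what keeps the stego image visually close to the cover image.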
94.
Classification of electroencephalogram (EEG) signals for humans can be achieved via artificial intelligence (AI) techniques. In particular, EEG signals associated with epileptic seizures can be detected to distinguish epileptic from non-epileptic regions. From this perspective, an automated AI technique with a digital processing method can be used to improve these signals. This paper proposes two classifiers for seizure versus non-seizure EEG signals: long short-term memory (LSTM) and support vector machine (SVM). These classifiers are applied to a public dataset from the University of Bonn, which consists of two classes: seizure and non-seizure. In addition, a fast Walsh-Hadamard transform (FWHT) is implemented to analyze the EEG signals within the recurrence space of the brain; Hadamard coefficients of the EEG signals are obtained via the FWHT. The FWHT also contributes to an efficient separation of seizure EEG recordings from non-seizure recordings. A k-fold cross-validation technique is applied to validate the performance of the proposed classifiers. The LSTM classifier provides the best performance, with a testing accuracy of 99.00%. The training and testing loss rates for the LSTM are 0.0029 and 0.0602, respectively, while its weighted average precision, recall, and F1-score are all 99.00%. The accuracy, sensitivity, and specificity of the SVM classifier reached 91%, 93.52%, and 91.3%, respectively. The computational time consumed for training the LSTM and SVM is 2000 and 2500 s, respectively. The results show that the LSTM classifier outperforms the SVM in the classification of EEG signals, and both proposed classifiers provide high classification accuracy compared to previously published classifiers.
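The fast Walsh-Hadamard transform itself is standard and fits in a few lines. This butterfly implementation computes the (unnormalized) Hadamard coefficients of any power-of-two-length signal, which is the feature-extraction step applied here to EEG windows; the surrounding classifier code is not shown.

```python
def fwht(x):
    """Fast Walsh-Hadamard transform of a sequence whose length is a
    power of two. Butterfly recombination, O(n log n), unnormalized."""
    a = list(x)
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                # sum/difference butterfly on pairs h apart
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a
```

Because the transform is its own inverse up to a factor of n, applying `fwht` twice returns the input scaled by the length, a handy sanity check.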
95.
A brain tumor is a mass or growth of abnormal cells in the brain. In both children and adults, brain tumors are considered one of the leading causes of death. There are several types of brain tumors, including benign (non-cancerous) and malignant (cancerous) tumors. Diagnosing brain tumors as early as possible is essential, as this can improve the chances of successful treatment and survival. Considering this problem, we present a hybrid intelligent deep learning technique that uses several pre-trained models (ResNet-50, VGG16, VGG19, U-Net) and their integrations as a computer-aided detection and localization system for brain tumors. These pre-trained and integrated deep learning models have been applied to the publicly available dataset from The Cancer Genome Atlas, which covers 120 patients. The pre-trained models are used to classify images as tumor or no tumor, while the integrated models segment the tumor region. We evaluate their performance in terms of loss, accuracy, intersection over union, Jaccard distance, Dice coefficient, and Dice coefficient loss. Among the pre-trained models, U-Net achieves the highest performance, with 95% accuracy, while among the integrated models, U-Net with ResNet-50 outperforms all others and correctly classifies and segments the tumor region.
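Two of the segmentation metrics listed above, the Dice coefficient and intersection over union (IoU), can be computed directly from binary masks; the sketch below assumes the masks are flattened 0/1 lists.

```python
def dice(pred, truth):
    """Dice coefficient between two binary masks (flat 0/1 lists)."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0

def iou(pred, truth):
    """Intersection over union between two binary masks."""
    inter = sum(p * t for p, t in zip(pred, truth))
    union = sum(1 for p, t in zip(pred, truth) if p or t)
    return inter / union if union else 1.0
```

The two are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why papers often report both alongside the Jaccard distance (1 − IoU).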
96.
While scan-based compression is widely utilized to alleviate test time and data volume problems, the overall compression level is dictated not only by the chain-to-channel ratio but also by the ratio of encodable patterns. Aggressively increasing the number of scan chains in an effort to raise compression levels may reduce the ratio of encodable patterns, degrading the overall compression level. In this paper, we present various methods to improve the ratio of encodable patterns. These methods are b...
97.
98.
The liquid-liquid equilibrium of the polyethylene glycol dimethyl ether 2000 (PEGDME2000) + K2HPO4 + H2O system has been determined experimentally at T = (298.15, 303.15, 308.15, and 318.15) K. The liquid-solid and complete phase diagrams of this system were also obtained at T = (298.15 and 308.15) K. A nonlinear temperature-dependent equation was successfully used to correlate the experimental binodal data, and a temperature-dependent Setschenow-type equation was successfully used to correlate the tie-lines of the studied system. The effect of temperature on the binodal curves and tie-lines of the investigated aqueous two-phase system has also been studied. In addition, the free energies of cloud points for this system and for some previously studied systems containing PEGDME2000 were calculated, from which it was concluded that the increase in entropy is the driving force for the formation of aqueous two-phase systems. The calculated free energies of phase separation were also used to investigate the salting-out ability of salts having different anions. Finally, the complete phase diagram of the investigated system was compared with the corresponding phase diagrams of previously studied systems in which PEGDME2000 was used, in order to obtain information regarding the phase behavior of these PEGDME2000 + salt + water systems.
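A Setschenow-type tie-line correlation of the kind mentioned above commonly takes the following form in the aqueous two-phase system literature; the symbols here are generic assumptions, not values taken from this abstract:

```latex
\ln\!\left(\frac{C_p^{\mathrm{top}}}{C_p^{\mathrm{bot}}}\right)
  = \beta + k\,\bigl(C_s^{\mathrm{bot}} - C_s^{\mathrm{top}}\bigr)
```

where $C_p$ and $C_s$ are the polymer and salt concentrations in the top and bottom phases, $k$ plays the role of a salting-out coefficient, and $\beta$ is a fitting constant; making $\beta$ and $k$ functions of temperature gives the temperature-dependent form used for the tie-lines.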
99.
Network centric handover solutions for all IP wireless networks usually require modifications to network infrastructure which can stifle any potential rollout. This has led researchers to begin looking at alternative approaches. Endpoint centric handover solutions do not require network infrastructure modification, thereby alleviating a large barrier to deployment. Current endpoint centric solutions capable of meeting the delay requirements of Voice over Internet Protocol (VoIP) fail to consider the Quality of Service (QoS) that will be achieved after handoff. The main contribution of this paper is to demonstrate that QoS aware handover mechanisms which do not require network support are possible. This work proposes a Stream Control Transmission Protocol (SCTP) based handover solution for VoIP called Endpoint Centric Handover (ECHO). ECHO incorporates cross-layer metrics and the ITU-T E-Model for voice quality assessment to accurately estimate the QoS of candidate handover networks, thus facilitating a more intelligent handoff decision. An experimental testbed was developed to analyse the performance of the ECHO scheme. Results are presented showing both the accuracy of ECHO at estimating the QoS and that the addition of the QoS capabilities significantly improves the handover decisions that are made.
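The ITU-T E-Model that ECHO uses for voice-quality assessment reduces a set of impairments to a scalar R-factor, which maps to a Mean Opinion Score via the standard G.107 formula. The sketch below shows that mapping plus a heavily simplified R computation; folding all default terms into the 93.2 constant is an assumption for illustration, not ECHO's full cross-layer estimator.

```python
def r_factor(id_delay, ie_eff):
    """Heavily simplified E-Model: R = 93.2 - Id - Ie,eff, where Id is the
    delay impairment and Ie,eff the effective equipment impairment."""
    return 93.2 - id_delay - ie_eff

def r_to_mos(r):
    """ITU-T G.107 mapping from the R-factor to MOS (1.0 .. 4.5)."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6
```

A candidate network whose estimated loss and delay push R below roughly 70 would score a noticeably worse MOS, which is the signal a QoS-aware handover decision can act on.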
100.
The World Wide Web is a continuously growing giant, and within the next few years Web content will surely increase tremendously. Hence, there is a great need for algorithms that can accurately classify Web pages. Automatic Web page classification is significantly different from traditional text classification because of the additional information provided by the HTML structure. Recently, several techniques have arisen from combinations of artificial intelligence and statistical approaches. However, finding an optimal classification technique for Web pages is not a simple matter. This paper introduces a novel strategy for vertical Web page classification called Classification using Multi-layered Domain Ontology (CMDO). It employs several Web mining techniques and depends mainly on a proposed multi-layered domain ontology. To promote classification accuracy, CMDO includes a distiller that rejects pages related to other domains. CMDO also employs a novel classification technique called Graph Based Classification (GBC), which has pioneering features that other techniques lack, such as outlier rejection and pruning. Experimental results show that CMDO outperforms recent techniques, yielding better precision, recall, and classification accuracy.
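Precision, recall, and F1, the metrics reported above, follow directly from confusion counts; a minimal helper:

```python
def prf(tp, fp, fn):
    """Precision, recall, and F1 from true-positive, false-positive,
    and false-negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

For a classifier that labels 10 pages as in-domain, 8 of them correctly, while missing 2 genuine in-domain pages, all three metrics come out to 0.8.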
Copyright©北京勤云科技发展有限公司  京ICP备09084417号