Search results: 3631 matches found (search time 11 ms); showing entries 71–80.
71.
This research proposes a machine-learning approach that uses fuzzy logic to build an information retrieval system for planning the next crop rotation. In case-based reasoning systems, case representation is critical, and researchers have therefore thoroughly investigated textual, attribute-value pair, and ontological representations. Because large databases make case retrieval slow, this research proposes a fast retrieval strategy based on an associated representation, in which cases are interlinked with both their similar and dissimilar counterparts. As soon as a new case is recorded, it is compared with prior data to find a relative match. The proposed method is evaluated on the number of cases and on retrieval accuracy, comparing the associated case representation against conventional approaches. A Hierarchical Long Short-Term Memory (HLSTM) network is used to evaluate the efficiency and similarity of the models, and fuzzy rules are applied to predict the environmental conditions and soil quality during a particular time of the year. The results show that the proposed approach allows rapid case retrieval with high accuracy.
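The attribute-value retrieval step can be illustrated with a minimal sketch. All case attributes, values, and weights below are invented for illustration and are not taken from the paper; the similarity function is a generic weighted attribute match, not the authors' associated representation.

```python
# Toy case base for crop-rotation retrieval: attribute-value pairs.
# Field names and values are hypothetical, chosen only to illustrate
# attribute-value similarity matching.
CASES = [
    {"soil_ph": 6.5, "rainfall": 820, "prev_crop": "maize", "next": "soybean"},
    {"soil_ph": 5.9, "rainfall": 610, "prev_crop": "rice",  "next": "legume"},
    {"soil_ph": 7.1, "rainfall": 900, "prev_crop": "wheat", "next": "maize"},
]

def similarity(a, b):
    """Weighted attribute-value similarity in [0, 1] (illustrative weights)."""
    s_ph = 1 - min(abs(a["soil_ph"] - b["soil_ph"]) / 3.0, 1.0)
    s_rain = 1 - min(abs(a["rainfall"] - b["rainfall"]) / 500.0, 1.0)
    s_crop = 1.0 if a["prev_crop"] == b["prev_crop"] else 0.0
    return 0.4 * s_ph + 0.4 * s_rain + 0.2 * s_crop

def retrieve(query, cases=CASES):
    """Return the stored case most similar to the new query case."""
    return max(cases, key=lambda c: similarity(query, c))

best = retrieve({"soil_ph": 6.4, "rainfall": 800, "prev_crop": "maize"})
print(best["next"])  # the retrieved case's recorded rotation
```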
72.
Classification of electroencephalogram (EEG) signals for humans can be achieved via artificial intelligence (AI) techniques. In particular, the EEG signals associated with epileptic seizures can be detected to distinguish between epileptic and non-epileptic regions. From this perspective, an automated AI technique with a digital processing method can be used to improve these signals. This paper proposes two classifiers, long short-term memory (LSTM) and support vector machine (SVM), for the classification of seizure and non-seizure EEG signals. These classifiers are applied to a public dataset from the University of Bonn, which consists of two classes: seizure and non-seizure. In addition, a fast Walsh-Hadamard transform (FWHT) technique is implemented to analyze the EEG signals within the recurrence space of the brain; Hadamard coefficients of the EEG signals are obtained via the FWHT. The FWHT also helps to efficiently separate seizure EEG recordings from non-seizure recordings. A k-fold cross-validation technique is applied to validate the performance of the proposed classifiers. The LSTM classifier provides the best performance, with a testing accuracy of 99.00%. The training and testing loss rates for the LSTM are 0.0029 and 0.0602, respectively, while the weighted average precision, recall, and F1-score for the LSTM are all 99.00%. The SVM classifier reaches 91% accuracy, 93.52% sensitivity, and 91.3% specificity. The computational time consumed for training the LSTM and SVM is 2000 and 2500 s, respectively. The results show that the LSTM classifier performs better than the SVM in the classification of EEG signals, and both proposed classifiers provide high classification accuracy compared to previously published classifiers.
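The fast Walsh-Hadamard transform underlying the feature extraction step is a standard butterfly algorithm; a minimal sketch (not the authors' implementation) is:

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform (Hadamard order).
    Input length must be a power of 2; returns the coefficient vector."""
    a = np.array(x, dtype=float)
    n = len(a)
    assert n & (n - 1) == 0, "length must be a power of 2"
    h = 1
    while h < n:
        # Butterfly: combine pairs (j, j+h) within each block of size 2h.
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

# Hadamard coefficients of a toy 8-sample segment (stand-in for an EEG window)
coeffs = fwht([1, 0, 1, 0, 0, 1, 1, 0])
print(coeffs)  # first coefficient equals the sum of the samples
```

Applying `fwht` twice returns the input scaled by the length n, which is a quick self-check for an implementation.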
73.
A brain tumor is a mass or growth of abnormal cells in the brain. In both children and adults, brain tumors are considered one of the leading causes of death. There are several types of brain tumors, including benign (non-cancerous) and malignant (cancerous) tumors. Diagnosing brain tumors as early as possible is essential, as this can improve the chances of successful treatment and survival. Considering this problem, we bring forth a hybrid intelligent deep learning technique that uses several pre-trained models (ResNet-50, VGG16, VGG19, U-Net) and their integration for computer-aided detection and localization of brain tumors. These pre-trained and integrated deep learning models have been used on the publicly available dataset from The Cancer Genome Atlas, which consists of 120 patients. The pre-trained models are used to classify tumor versus no-tumor images, while the integrated models are applied to segment the tumor region correctly. We evaluated their performance in terms of loss, accuracy, intersection over union, Jaccard distance, Dice coefficient, and Dice coefficient loss. Among the pre-trained models, U-Net achieves the highest performance, with 95% accuracy. Among the integrated pre-trained models, U-Net with ResNet-50 outperforms all others, correctly classifying and segmenting the tumor region.
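The segmentation metrics named above (intersection over union and the Dice coefficient) have simple closed forms for binary masks; a sketch, unrelated to the paper's specific evaluation code:

```python
import numpy as np

def iou(pred, target):
    """Intersection over union for binary masks: |A∩B| / |A∪B|."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

def dice(pred, target):
    """Dice coefficient: 2|A∩B| / (|A| + |B|); Dice loss is 1 - dice."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    return 2 * inter / total if total else 1.0

# Toy 2x3 predicted and ground-truth masks
p = [[1, 1, 0], [0, 1, 0]]
t = [[1, 0, 0], [0, 1, 1]]
print(iou(p, t), dice(p, t))
```

Note that Dice is always at least as large as IoU for the same pair of masks, which is a useful sanity check when reporting both.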
74.
While scan-based compression is widely utilized in order to alleviate the test time and data volume problems, the overall compression level is dictated not only by the chain-to-channel ratio but also by the ratio of encodable patterns. Aggressively increasing the number of scan chains in an effort to raise the compression levels may reduce the ratio of encodable patterns, degrading the overall compression level. In this paper, we present various methods to improve the ratio of encodable patterns. These methods are b...
75.
The liquid-liquid equilibrium of the polyethylene glycol dimethyl ether 2000 (PEGDME2000) + K2HPO4 + H2O system has been determined experimentally at T = (298.15, 303.15, 308.15, and 318.15) K. The liquid-solid and complete phase diagrams of this system were also obtained at T = (298.15 and 308.15) K. A nonlinear temperature-dependent equation was successfully used to correlate the experimental binodal data, and a temperature-dependent Setschenow-type equation was successfully used to correlate the tie-lines of the studied system. The effect of temperature on the binodal curves and the tie-lines of the investigated aqueous two-phase system has also been studied. The free energies of cloud points for this system and some previously studied systems containing PEGDME2000 were calculated, from which it was concluded that the increase of entropy is the driving force for the formation of aqueous two-phase systems. Additionally, the calculated free energies for phase separation of the studied systems were used to investigate the salting-out ability of salts having different anions. Finally, the complete phase diagram of the investigated system was compared with the corresponding phase diagrams of previously studied PEGDME2000 + salt + water systems in order to obtain information regarding the phase behavior of these systems.
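A Setschenow-type tie-line correlation of the general form ln(Cp_top/Cp_bot) = k_v + k_s (Cs_bot - Cs_top), where Cp and Cs are the polymer and salt concentrations in each phase, can be fitted by ordinary least squares. The sketch below uses invented concentrations, not the paper's data, and a temperature-independent form of the equation:

```python
import numpy as np

# Illustrative tie-line data for a polymer + salt + water system
# (numbers are hypothetical, not from the paper):
cp_top = np.array([0.30, 0.35, 0.40])   # polymer mass fraction, top phase
cp_bot = np.array([0.05, 0.04, 0.03])   # polymer mass fraction, bottom phase
cs_top = np.array([0.04, 0.03, 0.02])   # salt mass fraction, top phase
cs_bot = np.array([0.15, 0.17, 0.20])   # salt mass fraction, bottom phase

# Setschenow-type relation: ln(cp_top/cp_bot) = k_v + k_s * (cs_bot - cs_top)
y = np.log(cp_top / cp_bot)
x = cs_bot - cs_top
A = np.column_stack([np.ones_like(x), x])           # design matrix [1, x]
(k_v, k_s), *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares fit
print(f"k_v = {k_v:.3f}, k_s (salting-out coefficient) = {k_s:.3f}")
```

A larger fitted k_s indicates a stronger salting-out ability, which is how such fits support comparisons across salts with different anions.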
76.
Network-centric handover solutions for all-IP wireless networks usually require modifications to network infrastructure, which can stifle any potential rollout. This has led researchers to begin looking at alternative approaches. Endpoint-centric handover solutions do not require network infrastructure modification, thereby alleviating a large barrier to deployment. However, current endpoint-centric solutions capable of meeting the delay requirements of Voice over Internet Protocol (VoIP) fail to consider the Quality of Service (QoS) that will be achieved after handover. The main contribution of this paper is to demonstrate that QoS-aware handover mechanisms which do not require network support are possible. This work proposes a Stream Control Transmission Protocol (SCTP) based handover solution for VoIP called Endpoint Centric Handover (ECHO). ECHO incorporates cross-layer metrics and the ITU-T E-Model for voice quality assessment to accurately estimate the QoS of candidate handover networks, thus facilitating a more intelligent handover decision. An experimental testbed was developed to analyse the performance of the ECHO scheme. Results are presented showing both the accuracy of ECHO at estimating the QoS and that the addition of the QoS capabilities significantly improves the handover decisions that are made.
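The ITU-T E-Model scores a call with a transmission rating R and maps it to an estimated mean opinion score (MOS). The mapping below follows the standard G.107 formula; the rating computation is a deliberate simplification (real Id and Ie-eff terms involve detailed delay and codec/loss models), and the network names and impairment values are invented:

```python
def r_to_mos(r):
    """ITU-T G.107 E-Model mapping from transmission rating R to MOS."""
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

def rating(id_delay, ie_eff):
    """Simplified R: default maximum 93.2 minus delay impairment (Id)
    and effective equipment/loss impairment (Ie-eff)."""
    return 93.2 - id_delay - ie_eff

# Compare hypothetical candidate handover networks by estimated MOS
for name, Id, Ie in [("WLAN-A", 2.0, 0.0), ("WLAN-B", 15.0, 25.0)]:
    print(name, round(r_to_mos(rating(Id, Ie)), 2))
```

Ranking candidate networks by this estimated MOS is the kind of QoS-aware decision an endpoint can make without any network-side support.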
77.
Nose tip detection in range images is a specific facial feature detection problem that is highly important for 3D face recognition. In this paper, we propose a nose tip detection method with the following three characteristics. First, it does not require training and does not rely on any particular model. Second, it can deal with both frontal and non-frontal poses. Finally, it is quite fast, requiring only seconds to process an image of 100-200 pixels (in both x and y dimensions) with a MATLAB implementation. A complexity analysis shows that most of the computations involved in the proposed algorithm are simple. Thus, if implemented in hardware (such as a GPU implementation), the proposed method should be able to work in real time. We tested the proposed method extensively on synthetic image data rendered from a 3D head model and on real data from the FRGC v2.0 data set. Experimental results show that the proposed method is robust to many scenarios encountered in common face recognition applications (e.g., surveillance). A high detection rate of 99.43% was obtained on the FRGC v2.0 data set. Furthermore, the proposed method can be used to coarsely estimate the roll, yaw, and pitch angles of the face pose.
78.
Clustering algorithms generally accept a parameter k from the user, which determines the number of clusters sought. However, in many application domains, like document categorization, social network clustering, and frequent pattern summarization, the proper value of k is difficult to guess. An alternative clustering formulation that does not require k is to impose a lower bound on the similarity between an object and its corresponding cluster representative. Such a formulation chooses exactly one representative for every cluster and minimizes the representative count. It has many additional benefits. For instance, it supports overlapping clusters in a natural way. Moreover, for every cluster, it selects a representative object, which can be effectively used in summarization or semi-supervised classification tasks. In this work, we propose an algorithm, SimClus, for clustering with a lower bound on similarity. It achieves an O(log n) approximation bound on the number of clusters, whereas for the best previous algorithm the bound can be as poor as O(n). Experiments on real and synthetic data sets show that our algorithm produces more than 40% fewer representative objects, yet offers the same or better clustering quality. We also propose a dynamic variant of the algorithm, which can be effectively used in an on-line setting.
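Clustering with a lower bound on similarity can be cast as a set-cover problem, for which a greedy pass gives the classic O(log n) approximation. The sketch below illustrates that formulation on toy 1-D data; it is a generic greedy set-cover baseline, not the SimClus algorithm itself:

```python
def select_representatives(objects, sim, tau):
    """Greedy set-cover selection: choose representatives so that every
    object has similarity >= tau to at least one chosen representative,
    while keeping the representative count small."""
    # Each object "covers" the objects within similarity tau of it.
    cover = {o: {p for p in objects if sim(o, p) >= tau} for o in objects}
    uncovered = set(objects)
    reps = []
    while uncovered:
        # Pick the object covering the most still-uncovered objects.
        best = max(objects, key=lambda o: len(cover[o] & uncovered))
        reps.append(best)
        uncovered -= cover[best]
    return reps

# 1-D toy data with a similarity based on absolute difference
pts = (0.0, 0.1, 0.2, 5.0, 5.1, 9.0)
sim = lambda a, b: 1.0 - min(abs(a - b), 1.0)   # similarity in [0, 1]
reps = select_representatives(pts, sim, tau=0.8)
print(sorted(reps))
```

Because cover sets may overlap, the same formulation naturally yields overlapping clusters: an object within tau of two representatives belongs to both.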
79.
This paper proposes a generalized least absolute deviation (GLAD) method for parameter estimation of autoregressive (AR) signals under non-Gaussian noise environments. The proposed GLAD method improves the estimation accuracy of the conventional least absolute deviation (LAD) method by minimizing a new cost function with parameter variables and noise error variables. Compared with second- and high-order statistical methods, the proposed GLAD method can robustly obtain an optimal AR parameter estimate without requiring the measurement noise to be Gaussian. Moreover, the proposed GLAD method can be implemented by a cooperative neural network (NN) which is shown to converge globally to the optimal AR parameter estimate within a finite time. Simulation results show that the proposed GLAD method obtains more accurate estimates than several well-known estimation methods in the presence of different noise distributions.
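Plain LAD (L1) estimation of AR coefficients, which GLAD generalizes, can be approximated with iteratively reweighted least squares. This sketch is a generic L1 solver on synthetic heavy-tailed data, not the paper's cooperative neural-network formulation:

```python
import numpy as np

def ar_lad(x, p, iters=50, eps=1e-8):
    """LAD (L1) estimate of AR(p) coefficients via iteratively
    reweighted least squares: weights 1/|residual| make the
    weighted L2 problem approximate the L1 objective."""
    # Regressor row i holds [x[t-1], ..., x[t-p]] for t = p + i.
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    y = x[p:]
    w = np.ones(len(y))
    a = np.zeros(p)
    for _ in range(iters):
        Aw = X * w[:, None]
        a = np.linalg.lstsq(Aw.T @ X, Aw.T @ y, rcond=None)[0]
        w = 1.0 / np.maximum(np.abs(y - X @ a), eps)  # clip tiny residuals
    return a

# Synthetic AR(2) signal driven by heavy-tailed (Laplace) noise
rng = np.random.default_rng(0)
true = np.array([0.6, -0.3])
x = np.zeros(500)
for t in range(2, 500):
    x[t] = true @ x[t - 2 : t][::-1] + rng.laplace(scale=0.5)
a_hat = ar_lad(x, p=2)
print(np.round(a_hat, 2))
```

The L1 objective down-weights the large innovations a Laplace (or other heavy-tailed) noise source produces, which is why LAD-style estimators stay accurate where least-squares estimators degrade.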
80.
In this paper, we present the computational tools and a hardware prototype for 3D face recognition. Full automation is provided through the use of advanced multistage alignment algorithms, resilience to facial expressions by employing a deformable model framework, and invariance to 3D capture devices through suitable preprocessing steps. In addition, scalability in both time and space is achieved by converting 3D facial scans into compact metadata. We present our results on the largest known, and now publicly available, Face Recognition Grand Challenge 3D facial database, consisting of several thousand scans. To the best of our knowledge, this is the highest performance reported on the FRGC v2 database for the 3D modality.
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号