Paid full text: 1707 articles
Free: 121 articles
Free (domestic): 16 articles

By subject:
Electrical engineering: 43
General: 8
Chemical industry: 473
Metalworking: 47
Machinery and instruments: 79
Building science: 50
Mining engineering: 3
Energy and power: 94
Light industry: 154
Water conservancy engineering: 38
Petroleum and natural gas: 21
Weapons industry: 2
Radio and electronics: 132
General industrial technology: 310
Metallurgical industry: 69
Nuclear technology: 17
Automation technology: 304

By year:
2024: 4
2023: 56
2022: 86
2021: 129
2020: 111
2019: 126
2018: 136
2017: 115
2016: 117
2015: 81
2014: 94
2013: 158
2012: 122
2011: 99
2010: 75
2009: 76
2008: 45
2007: 41
2006: 29
2005: 11
2004: 9
2003: 11
2002: 10
2001: 2
2000: 7
1999: 9
1998: 19
1997: 9
1996: 8
1995: 8
1994: 10
1993: 4
1992: 5
1991: 4
1990: 2
1987: 3
1986: 1
1984: 5
1983: 1
1982: 1
1981: 3
1980: 1
1968: 1

A total of 1844 query results were found (search time: 15 ms).
41.
With the rapidly increasing number of COVID-19 cases, the healthcare systems of several developed countries have reached the point of collapse. An important and critical step in fighting COVID-19 is effective screening of infected patients, so that positive patients can be treated and isolated. A chest radiology image-based diagnosis scheme may have several benefits over the traditional approach. The success of artificial intelligence (AI) based techniques in automated diagnosis in the healthcare sector, together with the rapid increase in COVID-19 cases, has created a demand for AI-based automated diagnosis and recognition systems. This study develops an Intelligent Firefly Algorithm Deep Transfer Learning Based COVID-19 Monitoring System (IFFA-DTLMS). The proposed IFFA-DTLMS model mainly aims at identifying and categorizing the occurrence of COVID-19 on chest radiographs. To attain this, the presented IFFA-DTLMS model first applies a densely connected network (DenseNet121) model to generate a collection of feature vectors. In addition, the firefly algorithm (FFA) is applied for hyperparameter optimization of the DenseNet121 model. Moreover, an autoencoder long short-term memory (AE-LSTM) model is used for the classification and identification of COVID-19. To verify the enhanced performance of the IFFA-DTLMS model, wide-ranging experiments were performed and the results are reviewed from distinct aspects. The experimental results show the superiority of the IFFA-DTLMS model over recent approaches.
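As an illustration of the feature-extraction stage described above, here is a minimal sketch assuming a Keras DenseNet121 backbone; the input size, pooling choice, and preprocessing are assumptions made for illustration and are not taken from the paper.

```python
# Minimal sketch of DenseNet121 feature extraction (assumed Keras API; input size,
# pooling choice, and preprocessing are illustrative, not the paper's settings).
import numpy as np
from tensorflow.keras.applications import DenseNet121
from tensorflow.keras.applications.densenet import preprocess_input

def extract_features(images):
    """images: array of shape (n, 224, 224, 3) with pixel values in [0, 255]."""
    backbone = DenseNet121(weights="imagenet", include_top=False, pooling="avg")
    x = preprocess_input(images.astype("float32"))
    return backbone.predict(x)  # (n, 1024) feature vectors for the downstream classifier

# Example: 4 random stand-in "radiographs" produce 4 feature vectors.
features = extract_features(np.random.rand(4, 224, 224, 3) * 255.0)
print(features.shape)  # (4, 1024)
```

These feature vectors would then be passed to the AE-LSTM classification stage, with the FFA tuning hyperparameters such as learning rate or layer sizes.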
42.
One of the most pressing concerns for the consumer market is the detection of adulteration in meat products, given their high value. A rapid and accurate method for identifying lard adulteration in meat products is highly necessary in order to develop a mechanism that consumers can trust and that can be used to make a definitive diagnosis. Fourier Transform Infrared Spectroscopy (FTIR) is used in this work to identify lard adulteration in beef, lamb, and chicken samples. A simplified extraction method was applied to obtain the lipids from pure and adulterated meat. Adulterated samples were obtained by mixing lard with chicken, lamb, and beef at different concentrations (10%–50% v/v). Principal component analysis (PCA) and partial least squares (PLS) regression were used to develop a calibration model over 800–3500 cm−1. Three-dimensional PCA, obtained by dividing the spectrum into three regions, was successfully used to classify lard adulteration in chicken, lamb, and beef samples. The corresponding FTIR peaks for lard were observed at 1159.6, 1743.4, 2853.1, and 2922.5 cm−1, which differentiate the chicken, lamb, and beef samples. These wavenumbers yield the highest coefficient of determination (R2 = 0.846) and the lowest root mean square errors of calibration (RMSEC) and prediction (RMSEP), corresponding to an accuracy of 84.6%. Even lard adulteration as low as 10% can be reliably detected using this methodology.
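The PCA/PLS calibration step can be sketched with scikit-learn; the synthetic spectra, lard fractions, and component counts below are placeholders, not the paper's data or settings.

```python
# Hedged sketch of the PCA/PLS calibration on FTIR spectra (synthetic data only).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 700))        # 60 spectra on an illustrative 800-3500 cm^-1 grid
y = rng.uniform(0.0, 0.5, size=60)    # lard fraction (0-50% v/v), placeholder values

# 3-component PCA scores, as used for class-separation plots
scores = PCA(n_components=3).fit_transform(X)

# PLS calibration model and its root mean square error of calibration (RMSEC)
pls = PLSRegression(n_components=5).fit(X, y)
rmsec = mean_squared_error(y, pls.predict(X)) ** 0.5
print(scores.shape, round(rmsec, 4))
```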
43.
Classification of human electroencephalogram (EEG) signals can be achieved via artificial intelligence (AI) techniques. In particular, EEG signals associated with epileptic seizures can be detected to distinguish between epileptic and non-epileptic regions. From this perspective, an automated AI technique combined with digital signal processing can be used to improve these signals. This paper proposes two classifiers, long short-term memory (LSTM) and support vector machine (SVM), for the classification of seizure and non-seizure EEG signals. These classifiers are applied to a public dataset, namely the University of Bonn dataset, which consists of two classes: seizure and non-seizure. In addition, a fast Walsh-Hadamard Transform (FWHT) technique is implemented to analyze the EEG signals within the recurrence space of the brain. Thus, Hadamard coefficients of the EEG signals are obtained via the FWHT. Moreover, the FWHT contributes to an efficient separation of seizure EEG recordings from non-seizure ones. A k-fold cross-validation technique is also applied to validate the performance of the proposed classifiers. The LSTM classifier provides the best performance, with a testing accuracy of 99.00%. The training and testing loss rates for the LSTM are 0.0029 and 0.0602, respectively, while the weighted average precision, recall, and F1-score for the LSTM are each 99.00%. The accuracy, sensitivity, and specificity of the SVM classifier reached 91%, 93.52%, and 91.3%, respectively. The computational time consumed for training the LSTM and SVM is 2000 and 2500 s, respectively. The results show that the LSTM classifier provides better performance than the SVM in the classification of EEG signals. Overall, the proposed classifiers provide high classification accuracy compared to previously published classifiers.
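A minimal fast Walsh-Hadamard transform of the kind used to obtain the Hadamard coefficients can be written directly; the in-place butterfly below assumes the EEG segment length is a power of two, and the example signal is synthetic.

```python
# Minimal FWHT: returns the (unnormalized) Hadamard coefficients of a signal
# whose length is a power of two.
import numpy as np

def fwht(signal):
    x = np.asarray(signal, dtype=float).copy()
    n = x.size
    assert n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b   # butterfly step
        h *= 2
    return x

# Example: Hadamard coefficients of a short synthetic EEG segment (8 samples).
print(fwht([1.0, 0.5, -0.2, 0.3, 0.0, -0.1, 0.4, 0.2]))
```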
44.
Magnetic resonance imaging (MRI) brain tumor segmentation is a crucial task for clinical treatment. However, it is challenging owing to variations in the type, size, and location of tumors. In addition, anatomical variation between individuals, intensity non-uniformity, and noise adversely affect brain tumor segmentation. To address these challenges, an automatic region-based brain tumor segmentation approach is presented in this paper which combines a fuzzy shape prior term and deep learning. We define a new energy function in which an Adaptively Regularized Kernel-Based Fuzzy C-Means (ARKFCM) clustering algorithm is utilized for inferring the shape of the tumor to be embedded into the level set method. In this way, some shortcomings of traditional level set methods, such as contour leakage and shrinkage, are eliminated. Moreover, a fully automated method is achieved by using U-Net to obtain the initial contour, reducing sensitivity to initial contour selection. The proposed method is validated on the BraTS 2017 benchmark dataset for brain tumor segmentation. Average values of Dice, Jaccard, sensitivity, and specificity are 0.93 ± 0.03, 0.86 ± 0.06, 0.95 ± 0.04, and 0.99 ± 0.003, respectively. Experimental results indicate that the proposed method outperforms the other state-of-the-art methods in brain tumor segmentation.
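The overlap metrics reported above (Dice and Jaccard) can be computed from binary segmentation masks as in this short sketch; the example masks are illustrative only.

```python
# Dice and Jaccard overlap for binary segmentation masks of equal shape.
import numpy as np

def dice_and_jaccard(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    dice = 2.0 * inter / (pred.sum() + truth.sum())
    jaccard = inter / np.logical_or(pred, truth).sum()
    return dice, jaccard

# Example on two small masks.
p = np.array([[1, 1, 0], [0, 1, 0]])
t = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_and_jaccard(p, t))   # (0.666..., 0.5)
```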
45.
46.
In this paper, adaptive robust control of uncertain systems with multiple time delays in the states and input is considered. It is assumed that the parameter uncertainties are time-varying and norm-bounded, with unknown bounds but known functional properties. To overcome the effect of input delay on closed-loop stability, a new Lyapunov-Krasovskii functional is introduced. It is shown that the proposed adaptive robust controller guarantees globally uniform exponential convergence of all system solutions to a ball, with any prescribed convergence rate. Moreover, if there is no disturbance in the system, asymptotic stability of the closed-loop system is established. The proposed design condition is formulated in terms of a linear matrix inequality (LMI), which can be easily solved with the LMI Toolbox in Matlab. Finally, an illustrative example is included to show the effectiveness of the results developed in this paper.
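For readers without the Matlab LMI Toolbox, the flavor of the LMI feasibility step can be reproduced with CVXPY; the toy problem below is a plain Lyapunov inequality, far simpler than the paper's delay-dependent condition, and the system matrix is an assumed example.

```python
# Toy LMI feasibility problem: find P > 0 with A^T P + P A < 0 (plain Lyapunov
# stability), solved with CVXPY as a stand-in for the Matlab LMI Toolbox.
import numpy as np
import cvxpy as cp

A = np.array([[-1.0, 0.5],
              [0.0, -2.0]])            # assumed stable system matrix
P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6                             # enforce strict inequalities numerically
constraints = [P >> eps * np.eye(2),
               A.T @ P + P @ A << -eps * np.eye(2)]
problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve()
print(problem.status, np.round(P.value, 3))
```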
47.
Multiple Sequence Alignment (MSA) of biological sequences is a fundamental problem in computational biology due to its critical significance in wide-ranging applications including haplotype reconstruction, sequence homology, phylogenetic analysis, and prediction of evolutionary origins. The MSA problem is considered NP-hard, and known heuristics for the problem do not scale well with increasing numbers of sequences. On the other hand, with the advent of a new breed of fast sequencing techniques it is now possible to generate thousands of sequences very quickly. For rapid sequence analysis, it is therefore desirable to develop fast MSA algorithms that scale well with an increase in the dataset size. In this paper, we present a novel domain-decomposition-based technique to solve the MSA problem on multiprocessing platforms. The domain-decomposition-based technique, in addition to yielding better quality, gives enormous advantages in terms of execution time and memory requirements. The proposed strategy allows one to decrease the time complexity of any known heuristic of O(N^x) complexity by a factor of O(1/p^x), where N is the number of sequences, x depends on the underlying heuristic approach, and p is the number of processing nodes. In particular, we propose a highly scalable algorithm, Sample-Align-D, for aligning biological sequences using the Muscle system as the underlying heuristic. The proposed algorithm has been implemented on a cluster of workstations using the MPI library. Experimental results for different problem sizes are analyzed in terms of quality of alignment, execution time, and speed-up.
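The domain-decomposition idea can be sketched with mpi4py as below; `align_subset` is a hypothetical stand-in for the underlying heuristic (e.g., Muscle), and the profile-merging stage of Sample-Align-D is omitted for brevity.

```python
# Sketch of domain decomposition for MSA: sequences are partitioned across p MPI
# ranks, each rank aligns only its subset, and rank 0 gathers the partial results.
from mpi4py import MPI

def align_subset(seqs):
    # Hypothetical stand-in for the underlying aligner on this subset.
    return sorted(seqs)

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

sequences = [f"SEQ{i:03d}" for i in range(1000)] if rank == 0 else None
chunks = [sequences[i::size] for i in range(size)] if rank == 0 else None
my_chunk = comm.scatter(chunks, root=0)

partial = align_subset(my_chunk)        # roughly O((N/p)^x) work per rank instead of O(N^x)
pieces = comm.gather(partial, root=0)
if rank == 0:
    print(f"{size} ranks aligned {sum(len(p) for p in pieces)} sequences")
```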
48.
Wireless multihop networks are currently attracting much attention as a new wireless broadband access technology due to their numerous benefits. This work proposes a power control scheme for the WiMAX multihop relay system. In contrast to existing power control and optimization approaches, the proposed method uses an adaptive channel quality measurement at a relay station to reduce interference to other mobile stations (MSs) or relay stations (RSs) within the same cell, and hence increase the number of hops per link and maximize spatial reuse. The proposed power control is applied to a new dynamic HARQ algorithm for adaptive channel quality enhancement. Simulation results indicate that the proposed approach achieves superior BER/PER performance in comparison to previous related works.
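As a generic illustration of channel-quality-driven power adjustment (not the paper's WiMAX scheme), the classic Foschini-Miljanic SINR-tracking update looks like the following; the target SINR, power cap, and measured values are assumed.

```python
# Generic closed-loop power control: scale transmit power toward an SINR target
# based on the measured channel quality, capped at p_max. Illustrative only.
def update_power(p, sinr_measured, sinr_target, p_max=1.0):
    """Return the next transmit power for one station."""
    return min(p_max, p * sinr_target / max(sinr_measured, 1e-9))

p = 0.2
for sinr in [3.0, 5.5, 7.8, 9.6]:       # measured SINR over successive frames (assumed)
    p = update_power(p, sinr, sinr_target=10.0)
    print(round(p, 3))
```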
49.
The effect of learning from 2D and 3D educational content on memory has been studied using electroencephalography (EEG) brain signals. The hypothesis is that 3D materials are better than 2D materials for learning and memory recall. To test the hypothesis, we propose a classification system that predicts true or false recall for short-term memory (STM) and long-term memory (LTM) after learning from either 2D or 3D educational content. For this purpose, EEG brain signals are recorded during learning and testing; the signals are then analysed using different types of features in various frequency bands. The features are then fed into a support vector machine (SVM)-based classifier. The experimental results indicate that learning and memory recall using 2D and 3D content do not differ significantly for either STM or LTM.
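A hedged sketch of the SVM classification stage on band-power features is shown below; the band edges, sampling rate, and synthetic epochs are illustrative assumptions rather than the study's actual settings.

```python
# Band-power features from EEG epochs fed to an SVM (synthetic data, assumed bands).
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

FS = 256  # sampling rate in Hz (assumed)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(epoch):
    freqs, psd = welch(epoch, fs=FS, nperseg=FS)
    return [psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in BANDS.values()]

rng = np.random.default_rng(1)
epochs = rng.normal(size=(80, FS * 2))   # 80 two-second single-channel epochs (synthetic)
labels = rng.integers(0, 2, size=80)     # true / false recall labels (synthetic)

X = np.array([band_power_features(e) for e in epochs])
print(cross_val_score(SVC(kernel="rbf"), X, labels, cv=5).mean())
```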
50.
Clustering algorithms generally accept a parameter k from the user, which determines the number of clusters sought. However, in many application domains, such as document categorization, social network clustering, and frequent pattern summarization, the proper value of k is difficult to guess. An alternative clustering formulation that does not require k is to impose a lower bound on the similarity between an object and its corresponding cluster representative. Such a formulation chooses exactly one representative for every cluster and minimizes the representative count. It has many additional benefits. For instance, it supports overlapping clusters in a natural way. Moreover, for every cluster, it selects a representative object, which can be effectively used in summarization or semi-supervised classification tasks. In this work, we propose an algorithm, SimClus, for clustering with a lower bound on similarity. It achieves an O(log n) approximation bound on the number of clusters, whereas for the best previous algorithm the bound can be as poor as O(n). Experiments on real and synthetic data sets show that our algorithm produces more than 40% fewer representative objects, yet offers the same or better clustering quality. We also propose a dynamic variant of the algorithm, which can be effectively used in an online setting.
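The greedy idea behind clustering with a similarity lower bound can be sketched as follows; the cosine similarity, threshold, and random data are illustrative assumptions, and the sketch omits SimClus's dynamic, online variant.

```python
# Greedy representative selection with a similarity lower bound: repeatedly pick
# the object that covers the most still-uncovered objects (similarity >= threshold),
# in the spirit of a set-cover greedy, which gives the O(log n) flavor of the bound.
import numpy as np

def greedy_representatives(X, threshold):
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    sim = X @ X.T                                   # pairwise cosine similarities
    covers = sim >= threshold                       # covers[i, j]: i can represent j
    uncovered, reps = set(range(len(X))), []
    while uncovered:
        best = max(range(len(X)),
                   key=lambda i: len(uncovered & set(np.flatnonzero(covers[i]))))
        reps.append(best)
        uncovered -= set(np.flatnonzero(covers[best]))
    return reps

X = np.random.default_rng(2).normal(size=(50, 10))
print(greedy_representatives(X, threshold=0.2))
```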