121.
The number of mobile devices accessing wireless networks is skyrocketing due to the rapid advancement of sensors and wireless communication technology. Mobile data traffic is anticipated to rise even further in the coming years. The Internet of Things, smart homes, and increasingly sophisticated applications with higher data-rate and latency requirements are driving the development of a new cellular network paradigm. Resources are consumed quickly due to the steady growth of smartphone devices and multimedia apps. Offloading computation to remote clouds or to nearby mobile devices has consistently improved the performance of mobile devices. Computation latency can also be decreased by offloading computing tasks to edge servers with a given level of computing power. Device-to-device (D2D) collaboration can help process small-scale, time-sensitive tasks to further reduce task delays. However, task offloading performance degrades drastically when the performance capabilities of edge nodes vary. This paper therefore addresses this problem and proposes a new method for D2D communication in which time delay is reduced by enabling edge nodes to exchange data samples. Simulation results show that the proposed algorithm outperforms the traditional algorithm.
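The core offloading trade-off described above can be sketched as a simple latency comparison. This is an illustrative toy model, not the paper's algorithm; all parameter names and values are assumptions.

```python
# Hypothetical latency-based offloading decision: compare executing a
# task locally against shipping it over a wireless uplink to an edge
# server. All parameters are illustrative assumptions.

def offload_latency(task_bits, cpu_cycles, f_local, f_edge, uplink_bps):
    """Return (best_target, latency_s) for local vs. edge execution."""
    t_local = cpu_cycles / f_local                      # pure compute
    t_edge = task_bits / uplink_bps + cpu_cycles / f_edge  # transmit + compute
    return ("edge", t_edge) if t_edge < t_local else ("local", t_local)

# A 1 Mb task needing 1e9 cycles: the 10x faster edge CPU wins
# despite the transmission delay.
print(offload_latency(1e6, 1e9, 1e9, 1e10, 1e8))
```

In practice the edge term would also include queueing delay and the heterogeneous capabilities of the edge nodes, which is precisely the variation the paper targets.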
122.
Emerging technologies such as edge computing, the Internet of Things (IoT), 5G networks, big data, Artificial Intelligence (AI), and Unmanned Aerial Vehicles (UAVs) empower Industry 4.0 with a progressive production methodology attentive to the interaction between machines and human beings. In the literature, various authors have focused on resolving security problems in UAV communication to provide safety for vital applications. This article presents a Circle Search Optimization with Deep Learning Enabled Secure UAV Classification (CSODL-SUAVC) model for the Industry 4.0 environment. The CSODL-SUAVC methodology pursues two core objectives: secure communication via image steganography, and image classification. The proposed method involves Multi-Level Discrete Wavelet Transformation (ML-DWT), CSO-related Optimal Pixel Selection (CSO-OPS), and signcryption-based encryption. The model deploys the CSO-OPS technique to select the optimal pixel points in cover images, into which the secret images, encrypted by the signcryption technique, are embedded. The image classification process comprises three components: Super-Resolution using Convolutional Neural Network (SRCNN), the Adam optimizer, and a softmax classifier. The integration of the CSO-OPS algorithm and the Adam optimizer helps achieve maximum performance in UAV communication. The CSODL-SUAVC model was experimentally validated on benchmark datasets and evaluated under distinct aspects. The simulation outcomes established the superior performance of the CSODL-SUAVC model over recent approaches.
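The embedding step can be illustrated with a much simpler spatial-domain scheme. The paper embeds in the wavelet domain at CSO-selected pixels with signcryption; the sketch below is only a minimal least-significant-bit (LSB) stand-in to show the hide/recover round trip.

```python
import numpy as np

# Minimal LSB steganography sketch (illustrative stand-in, NOT the
# paper's ML-DWT + CSO-OPS + signcryption pipeline): secret bits are
# written into the least significant bit of a cover array.

def embed_bits(cover, bits):
    flat = cover.flatten().astype(np.uint8)
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | np.asarray(bits, np.uint8)
    return flat.reshape(cover.shape)

def extract_bits(stego, n):
    return (stego.flatten()[:n] & 1).tolist()

cover = np.arange(16, dtype=np.uint8).reshape(4, 4)
secret = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_bits(cover, secret)
print(extract_bits(stego, 8))  # recovers the secret bits
```

Optimal pixel selection matters because embedding everywhere uniformly is easier to detect; CSO-OPS chooses pixels that minimize perceptual distortion.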
123.
Detection of adulteration in meat products is one of the most pressing concerns for the consumer market because of their high value. A rapid and accurate mechanism for identifying lard adulteration in meat products is highly necessary for building consumer trust and enabling definitive diagnosis. In this work, Fourier Transform Infrared Spectroscopy (FTIR) is used to identify lard adulteration in beef, lamb, and chicken samples. A simplified extraction method was applied to obtain the lipids from pure and adulterated meat. Adulterated samples were obtained by mixing lard with chicken, lamb, and beef at different concentrations (10%–50% v/v). Principal component analysis (PCA) and partial least squares (PLS) were used to develop a calibration model over 800–3500 cm−1. Three-dimensional PCA, applied by dividing the spectrum into three regions, successfully classified lard adulteration in chicken, lamb, and beef samples. The FTIR peaks corresponding to lard were observed at 1159.6, 1743.4, 2853.1, and 2922.5 cm−1, differentiating the chicken, lamb, and beef samples. These wavenumbers gave the highest coefficient of determination (R2 = 0.846) and the lowest root mean square errors of calibration (RMSEC) and prediction (RMSEP), with an accuracy of 84.6%. Even fat adulteration as low as 10% can be reliably detected with this methodology.
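The PCA step used for classifying the spectra can be sketched in a few lines via the singular value decomposition. The data here are synthetic random spectra standing in for real FTIR measurements; sample counts and dimensions are arbitrary assumptions.

```python
import numpy as np

# Minimal PCA via SVD, as one might apply to FTIR spectra
# (rows = samples, columns = wavenumbers). Synthetic data stands in
# for real measurements.

def pca_scores(X, n_components=3):
    Xc = X - X.mean(axis=0)            # mean-center each wavenumber
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T    # project onto top components

rng = np.random.default_rng(0)
spectra = rng.normal(size=(20, 100))   # 20 samples x 100 wavenumbers
scores = pca_scores(spectra, 3)
print(scores.shape)  # (20, 3)
```

Plotting the first three score columns per spectral region is essentially the "three-dimensional PCA" visualization the abstract describes.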
124.
Classification of human electroencephalogram (EEG) signals can be achieved via artificial intelligence (AI) techniques. In particular, EEG signals associated with epileptic seizures can be detected to distinguish between epileptic and non-epileptic regions. From this perspective, an automated AI technique combined with a digital processing method can be used to improve these signals. This paper proposes two classifiers, long short-term memory (LSTM) and support vector machine (SVM), for classifying seizure and non-seizure EEG signals. The classifiers are applied to a public dataset from the University of Bonn, which consists of two classes: seizure and non-seizure. In addition, a fast Walsh-Hadamard Transform (FWHT) is implemented to analyze the EEG signals within the recurrence space of the brain, yielding the Hadamard coefficients of the EEG signals. The FWHT also contributes to efficiently separating seizure EEG recordings from non-seizure recordings. A k-fold cross-validation technique is applied to validate the performance of the proposed classifiers. The LSTM classifier provides the best performance, with a testing accuracy of 99.00%. Its training and testing loss rates are 0.0029 and 0.0602, respectively, while its weighted average precision, recall, and F1-score are all 99.00%. The SVM classifier reached an accuracy of 91%, a sensitivity of 93.52%, and a specificity of 91.3%. The training times for the LSTM and SVM are 2000 and 2500 s, respectively. The results show that the LSTM classifier outperforms the SVM in EEG signal classification, and both proposed classifiers provide high classification accuracy compared to previously published classifiers.
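The FWHT feature-extraction step is standard and compact enough to sketch. The butterfly form below computes the (unnormalized) Walsh-Hadamard coefficients of a length-2^k signal in O(n log n); the toy input stands in for an EEG segment.

```python
import numpy as np

# Minimal in-place fast Walsh-Hadamard transform (FWHT) sketch:
# butterfly passes double the block size each stage, O(n log n) for
# length-2^k signals. The toy signal stands in for an EEG segment.

def fwht(x):
    x = np.asarray(x, dtype=float).copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), h * 2):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b   # Hadamard butterfly
        h *= 2
    return x

signal = [1, 0, 1, 0, 0, 1, 1, 0]
coeffs = fwht(signal)
print(coeffs)  # applying fwht twice yields len(x) * original signal
```

The coefficients, rather than the raw samples, are what would be fed to the LSTM and SVM classifiers.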
125.
A brain tumor is a mass or growth of abnormal cells in the brain. In both children and adults, brain tumors are considered one of the leading causes of death. There are several types, including benign (non-cancerous) and malignant (cancerous) tumors. Diagnosing brain tumors as early as possible is essential, as this can improve the chances of successful treatment and survival. To address this problem, we present a hybrid intelligent deep learning technique that uses several pre-trained models (ResNet50, VGG16, VGG19, U-Net) and their integrations for computer-aided detection and localization of brain tumors. These pre-trained and integrated deep learning models were applied to the publicly available dataset from The Cancer Genome Atlas, which comprises 120 patients. The pre-trained models classify images as tumor or no tumor, while the integrated models segment the tumor region. Performance is evaluated in terms of loss, accuracy, intersection over union, Jaccard distance, Dice coefficient, and Dice coefficient loss. Among the pre-trained models, U-Net achieves the highest performance, with 95% accuracy, while U-Net with ResNet-50 outperforms all other integrated pre-trained models in correctly classifying and segmenting the tumor region.
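The segmentation metrics listed above have short closed forms. The sketch below computes the Dice coefficient and intersection-over-union on toy binary masks (not the TCGA data).

```python
import numpy as np

# Sketch of two segmentation metrics named in the abstract, computed
# on toy binary masks: Dice = 2|A∩B| / (|A|+|B|), IoU = |A∩B| / |A∪B|.
# The epsilon guards against division by zero on empty masks.

def dice(pred, true, eps=1e-7):
    inter = np.logical_and(pred, true).sum()
    return (2 * inter + eps) / (pred.sum() + true.sum() + eps)

def iou(pred, true, eps=1e-7):
    inter = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    return (inter + eps) / (union + eps)

p = np.array([[1, 1], [0, 0]])
t = np.array([[1, 0], [0, 0]])
print(dice(p, t), iou(p, t))  # ~0.667 and 0.5
```

Dice coefficient loss, used to train the integrated models, is simply 1 minus the (soft) Dice coefficient, and Jaccard distance is 1 minus the IoU.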
126.
While scan-based compression is widely utilized to alleviate test time and data volume problems, the overall compression level is dictated not only by the chain-to-channel ratio but also by the ratio of encodable patterns. Aggressively increasing the number of scan chains in an effort to raise compression levels may reduce the ratio of encodable patterns, degrading the overall compression level. In this paper, we present various methods to improve the ratio of encodable patterns. These methods are b...
127.
In this paper, we present general formulae for the mask of (2b + 4)-point n-ary approximating as well as interpolating subdivision schemes for any integers $b \geqslant 0$ and $n \geqslant 2$. These formulae for the mask not only generalize and unify several well-known schemes but also provide the masks of higher-arity schemes. Moreover, the 4-point and 6-point a-ary schemes introduced by Lian [Appl Appl Math Int J 3(1):18–29, 2008] are special cases of our general formulae.
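A concrete special case of such mask formulae is the classical binary (n = 2) 4-point interpolatory scheme with mask (-1/16, 9/16, 9/16, -1/16), sketched below on a closed polygon of scalar values. This is a well-known scheme offered as an illustration, not the paper's general formula.

```python
# One refinement step of the classical 4-point interpolatory
# subdivision scheme (binary, b = 0 case): old points are kept and a
# new point is inserted between each neighboring pair using the mask
# (-1/16, 9/16, 9/16, -1/16). Indices wrap (closed polygon).

def refine(points):
    new = []
    m = len(points)
    for i in range(m):
        new.append(points[i])            # interpolating: keep old point
        p0 = points[i - 1]
        p1 = points[i]
        p2 = points[(i + 1) % m]
        p3 = points[(i + 2) % m]
        new.append(-p0 / 16 + 9 * p1 / 16 + 9 * p2 / 16 - p3 / 16)
    return new

pts = [0.0, 1.0, 1.0, 0.0]
print(refine(pts))  # 8 points; every other one is an original
```

Repeated refinement converges to a smooth (C^1) limit curve passing through the original points, which is what distinguishes interpolating from approximating schemes.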
128.
129.
High-mix-low-volume (HMLV) production is currently a worldwide manufacturing trend. It requires a high degree of customization in the manufacturing process to produce a wide range of products in low quantities, meeting customers' demand for more variety and choice. This business environment increases conversion time and decreases production efficiency due to frequent production changeovers. In this paper, a layered-encoding cascade optimization (LECO) approach is proposed to develop an HMLV product-mix optimizer that achieves low conversion time, high productivity, and high equipment efficiency. Specifically, genetic algorithm (GA) and particle swarm optimization (PSO) techniques are employed as optimizers for different decision layers in different LECO models, and each GA and PSO optimizer is studied and compared. A number of hypothetical and real data sets from a manufacturing plant are used to evaluate the performance of the proposed GA and PSO optimizers. The results indicate that, with proper selection of the GA and PSO optimizers, the LECO approach can generate high-quality product-mix plans that meet production demands in HMLV manufacturing environments.
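The PSO component can be sketched in its standard form on a toy 1-D objective. The real optimizer works on encoded product-mix decision layers, so the dimension, bounds, and coefficients below are illustrative assumptions only.

```python
import random

# Minimal particle swarm optimization (PSO) sketch on a toy 1-D
# objective. Each particle's velocity blends inertia, attraction to
# its personal best, and attraction to the swarm's global best.

def pso(f, lo, hi, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    random.seed(0)
    xs = [random.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                # per-particle best positions
    gbest = min(xs, key=f)       # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vs[i] = (w * vs[i] + c1 * r1 * (pbest[i] - xs[i])
                     + c2 * r2 * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))  # clamp to bounds
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
            if f(xs[i]) < f(gbest):
                gbest = xs[i]
    return gbest

best = pso(lambda x: (x - 3.0) ** 2, -10, 10)
print(best)  # converges near 3.0
```

In the cascade, one such optimizer per decision layer feeds its output to the next layer, which is what the layered encoding refers to.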
130.
Multimedia analysis and reuse of raw, un-edited audio-visual content, known as rushes, is gaining acceptance among a large number of research labs and companies. Several European-funded research projects consider multimedia indexing, annotation, search, and retrieval, but only the FP6 project RUSHES focuses on automatic semantic annotation, indexing, and retrieval of raw, un-edited audio-visual content. Professional content creators and providers as well as home users deal with this type of content, and therefore novel technologies for semantic search and retrieval are required. In this paper, we present a summary of the most relevant achievements of the RUSHES project, focusing on specific approaches for automatic annotation as well as the main features of the final RUSHES search engine.
Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号