51.
As smart industries advance, cybersecurity has become a vital factor in the success of industrial transformation. The Industrial Internet of Things (IIoT), or Industry 4.0, has revolutionized manufacturing and production, and powerful Intrusion Detection Systems (IDS) play a significant role in ensuring network security in this setting. Although various intrusion detection techniques have been developed, protecting the intricate data of such networks remains challenging, because conventional Machine Learning (ML) approaches cannot meet the demands of dynamic IIoT networks, whereas Deep Learning (DL) can be employed to identify previously unseen intrusions. Therefore, this study proposes a Hunger Games Search Optimization with Deep Learning-Driven Intrusion Detection (HGSODL-ID) model for the IIoT environment. The HGSODL-ID model applies linear normalization to transform the input data into a useful format, and the HGSO algorithm is employed for Feature Selection (HGSO-FS) to mitigate the curse of dimensionality. A Graph Convolutional Network (GCN) then classifies and identifies intrusions in the network, and Sparrow Search Optimization (SSO) is used to fine-tune the hyper-parameters of the GCN model. The HGSODL-ID model was experimentally validated on a benchmark dataset, and the results confirmed its superiority over recent approaches.
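As an illustration of the staged pipeline described above, the following Python sketch shows linear (min-max) normalization and a wrapper-style feature-selection loop. The HGSO metaheuristic and the SSO-tuned GCN are not reproduced here; a random mask search and a random-forest scorer stand in for them, so every name in the snippet is an assumption rather than the authors' implementation.

```python
# Hypothetical sketch of the normalization and feature-selection stages of an
# HGSODL-ID-style pipeline. A random mask search stands in for HGSO, and a
# random forest stands in for the GCN classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import MinMaxScaler

def linear_normalize(X):
    """Min-max (linear) normalization of the raw IIoT traffic features."""
    return MinMaxScaler().fit_transform(X)

def fitness(mask, X, y):
    """Wrapper fitness: cross-validated accuracy on the selected feature subset,
    lightly penalized by the fraction of features kept (dimensionality reduction)."""
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(RandomForestClassifier(n_estimators=50),
                          X[:, mask.astype(bool)], y, cv=3).mean()
    return acc - 0.01 * mask.mean()

def random_search_selection(X, y, n_iters=30, seed=0):
    """Stand-in for the HGSO search loop: keep the best random feature mask."""
    rng = np.random.default_rng(seed)
    best_mask, best_fit = None, -np.inf
    for _ in range(n_iters):
        mask = rng.integers(0, 2, X.shape[1])
        f = fitness(mask, X, y)
        if f > best_fit:
            best_mask, best_fit = mask, f
    return best_mask, best_fit
```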
52.
The Journal of Supercomputing - Power consumption is likely to remain a significant concern for exascale performance in the foreseeable future. In addition, graphics processing units (GPUs) have...
53.
54.
The unguided visual exploration of volumetric data can be both a challenging and a time-consuming undertaking. Identifying a set of favorable vantage points at which to start exploratory expeditions can greatly reduce this effort and can also ensure that no important structures are missed. Recent research efforts have focused on entropy-based viewpoint selection criteria that depend on scalar values describing the structures of interest. In contrast, we propose a viewpoint suggestion pipeline that is based on feature-clustering in high-dimensional space. We use gradient/normal variation as a metric to identify interesting local events and then cluster these via k-means to detect salient composite features. Next, we compute the maximum possible exposure of these composite features for different viewpoints and calculate a 2D entropy map, parameterized in longitude and latitude, to point out promising view orientations. Superimposed onto an interactive track-ball interface, this entropy map lets users quickly navigate to potentially interesting viewpoints, where visibility-based transfer functions can be employed to generate volume renderings that minimize occlusions. To give the user full freedom of exploration, the entropy map is updated on the fly whenever a view has been selected, pointing to new and promising but so far unseen view directions. Alternatively, our system can use a set-cover optimization algorithm to provide a minimal set of views needed to observe all features. The views so generated can then be saved into a list for further inspection or into a gallery for a summary presentation.
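A minimal sketch of the clustering-plus-entropy idea follows, assuming gradient vectors as the local feature, k-means for the composite features, and a crude normal-times-view-direction term in place of true occlusion-aware visibility. Function names and parameters are illustrative, not the paper's.

```python
# Illustrative sketch: cluster high-gradient voxels into composite features (k-means),
# then score viewpoints on a longitude/latitude grid by the Shannon entropy of each
# cluster's exposure. Visibility is crudely approximated by normal/view alignment.
import numpy as np
from sklearn.cluster import KMeans

def gradient_features(volume):
    """Gradient vectors as a simple local-variation feature per voxel."""
    gx, gy, gz = np.gradient(volume.astype(float))
    return np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)

def viewpoint_entropy_map(volume, n_clusters=5, n_lon=36, n_lat=18):
    feats = gradient_features(volume)
    mag = np.linalg.norm(feats, axis=1)
    interesting = mag > np.percentile(mag, 90)          # keep salient voxels only
    labels = KMeans(n_clusters, n_init=10).fit_predict(feats[interesting])
    normals = feats[interesting] / (mag[interesting, None] + 1e-9)

    lons = np.linspace(0, 2 * np.pi, n_lon, endpoint=False)
    lats = np.linspace(-np.pi / 2, np.pi / 2, n_lat)
    entropy = np.zeros((n_lat, n_lon))
    for i, lat in enumerate(lats):
        for j, lon in enumerate(lons):
            view = np.array([np.cos(lat) * np.cos(lon),
                             np.cos(lat) * np.sin(lon),
                             np.sin(lat)])
            vis = np.clip(normals @ view, 0, None)       # exposure proxy per voxel
            expo = np.array([vis[labels == k].sum() for k in range(n_clusters)])
            p = expo / (expo.sum() + 1e-9)
            entropy[i, j] = -(p * np.log2(p + 1e-12)).sum()
    return entropy   # high-entropy cells suggest promising view orientations
```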
55.
This paper presents a new algorithm for de-noising global positioning system (GPS) and inertial navigation system (INS) data and for estimating the INS error, based on a wavelet multi-resolution analysis (WMRA) genetic algorithm (GA). Its well-designed structure, very short training time, and high accuracy make it appropriate for practical, real-time implementations. Different techniques have been implemented to de-noise and estimate INS and GPS errors. Wavelet de-noising is one of th...
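For reference, a minimal wavelet multi-resolution de-noising sketch in Python (PyWavelets) with a fixed universal soft threshold; the GA that the paper uses to tune the decomposition is omitted, and the wavelet choice is an assumption.

```python
# Minimal WMRA de-noising sketch: decompose, soft-threshold the detail bands,
# reconstruct. Thresholds are fixed here rather than GA-optimized as in the paper.
import numpy as np
import pywt

def wmra_denoise(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Robust noise estimate from the finest detail band (median absolute deviation).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(signal)))       # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]
```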
56.
Wideband mesh and star-oriented networks have recently become a subject of growing interest, and the need to provide wideband multimedia access for a variety of applications has led to the inception of mesh networks. Classic access techniques such as FDMA and TDMA have been the norm for such networks. The maximum transmitter power required by CDMA is much lower than that of its TDMA and FDMA counterparts, which is an important asset for mobile operation. In this paper we introduce a code division multiple access/time division duplex (CDMA/TDD) technique for such networks. The CDMA approach is an almost plug-and-play technology for wireless access, making it amenable to implementation by the mesh network service station (SS). It also inherently allows mesh network service stations to use a combination of turbo coding and dynamic parallel orthogonal transmission to improve network efficiency. We briefly outline the new transmitter and receiver structures and then evaluate the efficiency, delay, and delay jitter. By analysis we show the advantages over classic counterparts with respect to the total achievable network efficiency, especially for a larger number of hops.
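A toy sketch of orthogonal CDMA spreading and despreading with Walsh-Hadamard codes, of the kind a mesh service station could apply; turbo coding, TDD framing, and the paper's transmitter/receiver structures are not modeled, and the code assignment is an assumption.

```python
# Toy orthogonal CDMA illustration: each user's bits are spread by a Walsh-Hadamard
# row, summed on air, and recovered by correlating with that user's code.
import numpy as np
from scipy.linalg import hadamard

def cdma_spread(bits_per_user, n_users=4):
    codes = hadamard(n_users)                    # rows are orthogonal spreading codes
    symbols = 2 * np.asarray(bits_per_user) - 1  # map {0,1} -> {-1,+1}, shape (users, nbits)
    # Spread each user's symbol stream by its code and sum all users over the channel.
    return (symbols[:, :, None] * codes[:, None, :]).sum(axis=0)

def cdma_despread(rx, user, n_users=4):
    codes = hadamard(n_users)
    corr = rx @ codes[user] / n_users            # correlate with the user's code
    return (corr > 0).astype(int)

bits = np.random.randint(0, 2, (4, 8))           # 4 users, 8 bits each
assert np.array_equal(cdma_despread(cdma_spread(bits), user=2), bits[2])
```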
57.
58.
In this paper, we develop an interactive analysis and visualization tool for probabilistic segmentation results in medical imaging. We provide a systematic approach to analyze, interact with, and highlight regions of segmentation uncertainty. We introduce a set of visual analysis widgets that integrate different approaches to analyzing multivariate probabilistic field data with direct volume rendering. We demonstrate the user's ability to identify suspicious regions (e.g. tumors) and to correct misclassification results using a novel uncertainty-based segmentation editing technique. We evaluate our system and demonstrate its usefulness in the context of static and time-varying medical imaging datasets.
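A small sketch of the voxel-wise uncertainty measure commonly used with probabilistic segmentations, Shannon entropy of the per-class probabilities; the rendering and editing widgets themselves are not shown, and the threshold below is an illustrative assumption.

```python
# Voxel-wise uncertainty of a probabilistic segmentation: entropy of the class
# probabilities. High-entropy voxels are candidates for highlighting and editing.
import numpy as np

def segmentation_entropy(probs):
    """probs: array of shape (n_classes, X, Y, Z) with per-voxel class probabilities."""
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log2(p)).sum(axis=0)          # 0 = certain, log2(n_classes) = maximal

def uncertain_mask(probs, fraction=0.05):
    """Binary mask of the most uncertain voxels (top `fraction` by entropy)."""
    h = segmentation_entropy(probs)
    return h >= np.quantile(h, 1.0 - fraction)
```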
59.
Context: How can the quality of software systems be predicted before deployment? In attempting to answer this question, prediction models are advocated in several studies. The performance of such models drops dramatically, with very low accuracy, when they are used in new software development environments or in new circumstances. Objective: The main objective of this work is to circumvent the model generalizability problem. We propose a new approach that substitutes traditional ways of building prediction models, which rely on historical data and machine learning techniques. Method: In this paper, the existing models are decision trees built to predict module fault-proneness within the NASA Critical Mission Software. A genetic algorithm is developed to combine and adapt expertise extracted from the existing models in order to derive a "composite" model that performs accurately in a given software development context. Experimental evaluation of the approach is carried out in three different software development circumstances. Results: The results show that the derived prediction models work accurately not only for a particular state of a software organization but also for evolving and modified ones. Conclusion: Our approach suits the nature of software data and is at the same time superior to model-selection and data-combination approaches. We conclude that learning from existing software models (i.e., software expertise) has two immediate advantages: circumventing the model generalizability problem and alleviating the lack of data in software engineering.
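A hypothetical sketch of the combination step: a simple genetic algorithm searches for weights over pre-trained fault-proneness trees, with fitness measured as weighted-vote accuracy on validation data from the new context. The population size, operators, and helper names are assumptions, not the paper's configuration.

```python
# Hypothetical GA that combines existing binary fault-proneness models by learning
# per-model weights for a weighted majority vote. `trees` is assumed to be a list of
# pre-fitted classifiers (e.g. sklearn decision trees) with 0/1 predictions.
import numpy as np

def weighted_vote(trees, weights, X):
    votes = np.array([t.predict(X) for t in trees])          # (n_trees, n_samples)
    return (weights @ votes >= 0.5 * weights.sum()).astype(int)

def ga_combine(trees, X_val, y_val, pop=20, gens=30, seed=0):
    rng = np.random.default_rng(seed)
    population = rng.random((pop, len(trees)))                # chromosomes = weight vectors
    for _ in range(gens):
        fit = np.array([(weighted_vote(trees, w, X_val) == y_val).mean()
                        for w in population])
        elite = population[np.argsort(fit)[-pop // 2:]]       # keep the fitter half
        # Crossover: blend two random elite parents, then apply Gaussian mutation.
        pa = elite[rng.integers(0, len(elite), pop - len(elite))]
        pb = elite[rng.integers(0, len(elite), pop - len(elite))]
        children = 0.5 * (pa + pb) + rng.normal(0, 0.1, pa.shape)
        population = np.vstack([elite, np.clip(children, 0, None)])
    fit = np.array([(weighted_vote(trees, w, X_val) == y_val).mean() for w in population])
    return population[np.argmax(fit)]
```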
60.
This paper presents a hybrid technique for the classification of magnetic resonance images (MRI). The proposed technique consists of three stages: feature extraction, dimensionality reduction, and classification. In the first stage, features are extracted from the MRI images using the discrete wavelet transform (DWT). In the second stage, the features are reduced, using principal component analysis (PCA), to the more essential ones. In the classification stage, two classifiers are developed: the first is based on a feed-forward back-propagation artificial neural network (FP-ANN) and the second on the k-nearest neighbor (k-NN) algorithm. The classifiers are used to classify subjects as normal or abnormal from their MRI images. Classification success rates of 97% and 98% were obtained with FP-ANN and k-NN, respectively. These results show that the proposed technique is robust and effective compared with other recent work.
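A minimal sketch of the described three-stage pipeline (2-D DWT features, PCA reduction, k-NN classification); the wavelet, the number of principal components, and k are illustrative assumptions rather than the paper's tuned values, and the FP-ANN branch is omitted.

```python
# Sketch of the DWT -> PCA -> k-NN branch of the hybrid MRI classifier.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def dwt_features(image, wavelet="haar", level=2):
    """Approximation coefficients of a 2-D multilevel DWT, flattened to a feature vector."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    return coeffs[0].ravel()                      # coeffs[0] is the approximation band

def build_classifier(images, labels, n_components=20, k=3):
    """Fit PCA reduction followed by k-NN on DWT features of labeled MRI slices."""
    X = np.array([dwt_features(img) for img in images])
    model = make_pipeline(PCA(n_components=n_components), KNeighborsClassifier(k))
    return model.fit(X, labels)
```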