11.
The success of Hidden Markov Models (HMMs) in speech recognition has motivated their adoption for handwriting recognition, especially online handwriting, which closely resembles the speech signal as a sequential process. Some languages, such as Arabic, Farsi, and Urdu, include a large number of delayed strokes that are written above or below many letters and are usually written later in time. These delayed strokes pose a modeling challenge for the conventional left-right HMM commonly used in Automatic Speech Recognition (ASR) systems. In this paper, we introduce a new approach for handling delayed strokes in Arabic online handwriting recognition using HMMs. We also show that several modeling approaches currently used in most state-of-the-art ASR systems, such as context-based tri-grapheme models, speaker adaptive training, and discriminative training, can provide similar performance improvements for handwriting recognition (HWR) systems. Finally, we show that a multi-pass decoder that uses the computationally less expensive models in the early passes can give an Arabic large-vocabulary HWR system a practical decoding time. We evaluated the performance of our proposed Arabic HWR system using two databases with small and large lexicons. On the small-lexicon data set, our system achieved results competitive with the best reported state-of-the-art Arabic HWR systems. On the large lexicon, our system achieved promising results (accuracy and time) for a vocabulary size of 64k words, with the possibility of adapting the models to specific writers for even better results.
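The left-right HMM decoding at the heart of such systems can be illustrated with a standard Viterbi sketch. The toy three-state model, transition/emission tables, and symbolic observations below are invented for illustration and are unrelated to the paper's actual grapheme models:

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely state path through a (left-right) HMM for an observation sequence."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    paths = {s: [s] for s in states}
    for t in range(1, len(obs)):
        V.append({})
        new_paths = {}
        for s in states:
            # best predecessor; missing (backward) transitions get a tiny probability
            prev, score = max(
                ((p, V[t - 1][p] + math.log(trans_p[p].get(s, 1e-12))) for p in states),
                key=lambda ps: ps[1])
            V[t][s] = score + math.log(emit_p[s][obs[t]])
            new_paths[s] = paths[prev] + [s]
        paths = new_paths
    return paths[max(states, key=lambda s: V[-1][s])]

# toy 3-state left-right model: each state may loop or advance, never move back
states = [0, 1, 2]
start_p = {0: 0.98, 1: 0.01, 2: 0.01}
trans_p = {0: {0: 0.6, 1: 0.4}, 1: {1: 0.6, 2: 0.4}, 2: {2: 1.0}}
emit_p = {0: {'a': 0.8, 'b': 0.1, 'c': 0.1},
          1: {'a': 0.1, 'b': 0.8, 'c': 0.1},
          2: {'a': 0.1, 'b': 0.1, 'c': 0.8}}
best_path = viterbi("aabbcc", states, start_p, trans_p, emit_p)
```

The left-right structure appears only in the transition table: each state can loop on itself or advance, never move backward, which is what makes delayed strokes (written out of temporal order) awkward for this topology.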
12.
The non-symmetric similarity relation-based rough set model (NS-RSM) is a mathematical tool for analyzing imprecise and uncertain information in incomplete information systems containing “?” values. NS-RSM relies on a non-symmetric similarity relation to group equivalent objects and generate knowledge granules that are then used to approximate the target set. However, NS-RSM yields a poor approximation space when addressing inconsistent data sets with many boundary objects, because objects in the same similarity class are not necessarily similar to each other and may belong to different target classes. To enhance NS-RSM, we introduce the maximal limited similarity-based rough set model (MLS-RSM), which describes the maximal collections of objects in similarity classes that are in limited tolerance to one another. This allows the approximation space to be computed accurately. Furthermore, approximation accuracy comparisons between NS-RSM and MLS-RSM demonstrate that MLS-RSM outperforms NS-RSM and approximates the target set more efficiently.
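A minimal sketch of rough-set lower and upper approximations over similarity classes follows. For brevity it uses a symmetric tolerance relation over “?” values rather than the non-symmetric relation of NS-RSM, and the tiny information table is invented:

```python
def similar(x, y):
    # tolerance: values agree on every attribute where both are known ('?' = missing)
    return all(a == '?' or b == '?' or a == b for a, b in zip(x, y))

def approximations(universe, target):
    """Lower/upper approximation of a target set of object indices."""
    lower, upper = set(), set()
    for i, x in enumerate(universe):
        cls = {j for j, y in enumerate(universe) if similar(x, y)}  # similarity class of x
        if cls <= target:
            lower.add(i)   # every similar object lies in the target
        if cls & target:
            upper.add(i)   # at least one similar object lies in the target
    return lower, upper

# toy incomplete information table: two attributes, '?' marks a missing value
universe = [('a', '1'), ('a', '?'), ('b', '1'), ('b', '2')]
consistent = approximations(universe, {0, 1})    # lower == upper: no boundary region
inconsistent = approximations(universe, {0, 2})  # boundary objects appear
```

The `inconsistent` case shows the problem the abstract describes: objects 0 and 1 fall in the upper but not the lower approximation, so the boundary region is non-empty and the approximation is coarse.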
13.
From the perspective of data security, which has always been an important aspect of quality of service, cloud computing faces new and challenging security threats. A data security model must therefore address the main security challenges of cloud computing. The proposed data security model provides a single default gateway as a platform, used to secure sensitive user data across multiple public and private cloud applications, including Salesforce, Chatter, Gmail, and Amazon Web Services, without affecting functionality or performance. The default gateway platform automatically encrypts sensitive data in real time before sending it to cloud storage, without breaking the cloud application or affecting user functionality and visibility. If an unauthorized person obtains data from cloud storage, he sees only encrypted data; when an authorized user successfully accesses his cloud, the data is decrypted in real time for use. The default gateway platform must provide a strong and fast encryption algorithm, file integrity, malware detection, a firewall, tokenization, and more. This paper focuses on authentication, a stronger and faster encryption algorithm, and file integrity.
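Of the gateway features listed, file integrity is the easiest to sketch with standard primitives. The following HMAC-SHA256 check (the key and payload are placeholders, and this is not the paper's actual algorithm) illustrates how a gateway could detect tampering of data retrieved from cloud storage:

```python
import hashlib
import hmac

def integrity_tag(data: bytes, key: bytes) -> str:
    # tag computed by the gateway before upload, stored alongside the ciphertext
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify(data: bytes, key: bytes, expected_tag: str) -> bool:
    # recomputed after download; constant-time compare avoids timing leaks
    return hmac.compare_digest(integrity_tag(data, key), expected_tag)

key = b'gateway-secret-key'        # placeholder key
record = b'sensitive user record'  # placeholder payload (in practice, the ciphertext)
tag = integrity_tag(record, key)
```

In a real deployment the tag would be computed over the ciphertext after encryption, so that any modification in cloud storage is detected before decryption is attempted.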
14.
Miniaturization and the energy consumption of computational systems remain major challenges. Optoelectronics-based synaptic and light sensing provides an exciting platform for neuromorphic processing and vision applications, offering several advantages. It is highly desirable to achieve single-element image sensors that can receive information and execute in-memory computing while retaining memory for long durations without frequent electrical or optical rehearsals. In this work, ultra-thin (<3 nm) doped indium oxide (In2O3) layers are engineered to demonstrate a monolithic two-terminal ultraviolet (UV) sensing and processing system with long optical state retention, operating at 50 mV. This endows several conductance states within the persistent-photocurrent window, which are harnessed to demonstrate learning capabilities and to significantly reduce the number of rehearsals. The atomically thin sheets are implemented as a focal plane array (FPA) for a UV-spectrum proof-of-concept vision system capable of the pattern recognition and memorization required for imaging and detection applications. This integrated light sensing and memory system is deployed to illustrate real-time, in-sensor memorization and recognition tasks. This study provides an important template for engineering miniaturized, low-operating-voltage neuromorphic platforms across the light spectrum based on application demand.
16.
With the rapid development of emerging 5G and beyond (B5G) networks, Unmanned Aerial Vehicles (UAVs) are increasingly important for improving the performance of dense cellular networks. As a conventional metric, coverage probability has been widely studied in communication systems owing to the increasing density of users and the complexity of heterogeneous environments. In recent years, stochastic geometry has attracted attention as a mathematical tool for modeling mobile network systems. In this paper, an analytical approach to the coverage probability of UAV-assisted cellular networks with imperfect beam alignment is proposed. All users are assumed to be distributed around base stations according to a Poisson Cluster Process (PCP), in particular a Thomas Cluster Process (TCP). Using this model, the impact of beam alignment errors on the coverage probability is investigated. First, the Probability Density Function (PDF) of the directional antenna gain between a user and its serving base station is obtained. Then, the association probability with each tier is derived, and a tractable expression for the coverage probability is obtained for both Line-of-Sight (LoS) and Non-Line-of-Sight (NLoS) links. Numerical results demonstrate that at low UAV altitudes, beam alignment errors significantly degrade coverage performance, whereas for a small cluster size, alignment errors do not necessarily affect it.
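The qualitative effect of beam misalignment can be reproduced with a small Monte Carlo sketch. The Gaussian main-lobe gain model, beamwidth, and error standard deviations below are illustrative assumptions, not the paper's parameters:

```python
import math
import random

random.seed(1)  # reproducible sketch

def antenna_gain(error, g_max=10.0, beamwidth=0.1):
    # Gaussian main-lobe model: gain decays with the pointing error (radians)
    return g_max * math.exp(-error ** 2 / (2 * beamwidth ** 2))

def mean_gain(error_std, samples=100_000):
    # alignment error drawn as a zero-mean Gaussian with the given std deviation
    return sum(antenna_gain(random.gauss(0.0, error_std))
               for _ in range(samples)) / samples

perfect = antenna_gain(0.0)      # no misalignment: full main-lobe gain
small_error = mean_gain(0.02)    # mild misalignment: slight average gain loss
large_error = mean_gain(0.20)    # severe misalignment: large average gain loss
```

Averaging the gain over the random pointing error is the Monte Carlo analogue of integrating the gain PDF that the paper derives; the sharp drop at large error spread mirrors the reported coverage degradation.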
17.
Classification of human electroencephalogram (EEG) signals can be achieved via artificial intelligence (AI) techniques. In particular, the EEG signals associated with epileptic seizures can be detected to distinguish between epileptic and non-epileptic regions. From this perspective, an automated AI technique combined with digital signal processing can be used to improve these signals. This paper proposes two classifiers, long short-term memory (LSTM) and support vector machine (SVM), for the classification of seizure and non-seizure EEG signals. The classifiers are applied to a public dataset from the University of Bonn, which consists of two classes: seizure and non-seizure. In addition, a fast Walsh-Hadamard transform (FWHT) is implemented to analyze the EEG signals in the recurrence space of the brain, yielding the Hadamard coefficients of the EEG signals. The FWHT also contributes to an efficient separation of seizure EEG recordings from non-seizure recordings. A k-fold cross-validation technique is applied to validate the performance of the proposed classifiers. The LSTM classifier provides the best performance, with a testing accuracy of 99.00%; its training and testing loss rates are 0.0029 and 0.0602, respectively, and its weighted average precision, recall, and F1-score are all 99.00%. The SVM classifier reaches an accuracy, sensitivity, and specificity of 91%, 93.52%, and 91.3%, respectively. The training times of the LSTM and SVM are 2000 and 2500 s, respectively. The results show that the LSTM classifier outperforms the SVM in EEG signal classification, and both proposed classifiers provide high classification accuracy compared to previously published classifiers.
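The fast Walsh-Hadamard transform itself is standard and easy to sketch. This textbook in-place butterfly implementation (not the paper's code) computes the Hadamard coefficients of a length-2^n signal; the tiny integer "epoch" stands in for a real EEG window:

```python
def fwht(signal):
    """In-place fast Walsh-Hadamard transform; length must be a power of two."""
    a = list(signal)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y   # butterfly: sum and difference
        h *= 2
    return a

epoch = [1, 0, 1, 0, 0, 1, 1, 0]  # stand-in for a tiny EEG window
coeffs = fwht(epoch)              # Hadamard (sequency) coefficients
# the transform is self-inverse up to a factor of 1/n
recovered = [c // len(epoch) for c in fwht(coeffs)]
```

Because the transform uses only additions and subtractions, it runs in O(n log n) with no multiplications, which is one reason it is attractive for preprocessing long EEG recordings before feeding features to LSTM or SVM classifiers.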
18.
Computer graphics is ostensibly based on projective geometry. The graphics pipeline—the sequence of functions applied to 3D geometric primitives to determine a 2D image—is described in the graphics literature as taking the primitives from Euclidean to projective space, and then back to Euclidean space. This is a weak foundation for computer graphics. An instructor is at a loss: one day entering the classroom and invoking the established and venerable theory of projective geometry while asserting that projective spaces are not separable, and then entering the classroom the following week to tell the students that the standard graphics pipeline performs clipping not in Euclidean, but in projective space—precisely the operation (deciding sidedness, which depends on separability) that was deemed nonsensical. But there is no need to present Blinn and Newell’s algorithm (Comput. Graph. 12, 245–251, 1978; Commun. ACM 17, 32–42, 1974)—the crucial clipping step in the graphics pipeline and, perhaps, the most original knowledge a student learns in a fourth-year computer graphics class—as a clever trick that just works. In 1991, Jorge Stolfi described oriented projective geometry. By declaring a vector and its negation distinct, Blinn and Newell were already unknowingly working in oriented projective space. This paper presents the graphics pipeline on this stronger foundation.
Sherif Ghali
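The sidedness test that motivates oriented projective space can be sketched directly in homogeneous clip coordinates. The plane choice (x = -w) and the tiny segment below are illustrative, not taken from the paper:

```python
def inside(p):
    # sidedness against the clip plane x = -w; depends on the sign (orientation) of w
    x, y, z, w = p
    return x + w >= 0

def clip_left(p, q):
    """Clip segment pq against x = -w in homogeneous clip space (Blinn-Newell style)."""
    fp, fq = p[0] + p[3], q[0] + q[3]        # signed distances to the plane
    if fp >= 0 and fq >= 0:
        return p, q                          # fully inside
    if fp < 0 and fq < 0:
        return None                          # fully outside
    t = fp / (fp - fq)                       # parameter of the intersection point
    m = tuple(a + t * (b - a) for a, b in zip(p, q))
    return (p, m) if fp >= 0 else (m, q)

# antipodal homogeneous points represent the same (unoriented) projective point,
# yet clip to different sides: exactly the distinction oriented projective
# geometry makes rigorous
p_in, p_anti = (1, 0, 0, 1), (-1, 0, 0, -1)
segment = clip_left((0, 0, 0, 1), (-3, 0, 0, 1))
```

In unoriented projective geometry `p_in` and `p_anti` are the same point, so `inside` would be ill-defined; keeping the sign of w (the orientation) is what makes the clip test meaningful.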
20.
In this paper, we fill a gap in the literature by studying the problem of Arabic handwritten digit recognition. The performance of different classification and feature extraction techniques on Arabic digits is reported to serve as a benchmark for future work on the problem. We evaluate well-known classifiers and feature extraction techniques, in addition to a novel feature extraction technique presented in this paper that achieves high accuracy and competes with state-of-the-art techniques. A total of 54 classifier/feature combinations are evaluated on Arabic digits in terms of accuracy and classification time. The results are analyzed, the problem of the digit ‘0’ is identified, and a method to solve it is proposed. Moreover, we propose a strategy for selecting and designing an optimal two-stage system from our study and, hence, suggest a fast two-stage classification system for Arabic digits that achieves accuracy as high as the best classifier/feature combination but with much less recognition time.
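The two-stage idea—run a fast classifier first and defer to a slower, more accurate one only on low-confidence inputs—can be sketched generically. The dummy classifiers and confidence threshold below are placeholders, not the paper's actual components:

```python
def two_stage(x, fast, slow, threshold=0.9):
    """Return the fast classifier's label when confident; otherwise defer to the slow one."""
    label, confidence = fast(x)
    if confidence >= threshold:
        return label
    return slow(x)

calls = []  # track which stage actually ran

def fast_classifier(x):
    # cheap first stage: returns (label, confidence)
    calls.append('fast')
    return ('7', 0.95) if x == 'easy sample' else ('1', 0.40)

def slow_classifier(x):
    # expensive second stage, invoked only on hard cases
    calls.append('slow')
    return '4'

easy = two_stage('easy sample', fast_classifier, slow_classifier)
hard = two_stage('hard sample', fast_classifier, slow_classifier)
```

The average recognition time approaches that of the fast stage when most inputs are easy, while hard inputs still receive the accuracy of the slow stage—the trade-off the abstract's two-stage system exploits.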