Full-text access type
Paid full text | 1,109 articles
Free | 74 articles
Free (domestic) | 13 articles
Subject classification
Electrical engineering | 15 articles
General | 3 articles
Chemical industry | 275 articles
Metalworking | 29 articles
Machinery and instruments | 37 articles
Building science | 43 articles
Mining engineering | 1 article
Energy and power | 54 articles
Light industry | 95 articles
Hydraulic engineering | 23 articles
Petroleum and natural gas | 5 articles
Radio electronics | 112 articles
General industrial technology | 236 articles
Metallurgical industry | 55 articles
Atomic energy technology | 42 articles
Automation technology | 171 articles
Publication year
2024 | 4 articles
2023 | 13 articles
2022 | 52 articles
2021 | 82 articles
2020 | 65 articles
2019 | 70 articles
2018 | 67 articles
2017 | 71 articles
2016 | 95 articles
2015 | 37 articles
2014 | 61 articles
2013 | 84 articles
2012 | 72 articles
2011 | 86 articles
2010 | 60 articles
2009 | 39 articles
2008 | 44 articles
2007 | 26 articles
2006 | 13 articles
2005 | 11 articles
2004 | 6 articles
2003 | 5 articles
2002 | 8 articles
2001 | 6 articles
2000 | 5 articles
1999 | 2 articles
1998 | 18 articles
1997 | 8 articles
1996 | 12 articles
1995 | 7 articles
1994 | 5 articles
1993 | 6 articles
1992 | 4 articles
1991 | 5 articles
1990 | 7 articles
1989 | 7 articles
1988 | 9 articles
1987 | 3 articles
1986 | 2 articles
1985 | 3 articles
1984 | 1 article
1983 | 1 article
1982 | 2 articles
1981 | 3 articles
1980 | 2 articles
1979 | 3 articles
1977 | 2 articles
1974 | 1 article
1972 | 1 article
A total of 1,196 results were retrieved.
81.
Three-dimensional (3D) shape reconstruction is a fundamental problem in machine vision applications. Shape From Focus (SFF) is a passive optical method for 3D shape recovery that uses the degree of focus as a cue to estimate 3D shape. In this approach, a single focus measure operator is usually applied to measure the focus quality of each pixel in the image sequence. However, a single focus measure is of limited use for accurately estimating depth maps of diverse types of real objects. To address this problem, we develop an Optimal Composite Depth (OCD) function through genetic programming (GP) for accurate depth estimation. The OCD function is constructed by optimally combining the primary information extracted using one or more focus measures. The genetically developed composite function is then used to compute the optimal depth map of objects. The performance of the developed nonlinear function is investigated using both synthetic and real-world image sequences. Experimental results demonstrate that the proposed estimator computes more accurate depth maps than existing SFF methods. Moreover, the heterogeneous function is found to be more effective than the homogeneous one.
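The abstract does not give the OCD function itself, but the shape-from-focus pipeline it builds on is standard. The sketch below assumes a sum-modified-Laplacian focus measure and a simple per-pixel argmax over the focus stack; function names and the window size are illustrative, not the paper's implementation.

```python
import numpy as np
from scipy import ndimage

def modified_laplacian(img):
    """Sum-modified-Laplacian focus measure (one common choice)."""
    lx = ndimage.convolve1d(img, [-1.0, 2.0, -1.0], axis=1, mode="nearest")
    ly = ndimage.convolve1d(img, [-1.0, 2.0, -1.0], axis=0, mode="nearest")
    return np.abs(lx) + np.abs(ly)

def depth_from_focus(stack, window=9):
    """stack: (n_frames, H, W) grayscale images taken at increasing focus positions.
    Returns an index map: for each pixel, the frame with the highest local focus."""
    focus = np.stack([
        ndimage.uniform_filter(modified_laplacian(frame.astype(float)), size=window)
        for frame in stack
    ])
    return np.argmax(focus, axis=0)  # depth expressed as a frame index
```

A composite estimator in the spirit of the paper would replace the single `modified_laplacian` measure with a GP-evolved combination of several focus measures before the argmax step.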
82.
With easy access to the huge volume of articles available on the Internet, plagiarism is becoming ever more widespread. Most recent approaches to this problem focus on improving the accuracy of the similarity-detection process. However, there are real applications in which plagiarized content must be detected without revealing any information. Moreover, in such web-based applications, running time, memory consumption, and communication and computational complexity should also be taken into account. In this paper, we propose a similar-document detection system based on the matrix Bloom filter, a new extension of the standard Bloom filter. Experimental results on a real dataset show that the system achieves 98% accuracy. We also compare our approach with a method recently proposed for the same purpose; the comparison shows that the Bloom filter-based approach performs much better with respect to the factors above.
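The matrix Bloom filter itself is not described in the abstract; the sketch below shows only the standard Bloom filter it extends, applied to word n-gram membership as a rough overlap test between documents. The filter size, hash count, and shingle length are illustrative assumptions.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions over a bit array of m bits."""
    def __init__(self, m=1 << 20, k=7):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

def shingle_overlap(bf, document, n=5):
    """Fraction of a document's word n-grams already present in the filter."""
    words = document.split()
    grams = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
    return sum(g in bf for g in grams) / max(len(grams), 1)
```

An overlap close to 1.0 for a query document whose shingles were previously added would suggest near-duplication, without the filter ever storing the original text.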
83.
Mahmood Fazlali, Mojtaba Sabeghi, Ali Zakerolhosseini, Koen Bertels. Journal of Systems Architecture, 2010, 56(11): 623-632
Recent research indicates that reconfigurable systems offer promising performance for accelerating multimedia and communication applications. Nonetheless, they are yet to be widely adopted. One reason is the lack of efficient operating-system support for these platforms. In this paper, we address the problem of runtime task scheduling, a core responsibility of the operating system. To do so, a new task-replacement parameter, called Time-Improvement, is proposed for compiler-assisted scheduling algorithms. In contrast with most related approaches, we validate our approach using a real application workload obtained from a multimedia test application used remotely by students. The proposed online task-scheduling algorithm outperforms previous algorithms and accelerates task execution by 4% up to 20%.
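The abstract names the Time-Improvement parameter but does not define it. The sketch below assumes one plausible scoring (software time minus hardware time, weighted by expected calls, minus reconfiguration cost) to show how such a parameter could drive task replacement on a reconfigurable fabric; the formula and names are illustrative, not the paper's definition.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    sw_time: float        # execution time in software (ms)
    hw_time: float        # execution time on the reconfigurable fabric (ms)
    reconfig_time: float  # time to (re)configure the fabric for this task (ms)

def time_improvement(task: Task, expected_calls: int) -> float:
    """Assumed scoring: total time saved by keeping the task in hardware."""
    return expected_calls * (task.sw_time - task.hw_time) - task.reconfig_time

def schedule(resident: list[Task], incoming: Task, calls: dict[str, int]) -> list[Task]:
    """Replace the lowest-scoring resident task if the incoming task scores higher."""
    if not resident:
        return [incoming]
    worst = min(resident, key=lambda t: time_improvement(t, calls.get(t.name, 1)))
    if time_improvement(incoming, calls.get(incoming.name, 1)) > time_improvement(worst, calls.get(worst.name, 1)):
        resident = [t for t in resident if t is not worst] + [incoming]
    return resident
```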
84.
The Journal of Supercomputing - Parallel implementation provides a solution to the problem of accelerating cellular automata (CA)-based secret sharing schemes and makes them appropriate for bulk...
85.
Wang Lu-di, Zhou Wei, Xing Ying, Liu Na, Movahedipour Mahmood, Zhou Xiao-guang. 浙江大学学报:C卷英文版 (Journal of Zhejiang University: Science C, English Edition), 2019, 20(3): 405-413
Frontiers of Information Technology & Electronic Engineering - Reconstruction of a 12-lead electrocardiogram (ECG) from a serial 3-lead ECG has been researched in the past to satisfy the need...
86.
Mahmood Safaei, Abul Samad Ismail, Hassan Chizari, Maha Driss, Wadii Boulila, Shahla Asadi, Mitra Safaei. Software, 2020, 50(4): 428-446
Wireless sensor networks (WSNs) consist of small sensors with limited computational and communication capabilities. Data readings in a WSN are not always reliable due to open environmental factors such as noise, weak received signal strength, and intrusion attacks. The process of detecting highly noisy data is called anomaly or outlier detection. The challenge of noise detection in WSNs stems from the sensors' limited computational and communication capabilities. The purpose of this research is to design a local, time-series-based data noise and anomaly detection approach for WSNs. The proposed local outlier detection algorithm (LODA) is a decentralized noise-detection algorithm that runs on each sensor node individually and has three important features: a reduction mechanism that eliminates ineffective features, determination of the data-histogram memory size to make effective use of the available memory, and a classification step for predicting noisy data. An adaptive Bayesian network is used as the classification algorithm for predicting and identifying outliers locally on each sensor node. Results of our approach are compared with four well-known algorithms on benchmark real-life datasets, demonstrating that LODA achieves higher accuracy (up to 89%) in predicting outliers in real sensor data.
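The abstract only outlines LODA's stages. The sketch below is a simplified per-node stand-in: sliding-window statistics replace the reduction/histogram steps and a Gaussian naive Bayes classifier (scikit-learn) replaces the adaptive Bayesian network; the synthetic readings, window size, and labels are illustrative.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def window_features(series, w=10):
    """Slide a window over a 1-D sensor series and keep a few summary features
    (mean, std, jump of the next reading) as a reduced representation."""
    feats = []
    for i in range(len(series) - w):
        win = series[i:i + w]
        feats.append([win.mean(), win.std(), abs(series[i + w] - win.mean())])
    return np.asarray(feats)

# Illustrative data: a temperature-like series with a few injected outliers.
rng = np.random.default_rng(0)
series = rng.normal(25.0, 0.5, 500)
noisy_idx = rng.choice(500, 25, replace=False)
series[noisy_idx] += rng.normal(0, 8, 25)

X = window_features(series)
y = np.isin(np.arange(10, len(series)), noisy_idx).astype(int)  # label of the reading after each window

clf = GaussianNB().fit(X, y)   # stand-in for the adaptive Bayesian network
flags = clf.predict(X)         # 1 = reading flagged as an outlier
```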
87.
Susan Sabra, Khalid Mahmood Malik, Muhammad Afzal, Vian Sabeeh, Ahmad Charaf Eddine. Expert Systems, 2020, 37(1): e12388
Clinical narratives such as progress summaries, lab reports, surgical reports, and other narrative texts contain key biomarkers of a patient's health. Evidence-based preventive medicine needs accurate semantic and sentiment analysis to extract and classify medical features as input to appropriate machine-learning classifiers. However, the traditional approach of using a single classifier is limited by the need for dimensionality-reduction techniques, statistical feature correlation, and a faster learning rate, and by the lack of consideration of semantic relations among features. Hence, extracting semantic and sentiment-based features from clinical text and combining multiple classifiers into an ensemble intelligent system overcomes many of these limitations and provides a more robust prediction outcome. Selecting an appropriate approach and handling its inter-parameter dependencies become key to the success of the ensemble method. This paper proposes a hybrid knowledge and ensemble learning framework for prediction of venous thromboembolism (VTE) diagnosis, consisting of the following components: a VTE ontology, a framework for semantic extraction and sentiment assessment of risk factors, and an ensemble classifier. A component-based analysis approach was adopted for evaluation on a data set of 250 clinical narratives, where the framework achieved the following results with and without semantic extraction and sentiment assessment of risk factors, respectively: a precision of 81.8% and 62.9%, a recall of 81.8% and 57.6%, an F-measure of 81.8% and 53.8%, and a receiver operating characteristic of 80.1% and 58.5% in identifying cases of VTE.
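The abstract does not name the base learners used in the ensemble. The sketch below shows a generic soft-voting ensemble over an extracted feature matrix, using scikit-learn; the three base classifiers, the synthetic features, and the labels are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Illustrative feature matrix: one row per clinical narrative, columns standing in
# for extracted risk-factor features (presence and sentiment of each mention);
# labels mark confirmed VTE cases.
rng = np.random.default_rng(1)
X = rng.random((250, 12))
y = (X[:, :3].sum(axis=1) + rng.normal(0, 0.3, 250) > 1.6).astype(int)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("nb", GaussianNB()),
    ],
    voting="soft",  # average predicted probabilities across the base learners
)
print(cross_val_score(ensemble, X, y, cv=5, scoring="f1").mean())
```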
88.
Barbara A. Kitchenham, Pearl Brereton, Mark Turner, Mahmood K. Niazi, Stephen Linkman, Rialette Pretorius, David Budgen. Empirical Software Engineering, 2010, 15(6): 618-653
Systematic literature reviews (SLRs) are a major tool for supporting evidence-based software engineering. Adapting the procedures involved in such a review to meet the needs of software engineering and its literature remains an ongoing process. As part of this process of refinement, we undertook two case studies which aimed 1) to compare the use of targeted manual searches with broad automated searches and 2) to compare different methods of reaching a consensus on quality. For Case 1, we compared a tertiary study of systematic literature reviews published between January 1, 2004 and June 30, 2007, which used a manual search of selected journals and conferences, with a replication of that study based on a broad automated search. We found that broad automated searches find more studies than restricted manual searches, but the studies found may be of poor quality. Researchers undertaking SLRs may be justified in using targeted manual searches if they intend to omit low-quality papers or if they are assessing trends in research methodologies. For Case 2, we analyzed the process used to evaluate the quality of SLRs. We conclude that if quality evaluation of primary studies is a critical component of a specific SLR, assessments should be based on three independent evaluators incorporating at least two rounds of discussion.
89.
Research using Internet surveys is an emerging field, yet the legitimacy of Internet studies, particularly those targeting sensitive topics, remains under-investigated. The current study builds on the existing literature by exploring the demographic differences between an Internet panel sample and a random-digit-dialing (RDD) telephone survey sample, as well as differences in responses regarding experiences of intimate partner violence perpetration and victimization, alcohol and substance use/abuse, PTSD symptomatology, and social support. Analyses indicated that, after controlling for demographic differences, there were few differences between the samples in their disclosure of sensitive information, and that the online sample was more socially isolated than the phone sample. Results are discussed in terms of their implications for using Internet samples in research on sensitive topics.
90.
Mohammad Mahdi Dehshibi, Mahmood Fazlali, Jamshid Shanbehzadeh. Multimedia Tools and Applications, 2014, 72(3): 2249-2273
Projection functions have been widely used for facial feature extraction and optical/handwritten character recognition because of their simplicity and efficiency. Because these transformations are not one-to-one, they may map distinct points to the same point and consequently lose detailed information. Here, we address this problem by defining an N-dimensional space to represent a single image. We then propose a one-to-one transformation in this new image space. The proposed method, which we refer to as the Linear Principal Transformation (LPT), uses eigen-analysis to extract the vector with the highest eigenvalue. Extrema in this vector are then analyzed to extract the features of interest. To evaluate the proposed method, we performed two sets of experiments, on facial feature extraction and optical character recognition, across three different data sets. The results show that the proposed algorithm outperforms the compared algorithms, improving accuracy by 1.4% up to 14%, while having comparable time complexity and efficiency.
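The exact LPT construction is not given in the abstract. The sketch below contrasts the classic many-to-one integral projection functions with the eigen-analysis step the abstract describes (keep the eigenvector of the covariance matrix with the largest eigenvalue, then analyze its extrema); the choice of treating image rows as observations is an illustrative assumption.

```python
import numpy as np

def integral_projections(img):
    """Classic (many-to-one) projection functions: row and column intensity sums."""
    return img.sum(axis=1), img.sum(axis=0)

def principal_vector(img):
    """Illustration of the eigen-analysis step: keep the eigenvector of the
    covariance matrix with the largest eigenvalue and locate its extrema."""
    rows = img.astype(float)
    rows -= rows.mean(axis=0, keepdims=True)
    cov = rows.T @ rows / max(rows.shape[0] - 1, 1)
    eigvals, eigvecs = np.linalg.eigh(cov)      # symmetric covariance -> eigh
    v = eigvecs[:, np.argmax(eigvals)]          # direction of maximal variance
    d = np.diff(np.sign(np.diff(v)))            # sign changes of the slope
    extrema = np.where(d != 0)[0] + 1           # candidate feature locations
    return v, extrema
```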