Sort order: 3,241 results found in total (search time: 15 ms); results 61-70 are shown below.
61.
Razen Harbi, Ibrahim Abdelaziz, Panos Kalnis, Nikos Mamoulis, Yasser Ebrahim, Majed Sahli 《The VLDB Journal: The International Journal on Very Large Data Bases》2016, 25(3): 355-380
State-of-the-art distributed RDF systems partition data across multiple compute nodes (workers). Some systems perform cheap hash partitioning, which may result in expensive query evaluation. Others try to minimize inter-node communication, which requires an expensive data preprocessing phase and leads to a high startup cost. Apriori knowledge of the query workload has also been used to create partitions, which, however, are static and do not adapt to workload changes. In this paper, we propose AdPart, a distributed RDF system that addresses the shortcomings of previous work. First, AdPart applies lightweight partitioning to the initial data, distributing triples by hashing on their subjects; this keeps its startup overhead low. At the same time, AdPart's locality-aware query optimizer takes full advantage of the partitioning to (1) support fully parallel processing of join patterns on subjects and (2) minimize data communication for general queries by applying hash distribution of intermediate results instead of broadcasting wherever possible. Second, AdPart monitors data access patterns and dynamically redistributes and replicates the instances of the most frequent ones among workers. As a result, the communication cost for future queries is drastically reduced or even eliminated. To control replication, AdPart implements an eviction policy for the redistributed patterns. Our experiments with synthetic and real data verify that AdPart (1) starts faster than all existing systems, (2) processes thousands of queries before other systems come online, and (3) gracefully adapts to the query load, evaluating queries on billion-scale RDF data in sub-second time.
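A minimal Python sketch of the subject-hash partitioning described above: each triple is routed to a worker by a stable hash of its subject, so all triples sharing a subject land on one worker and subject-subject joins need no inter-worker communication. The worker count, triple format, and helper names are illustrative assumptions, not AdPart's actual implementation.

```python
# Illustrative sketch of subject-based hash partitioning for RDF
# triples; NUM_WORKERS and the helpers are assumptions, not code
# from the AdPart paper.
import zlib
from collections import defaultdict

NUM_WORKERS = 4  # assumed cluster size

def worker_for(subject: str) -> int:
    # Use a stable hash (CRC32) so every node computes the same
    # placement; Python's built-in hash() is salted per process.
    return zlib.crc32(subject.encode("utf-8")) % NUM_WORKERS

def partition_triples(triples):
    """Group (subject, predicate, object) triples per worker."""
    partitions = defaultdict(list)
    for s, p, o in triples:
        partitions[worker_for(s)].append((s, p, o))
    return partitions

triples = [
    ("ex:alice", "ex:knows", "ex:bob"),
    ("ex:alice", "ex:age", '"30"'),
    ("ex:bob", "ex:knows", "ex:carol"),
]
# Triples with the same subject always land on the same worker,
# so star joins on a subject run locally, in parallel.
print(dict(partition_triples(triples)))
```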
62.
Vishanth Weerakkody, Zahir Irani, Habin Lee, Nitham Hindi, Ibrahim Osman 《Information Systems Management》2016, 33(4): 331-343
Citizens' satisfaction is acknowledged as one of the most significant influences on e-government adoption and diffusion. This study examines the impact of information quality, system quality, trust, and cost on user satisfaction with e-government services. Using a survey, the study collected 1518 valid responses from e-government service adopters across the United Kingdom. Our empirical results show that the factors identified in this study have a significant impact on U.K. citizens' satisfaction with e-government services.
63.
Ibrahim Abdelaziz, Sherif Abdou, Hassanin Al-Barhamtoshy 《Pattern Analysis & Applications》2016, 19(4): 1129-1141
The success of Hidden Markov Models (HMMs) in speech recognition applications has motivated their adoption for handwriting recognition, especially online handwriting, which, as a sequential process, closely resembles the speech signal. Some languages, such as Arabic, Farsi and Urdu, include a large number of delayed strokes that are written above or below most letters, usually delayed in time. These delayed strokes pose a modeling challenge for the conventional left-right HMM commonly used in Automatic Speech Recognition (ASR) systems. In this paper, we introduce a new approach for handling delayed strokes in Arabic online handwriting recognition using HMMs. We also show that several modeling approaches currently used in most state-of-the-art ASR systems, such as context-based tri-grapheme models, speaker-adaptive training and discriminative training, can provide similar performance improvements for Handwriting Recognition (HWR) systems. Finally, we show that a multi-pass decoder that uses the computationally less expensive models in the early passes can give an Arabic large-vocabulary HWR system practical decoding time. We evaluated the performance of our proposed Arabic HWR system using two databases with small and large lexicons. On the small-lexicon data set, our system achieved results competitive with the best reported state-of-the-art Arabic HWR systems. On the large lexicon, our system achieved promising results (accuracy and time) for a vocabulary of 64k words, with the possibility of adapting the models to specific writers for even better results.
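For readers unfamiliar with the left-right topology the paper builds on, here is a minimal Viterbi decoder for such an HMM in Python. The toy transition and emission matrices are invented for illustration; they are not the paper's trained grapheme models.

```python
import numpy as np

# Minimal Viterbi decoding for a left-to-right HMM, the topology
# conventionally used in ASR and adopted here for handwriting.
def viterbi(obs, log_pi, log_A, log_B):
    """Return the most likely state path for an observation sequence.

    obs    : sequence of discrete observation indices, length T
    log_pi : (N,)   log initial-state probabilities
    log_A  : (N, N) log transition matrix (upper-triangular for a
             left-to-right topology: states only stay or move on)
    log_B  : (N, M) log emission matrix
    """
    T, N = len(obs), log_pi.shape[0]
    delta = np.full((T, N), -np.inf)
    psi = np.zeros((T, N), dtype=int)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A  # (prev, next)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):  # backtrack through psi
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# Toy left-to-right model: 3 states, 2 observation symbols.
A = np.array([[0.6, 0.4, 0.0],
              [0.0, 0.7, 0.3],
              [0.0, 0.0, 1.0]])
B = np.array([[0.8, 0.2],
              [0.3, 0.7],
              [0.5, 0.5]])
pi = np.array([1.0, 0.0, 0.0])
with np.errstate(divide="ignore"):  # log(0) -> -inf is intended
    print(viterbi([0, 1, 1, 0], np.log(pi), np.log(A), np.log(B)))
```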
64.
O. Ali Sadek Ibrahim, D. Landa-Silva 《Soft Computing - A Fusion of Foundations, Methodologies and Applications》2016, 20(8): 3045-3061
In the context of information retrieval (IR) from text documents, the term weighting scheme (TWS) is a key component of the matching mechanism when using the vector space model. In this paper, we propose a new TWS that is based on computing the average term occurrence of terms in documents; it also uses a discriminative approach based on the document centroid vector to remove less significant weights from the documents. We call our approach Term Frequency With Average Term Occurrence (TF-ATO). An analysis of commonly used document collections shows that test collections are not fully judged, since achieving full judgement is expensive and may be infeasible for large collections. A collection is fully judged when every document in it acts as a relevant document for a specific query or a group of queries. The discriminative approach used in TF-ATO is a heuristic for improving IR effectiveness and performance, and it has the advantage of not requiring prior knowledge of relevance judgements. We compare the performance of the proposed TF-ATO to the well-known TF-IDF approach and show that using TF-ATO yields better effectiveness in both static and dynamic document collections. In addition, this paper investigates the impact that stop-word removal and our discriminative approach have on TF-IDF and TF-ATO. The results show that both stop-word removal and the discriminative approach have a positive effect on both term-weighting schemes. More importantly, the proposed discriminative approach improves IR effectiveness and performance with no information on the relevance judgements for the collection.
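A small Python sketch of how TF-ATO might be computed from the description above: each document's weights are its term frequencies divided by that document's average term occurrence, and the centroid-based discriminative step zeroes weights that do not exceed the collection centroid. Both formulas are inferred from the abstract, not taken verbatim from the paper.

```python
import numpy as np

# Sketch of TF-ATO weighting as inferred from the abstract; the
# exact normalization and centroid cut-off rule are assumptions.
def tf_ato(doc_term_counts: np.ndarray) -> np.ndarray:
    """doc_term_counts: (n_docs, n_terms) raw term frequencies."""
    weights = np.zeros_like(doc_term_counts, dtype=float)
    for i, counts in enumerate(doc_term_counts):
        present = counts > 0
        if present.any():
            ato = counts[present].mean()  # average term occurrence in doc i
            weights[i] = counts / ato
    return weights

def discriminate(weights: np.ndarray) -> np.ndarray:
    """Zero out weights at or below the document-centroid component,
    the heuristic the abstract uses to drop less significant weights."""
    centroid = weights.mean(axis=0)  # centroid vector of the collection
    return np.where(weights > centroid, weights, 0.0)

counts = np.array([[3, 0, 1],
                   [1, 2, 0],
                   [0, 1, 4]])
print(discriminate(tf_ato(counts)))
```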
65.
Emad Shihab, Akinori Ihara, Yasutaka Kamei, Walid M. Ibrahim, Masao Ohira, Bram Adams, Ahmed E. Hassan, Ken-ichi Matsumoto 《Empirical Software Engineering》2013, 18(5): 1005-1042
Bug fixing accounts for a large share of software maintenance resources. Generally, bugs are reported, fixed, verified and closed. In some cases, however, bugs have to be re-opened. Re-opened bugs increase maintenance costs, degrade the overall user-perceived quality of the software and lead to unnecessary rework by busy practitioners. In this paper, we study and predict re-opened bugs through a case study on three large open source projects, namely Eclipse, Apache and OpenOffice. We structure our study along four dimensions: (1) the work habits dimension (e.g., the weekday on which the bug was initially closed), (2) the bug report dimension (e.g., the component in which the bug was found), (3) the bug fix dimension (e.g., the amount of time it took to perform the initial fix) and (4) the team dimension (e.g., the experience of the bug fixer). We build decision trees using the aforementioned factors to predict re-opened bugs, and we perform top-node analysis to determine which factors are the most important indicators of whether or not a bug will be re-opened. Our study shows that the comment text and the last status of the bug when it is initially closed are the most important factors related to whether or not a bug will be re-opened. Using a combination of these dimensions, we can build explainable prediction models that achieve a precision of 52.1-78.6 % and a recall of 70.5-94.1 % when predicting whether a bug will be re-opened. We find that the factors that best indicate which bugs might be re-opened vary by project: the comment text is the most important factor for the Eclipse and OpenOffice projects, while the last status is the most important one for Apache. These factors should be closely examined in order to reduce the maintenance cost caused by re-opened bugs.
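A toy scikit-learn sketch of the study's decision-tree setup; the feature encoding, values and labels are invented stand-ins for the paper's four dimensions and its bug-report data, and the top-node check simply reads the root split of the fitted tree.

```python
# Sketch of a decision-tree model for predicting re-opened bugs,
# mirroring the study's setup at a toy scale. Features and data
# are illustrative stand-ins for the paper's four dimensions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import precision_score, recall_score

# Columns: weekday_closed (work habits), component_id (bug report),
# fix_time_days (bug fix), fixer_experience (team), all encoded
# numerically for brevity.
X = np.array([
    [4, 2, 1.5, 30],
    [0, 1, 12.0, 2],
    [2, 2, 0.5, 55],
    [5, 3, 20.0, 1],
    [1, 1, 3.0, 10],
    [3, 3, 15.0, 4],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = bug was re-opened

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
pred = clf.predict(X)
print("precision:", precision_score(y, pred))
print("recall:   ", recall_score(y, pred))
# Top-node analysis: the root split is the most important indicator.
print("root split on feature index:", clf.tree_.feature[0])
```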
66.
Adil Fahad, Zahir Tari, Ibrahim Khalil, Ibrahim Habib, Hussein Alnuweiri 《Computer Networks》2013, 57(9): 2040-2057
There is significant interest in the network management and industrial security communities in identifying the “best” and most relevant features of network traffic in order to properly characterize user behaviour and predict future traffic. The ability to eliminate redundant features is an important Machine Learning (ML) task because it helps to identify the best features, improving classification accuracy and reducing the computational complexity of constructing the classifier. In practice, feature selection (FS) techniques can be used as a preprocessing step to eliminate irrelevant features and as a knowledge discovery tool to reveal the “best” features in many soft computing applications. In this paper, we investigate the advantages and disadvantages of such FS techniques using newly proposed metrics (namely goodness, stability and similarity). We continue our efforts toward developing an integrated FS technique built on the key strengths of existing FS techniques. We propose a novel way to identify the “best” features efficiently and accurately by first combining the results of several well-known FS techniques to find consistent features, and then using the proposed concept of support to select a smallest set of features that covers the data optimally. An empirical study over ten high-dimensional network traffic data sets demonstrates significant gains in classifier accuracy and run-time performance compared to the individual results produced by well-known FS techniques.
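A short Python sketch of the combination idea: interpret a feature's "support" as the number of base FS techniques that select it, then keep features whose support clears a threshold. The base selectors, feature names, and threshold are illustrative choices; the paper's actual integration rule may differ.

```python
# Sketch of ensemble feature selection by "support": a feature's
# support is how many base FS techniques selected it. Selector
# outputs and the threshold below are illustrative assumptions.
from collections import Counter

# Feature subsets chosen by three hypothetical FS techniques
# (e.g., information gain, chi-square, ReliefF) on the same data.
selections = [
    {"pkt_size", "duration", "dst_port", "flags"},
    {"pkt_size", "duration", "proto"},
    {"pkt_size", "dst_port", "duration"},
]

support = Counter(f for sel in selections for f in sel)
min_support = 2  # keep features chosen by a majority of techniques
consistent = {f for f, c in support.items() if c >= min_support}
print(sorted(consistent))  # ['dst_port', 'duration', 'pkt_size']
```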
67.
68.
In this piece we take a brief peek at a possible world of robotic companions. In such a world, robots are adopted as butlers in our homes, as baby watchers, as friends and, in general, as life companions. For example, Helen lives with a robotic companion named Schpuffy. In the morning, Schpuffy checks Helen's schedule and finds she has an appointment with a neighbor to take a walk at 8:30 a.m. Schpuffy knows it takes her 30 minutes to get ready for a walk, so it wakes her at 8:00 a.m. After she gets up, Schpuffy reminds Helen of her 8:30 a.m. appointment. While she gets dressed, Schpuffy lets her know that the current weather is very cold and suggests that she wear thermal clothing. When she is about to leave the house, Schpuffy says "goodbye" and locks the door behind her. Schpuffy hasn't gained major acceptance: it's not in every home, or even in a tiny fraction of homes. However, it's a bold new area of research that is taking shape, with early commercial products already hitting the marketplace. It could soon be the chief appliance in your home, or the new interface technology for smart spaces.
69.
Khaled M. Alalayah, Fatma S. Alrayes, Jaber S. Alzahrani, Khadija M. Alaidarous, Ibrahim M. Alwayle, Heba Mohsen, Ibrahim Abdulrab Ahmed, Mesfer Al Duhayyim 《Computer Systems Science and Engineering》2023, 46(3): 3121-3139
With the increasing advancement of smart industries, cybersecurity has become a vital growth factor in the success of industrial transformation. The Industrial Internet of Things (IIoT), or Industry 4.0, has revolutionized the concepts of manufacturing and production altogether. In Industry 4.0, powerful Intrusion Detection Systems (IDS) play a significant role in ensuring network security. Though various intrusion detection techniques have been developed so far, it remains challenging to protect the intricate data of networks, because conventional Machine Learning (ML) approaches are inadequate for the demands of dynamic IIoT networks. Existing Deep Learning (DL) approaches, however, can be employed to identify anonymous intrusions. Therefore, the current study proposes a Hunger Games Search Optimization with Deep Learning-Driven Intrusion Detection (HGSODL-ID) model for the IIoT environment. The presented HGSODL-ID model exploits a linear normalization approach to transform the input data into a useful format. The HGSO algorithm is employed for Feature Selection (HGSO-FS) to reduce the curse of dimensionality. Moreover, Sparrow Search Optimization (SSO) is utilized with a Graph Convolutional Network (GCN) to classify and identify intrusions in the network. Finally, the SSO technique is exploited to fine-tune the hyper-parameters of the GCN model. The proposed HGSODL-ID model was experimentally validated on a benchmark dataset, and the results confirmed the superiority of the proposed method over recent approaches.
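The pipeline's shape can be sketched in a few lines of Python. Min-max scaling stands in for the linear normalization step, a plain random search stands in for the HGSO metaheuristic, and logistic regression replaces the GCN classifier; the toy data, stand-ins, and thresholds are all illustrative assumptions, not the paper's method.

```python
# Skeleton of the HGSODL-ID preprocessing and feature-selection
# stages, with simple stand-ins for the paper's components.
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))            # toy traffic features
y = (X[:, 0] + X[:, 3] > 0).astype(int)   # toy intrusion labels

X = MinMaxScaler().fit_transform(X)       # linear normalization step

def fitness(mask):
    """Score a candidate feature subset by cross-validated accuracy."""
    if not mask.any():
        return 0.0
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

best_mask, best_score = None, -1.0
for _ in range(30):                       # stand-in for HGSO iterations
    mask = rng.random(X.shape[1]) < 0.5   # random candidate subset
    score = fitness(mask)
    if score > best_score:
        best_mask, best_score = mask, score

print("selected features:", np.flatnonzero(best_mask))
print("cv accuracy:", round(best_score, 3))
```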
70.
Skin lesions have become a critical health problem worldwide, and early identification of skin lesions from dermoscopic images can raise survival rates. Classifying a skin lesion from such dermoscopic images, however, is a tedious task, and the accuracy of skin lesion classification is improved by the use of deep learning models. Recently, convolutional neural networks (CNNs) have become established in this domain; they are highly effective for feature extraction, leading to enhanced classification. With this motivation, this study focuses on the design of artificial intelligence (AI) based solutions, particularly deep learning (DL) algorithms, to distinguish malignant skin lesions from benign lesions in dermoscopic images. The study presents an automated skin lesion detection and classification technique that combines an optimized stacked sparse autoencoder (OSSAE) based feature extractor with a backpropagation neural network (BPNN), named the OSSAE-BPNN technique. The proposed technique includes a multi-level thresholding based segmentation step for detecting the affected lesion region. The OSSAE-based feature extractor and BPNN-based classifier are then employed for skin lesion diagnosis, and the parameters of the SSAE model are tuned using the seagull optimization (SGO) algorithm. To showcase the enhanced outcomes of the OSSAE-BPNN model, a comprehensive experimental analysis was performed on a benchmark dataset. The experimental findings demonstrate that the OSSAE-BPNN approach outperforms other current strategies on several assessment metrics.
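As a sketch of the segmentation step, the snippet below applies multi-level Otsu thresholding with scikit-image and keeps the darkest region as the lesion candidate. The class count and the darkest-region heuristic are assumptions for illustration; the abstract does not specify the paper's exact thresholding scheme.

```python
# Sketch of multi-level thresholding segmentation for isolating a
# lesion region, using multi-Otsu from scikit-image. The class count
# and "darkest region is the lesion" rule are illustrative choices.
import numpy as np
from skimage import data
from skimage.filters import threshold_multiotsu

image = data.camera()                          # stand-in grayscale image
thresholds = threshold_multiotsu(image, classes=3)
regions = np.digitize(image, bins=thresholds)  # pixel labels 0, 1, 2

# Assume the darkest region corresponds to the lesion candidate.
lesion_mask = regions == 0
print("thresholds:", thresholds)
print("lesion pixels:", int(lesion_mask.sum()))
```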