Full-text access type
Paid full text | 1164 articles |
Free | 53 articles |
Free (domestic) | 15 articles |
Subject category
Electrical engineering | 19 articles |
General | 2 articles |
Chemical industry | 257 articles |
Metalworking | 25 articles |
Machinery and instrumentation | 32 articles |
Building science | 52 articles |
Mining engineering | 1 article |
Energy and power engineering | 81 articles |
Light industry | 134 articles |
Hydraulic engineering | 9 articles |
Petroleum and natural gas | 11 articles |
Radio engineering | 133 articles |
General industrial technology | 188 articles |
Metallurgical industry | 104 articles |
Nuclear technology | 12 articles |
Automation technology | 172 articles |
Publication year
2024 | 6 articles |
2023 | 17 articles |
2022 | 45 articles |
2021 | 50 articles |
2020 | 55 articles |
2019 | 47 articles |
2018 | 55 articles |
2017 | 69 articles |
2016 | 53 articles |
2015 | 33 articles |
2014 | 39 articles |
2013 | 102 articles |
2012 | 53 articles |
2011 | 54 articles |
2010 | 48 articles |
2009 | 44 articles |
2008 | 44 articles |
2007 | 25 articles |
2006 | 29 articles |
2005 | 23 articles |
2004 | 16 articles |
2003 | 12 articles |
2002 | 7 articles |
2001 | 7 articles |
2000 | 16 articles |
1999 | 11 articles |
1998 | 38 articles |
1997 | 22 articles |
1996 | 22 articles |
1995 | 20 articles |
1994 | 17 articles |
1993 | 16 articles |
1992 | 15 articles |
1991 | 6 articles |
1990 | 9 articles |
1989 | 11 articles |
1988 | 9 articles |
1987 | 7 articles |
1986 | 3 articles |
1985 | 11 articles |
1984 | 14 articles |
1983 | 7 articles |
1982 | 10 articles |
1981 | 4 articles |
1980 | 4 articles |
1977 | 3 articles |
1976 | 4 articles |
1973 | 3 articles |
1969 | 3 articles |
1940 | 3 articles |
Sort order: 1,232 results found (search time: 15 ms)
101.
Information technology (IT) in Saudi Arabia: Culture and the acceptance and use of IT (Cited by: 2; self-citations: 0, other citations: 2)
The unified theory of acceptance and use of technology (UTAUT), a model of user acceptance of IT, synthesizes elements from several prevailing user acceptance models. It has been credited with explaining a larger proportion of the variance in 'intention to use' and 'usage behavior' than preceding models do. However, it had not been validated in non-Western cultures. Using survey data from 722 knowledge workers in Saudi Arabia who used desktop computer applications on a voluntary basis, we examined the relative power of a modified version of UTAUT in determining 'intention to use' and 'usage behavior'. We found that the model explained 39.1% of the variance in intention to use and 42.1% of the variance in usage. In addition, drawing on the theory of cultural dimensions, we hypothesized and tested similarities and differences between the North American and Saudi validations of UTAUT in terms of the cultural differences that affect the organizational acceptance of IT in the two societies.
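As a rough illustration of how such variance-explained figures are produced (the paper's actual survey items, moderators, and estimation procedure are not shown here; the file and column names below are hypothetical), one might fit an ordinary least squares model and read off R²:

```python
# Minimal sketch of a UTAUT-style regression (hypothetical column names;
# the paper's actual survey items and moderators are not reproduced).
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("utaut_survey.csv")  # hypothetical file of survey responses

# Core UTAUT predictors of behavioral intention.
X = sm.add_constant(df[["performance_expectancy", "effort_expectancy", "social_influence"]])
y = df["intention_to_use"]

model = sm.OLS(y, X).fit()
print(f"R^2 for intention to use: {model.rsquared:.3f}")  # the paper reports 0.391
```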
102.
Class decomposition describes the process of segmenting each class into a number of homogeneous subclasses, which can be achieved naturally through clustering. Class decomposition can provide a number of benefits to supervised learning, especially to ensembles. It can be a computationally efficient way to produce a linearly separable data set without the feature engineering required by techniques such as support vector machines and deep learning. For ensembles, the decomposition is a natural way to increase diversity, a key factor in the success of ensemble classifiers. In this paper, we propose applying class decomposition to Random Forests, a state-of-the-art ensemble learning method. Medical data for patient diagnosis may benefit greatly from this technique, as the same disease can present with diverse symptoms. We have experimentally validated the proposed method on a number of data sets, mainly from the medical domain; the results show clearly that it significantly improves the accuracy of Random Forests.
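A minimal sketch of the technique, assuming k-means as the clustering step (the paper's exact decomposition setup may differ): cluster each class into subclasses, train the forest on the subclass labels, and map predictions back to the parent classes.

```python
# Class decomposition for Random Forests: k-means within each class produces
# subclass labels; the forest is trained on those, and predictions are mapped
# back to the original classes.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

def decompose_classes(X, y, k=3):
    """Split each class into up to k subclasses via clustering; return subclass
    labels plus a map from each subclass label back to its original class."""
    sub_y = np.empty(len(y), dtype=int)
    sub_to_class, next_label = {}, 0
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        clusters = KMeans(n_clusters=min(k, len(idx)), n_init=10).fit_predict(X[idx])
        for cl in np.unique(clusters):
            sub_y[idx[clusters == cl]] = next_label
            sub_to_class[next_label] = c
            next_label += 1
    return sub_y, sub_to_class

# Train on subclass labels, then map predictions back to the parent classes.
X, y = np.random.rand(300, 10), np.random.randint(0, 2, 300)  # stand-in data
sub_y, sub_to_class = decompose_classes(X, y)
rf = RandomForestClassifier(n_estimators=100).fit(X, sub_y)
pred = np.array([sub_to_class[p] for p in rf.predict(X)])
```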
103.
Laith Mohammad Abualigah, Ahamad Tajudin Khader, Essam Said Hanandeh, Applied Intelligence, 2018, 48(11): 4047-4071
In this paper, a novel text clustering method, an improved krill herd algorithm with a hybrid function (MMKHA), is proposed as an efficient way to obtain promising and precise clustering results in this domain. Krill herd is a recent swarm-based optimization algorithm that imitates the behavior of a group of live krill. Its potential is high because it balances exploration and exploitation, complementing the strength of local nearby searching with global wide-range searching. Text clustering is the process of grouping large amounts of text documents into coherent clusters in which documents in the same cluster are relevant to each other. For the experiments, six versions of the algorithm are thoroughly investigated to determine the best version for text clustering. Eight benchmark text datasets, available from the Laboratory of Computational Intelligence (LABIC), are used for the evaluation. Seven evaluation measures are used to validate the proposed algorithms: ASDC, accuracy, precision, recall, F-measure, purity, and entropy. The proposed algorithms are compared with other successful algorithms published in the literature. The results show that the proposed improved krill herd algorithm with a hybrid function achieved the best results on almost all datasets in comparison with the other algorithms.
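For illustration, two of the evaluation measures named above, purity and entropy, can be computed from a contingency table of true classes versus predicted clusters; a short sketch using their standard definitions (not code from the paper):

```python
# Purity and entropy of a clustering, from the cluster-vs-class contingency
# table: purity rewards clusters dominated by one class; entropy penalizes
# clusters whose class distribution is mixed.
import numpy as np
from sklearn.metrics.cluster import contingency_matrix

def purity(labels_true, labels_pred):
    m = contingency_matrix(labels_true, labels_pred)
    return m.max(axis=0).sum() / m.sum()  # fraction of docs in each cluster's majority class

def entropy(labels_true, labels_pred):
    m = contingency_matrix(labels_true, labels_pred).astype(float)
    cluster_sizes = m.sum(axis=0)
    p = m / cluster_sizes                      # class distribution within each cluster
    with np.errstate(divide="ignore", invalid="ignore"):
        h = np.where(p > 0, -p * np.log2(p), 0.0).sum(axis=0)
    return (cluster_sizes / m.sum() * h).sum() # size-weighted average over clusters
```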
104.
Load balancing is a crucial factor in IPTV delivery networks. It aims at utilizing resources efficiently, maximizing throughput, and minimizing the request rejection rate. The peer-service area is a recent architecture for IPTV delivery networks that overcomes the flaws of previous architectures; however, it still suffers from load imbalance. This paper investigates the load imbalance problem and augments the peer-service area architecture to overcome it. To achieve load balancing over the proposed architecture, we suggest a new load-balancing algorithm that considers both the expected and the current load of contents and servers. The algorithm consists of two stages: the first replicates contents according to their expected load, and the second performs content-aware request distribution. To test its effectiveness, we compared the proposed algorithm with both the traditional Round Robin algorithm and the Cho algorithm. The experimental results show that the proposed algorithm outperforms the other two in terms of load balance, throughput, and request rejection rate.
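A minimal sketch of the two-stage idea described above (replication proportional to expected load, then content-aware least-loaded dispatch); the contents, servers, and parameters are hypothetical and the paper's actual algorithm is not reproduced:

```python
def replicate(expected_load, total_replicas):
    """Stage 1: give each content a replica count proportional to its expected load."""
    total = sum(expected_load.values())
    return {content: max(1, round(total_replicas * load / total))
            for content, load in expected_load.items()}

def dispatch(content, placement, server_load):
    """Stage 2: route a request for `content` to the least-loaded server holding it."""
    target = min(placement[content], key=lambda s: server_load[s])
    server_load[target] += 1
    return target

# Illustrative use with hypothetical contents and servers.
print(replicate({"news": 50, "sports": 30, "movies": 20}, total_replicas=10))
placement = {"news": ["s1", "s2"], "sports": ["s2", "s3"], "movies": ["s3"]}
load = {"s1": 0, "s2": 0, "s3": 0}
print(dispatch("sports", placement, load))  # whichever of s2/s3 is lighter
```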
105.
It was predicted that by the year 2020 more than 50 billion devices would be connected to the Internet. Traditionally, cloud computing has been the preferred platform for aggregating, processing, and analyzing IoT traffic. However, the cloud may not be the preferred platform for IoT devices in terms of responsiveness and immediate processing and analysis of IoT data and requests. For this reason, fog (or edge) computing has emerged to overcome such problems: fog nodes are placed in close proximity to IoT devices and are primarily responsible for the local aggregation, processing, and analysis of the IoT workload, resulting in notable gains in performance and responsiveness. One of the open challenges in fog computing is efficient scalability, in which a minimal number of fog nodes is allocated for the offered IoT workload while the SLA and QoS parameters are still satisfied. To address this problem, we present an analytical queueing model to study and analyze the performance of a fog computing system. Under any offered IoT workload, the model determines the number of fog nodes needed to satisfy the QoS parameters. From the model, we derive formulas for key performance metrics, including system response time, system loss rate, system throughput, CPU utilization, and the mean number of request messages in the system. The analytical model is cross-validated using discrete-event simulations.
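The paper's exact queueing discipline is not given here; as a hedged illustration, an M/M/c approximation can size the fog tier by searching for the smallest number of nodes whose Erlang-C mean response time meets a QoS target:

```python
# Sizing fog nodes with an M/M/c queue (an assumption for illustration, not
# necessarily the paper's model): find the smallest c meeting a response-time target.
import math

def mmc_response_time(lam, mu, c):
    """Mean response time W of an M/M/c queue via Erlang C, or None if unstable."""
    a = lam / mu                              # offered load in Erlangs
    if a >= c:
        return None                           # queue is unstable
    rho = a / c
    partial_sum = sum(a**k / math.factorial(k) for k in range(c))
    p_wait = (a**c / math.factorial(c)) / ((1 - rho) * partial_sum
                                           + a**c / math.factorial(c))
    return p_wait / (c * mu - lam) + 1 / mu   # mean wait + mean service time

def min_fog_nodes(lam, mu, target_w):
    c = 1
    while (w := mmc_response_time(lam, mu, c)) is None or w > target_w:
        c += 1
    return c

# Illustrative numbers: 900 req/s arriving, 100 req/s per node, 12 ms target.
print(min_fog_nodes(lam=900.0, mu=100.0, target_w=0.012))
```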
106.
Anas Hatim, Said Belkouch, Mohamed El Aakif, Moha M'rabet Hassani, Noureddine Chabini, Multimedia Tools and Applications, 2013, 67(3): 667-685
The Discrete Cosine Transform (DCT) is one of the most widely used techniques for image compression, and several algorithms have been proposed to implement the 2-D DCT. The scaled SDCT algorithm is an optimization of the 1-D DCT that gathers all the multiplications at the end. In this paper, in addition to a hardware implementation on an FPGA, an extended optimization is performed by merging those multiplications into the quantization block without affecting image quality. A simplified quantization is also used to keep the performance of the whole chain high. Tests in the MATLAB environment have shown that the proposed approach produces images of nearly the same quality as those obtained with the JPEG standard. An FPGA-based implementation of the approach is presented and compared with other state-of-the-art techniques; the target is an Altera Cyclone II FPGA using the Quartus synthesis tool. Results show that our approach outperforms the others in terms of processing speed, resource usage, and power consumption. A comparison is also made between this architecture and a distributed-arithmetic-based architecture.
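A minimal sketch of the merging idea in floating point (the paper's fixed-point FPGA datapath is not reproduced): if a scaled DCT leaves an elementwise scale matrix S to be applied, the scale-then-quantize step round((C·S)/Q) collapses into a single divide by the precomputed table Q/S:

```python
# Folding the scaled-DCT output multiplications into quantization: instead of
# multiplying coefficients by S and then dividing by the table Q, divide once
# by the merged table Q/S, precomputed offline.
import numpy as np
from scipy.fftpack import dct

def dct2(block):
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

Q = np.full((8, 8), 16.0)   # stand-in quantization table (real JPEG tables vary)
S = np.ones((8, 8))         # stand-in scale factors of a scaled DCT

merged = Q / S              # precomputed once, offline
block = np.random.rand(8, 8) * 255
coeffs = dct2(block)        # a full DCT stands in for the scaled one here
quantized = np.round(coeffs / merged)  # one divide replaces scale-then-quantize
```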
107.
Phase equilibrium relations in the V2O3-La2O3 system were investigated by X-ray powder diffraction and metallographic techniques. Binary mixtures, prepared from high-purity V2O3 and La2O3 powders, were equilibrated at 1600 °C and then arc-melted under a partial pressure of argon. The specimens were heat-treated at various predetermined temperatures for prolonged periods, and the phases present were identified by reflected-light microscopy and X-ray powder diffraction. The system contains only one binary compound, LaVO3. A eutectic between V2O3 and LaVO3 was established at 1750 °C and 19 mol % La2O3, and another between LaVO3 and La2O3 at 1765 °C and 75 mol % La2O3. No appreciable solid solubility was detected in the system.
108.
In this paper a hybrid finite element method is applied to the evaluation of the stress intensity factors K_I and K_II of unidirectional fiber-reinforced composites. In order to satisfy the stress singularity at the crack tip, a singular super-element based on a modified complementary energy principle is developed. The stress and displacement fields in the super-element are expressed in terms of polynomials of two complex variables z_1 and z_2 in the transformed plane. The stiffness matrix of the super-element is determined by using a line integral along the boundary of the super-element, and the displacement vector is expressed in terms of the element nodal displacement vector {q} and a properly selected shape function defined along the element boundary. Numerical results for K_I and K_II of glass-epoxy and graphite-epoxy unidirectional composites, with cracks along the diameter of a circular cutout as well as elliptical cutouts, are evaluated.
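For reference, the stress intensity factors evaluated above have the standard crack-tip definitions (textbook forms, not taken from the paper itself):

```latex
% Mode-I and mode-II stress intensity factors as crack-tip limits of the
% normal and shear stresses ahead of the crack (theta = 0):
K_I    = \lim_{r \to 0} \sqrt{2\pi r}\, \sigma_{yy}(r, \theta = 0), \qquad
K_{II} = \lim_{r \to 0} \sqrt{2\pi r}\, \sigma_{xy}(r, \theta = 0)
```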
109.
Ali Hemmatifar, Mohammad Said Saidi, Arman Sadeghi, Mahdi Sani, Microfluidics and Nanofluidics, 2013, 14(1-2): 265-276
Dielectrophoresis (DEP) is an electrokinetic phenomenon used to manipulate micro- and nanoparticles in micron-sized devices with high sensitivity. In recent years, electrode-based DEP with narrow oblique electrodes patterned in microchannels has been used for particle manipulation. In this theoretical study, a microchannel with triangular electrodes is presented and compared in detail with one using oblique electrodes. For each shape, the behavior of particles is compared for three different configurations of applied voltages. The electric field, the resultant DEP force, and the particle trajectories for each configuration are computed with the Rayan native code, and the separation efficiency of the two systems is then assessed and compared. The results demonstrate a higher lateral DEP force (the force responsible for particle separation) that is distributed more widely across the channel width for triangular electrodes than for oblique ones. The proposed electrode shape can also separate particles by attracting negative-DEP particles to, or propelling them from, the flow centerline, according to the configuration of the applied voltages. A major deficiency of oblique electrodes, the streamwise variation of the direction of the lateral DEP force near the electrodes, is also eliminated by the proposed shape. In addition, with a proper voltage configuration, the triangular electrodes require lower voltages for particle focusing than the oblique ones.
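For context, the DEP force on a spherical particle in the standard dipole approximation (a textbook formula, not the paper's numerical model) can be sketched as:

```python
# Time-averaged DEP force on a spherical particle (dipole approximation):
#   F_DEP = 2*pi*r^3 * eps_m * Re(CM) * grad(|E_rms|^2),
# where CM = (eps_p* - eps_m*) / (eps_p* + 2*eps_m*) is the Clausius-Mossotti factor.
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def complex_permittivity(eps_r, sigma, omega):
    return eps_r * EPS0 - 1j * sigma / omega

def dep_force(radius, eps_p, sigma_p, eps_m, sigma_m, omega, grad_E2):
    """Force vector in N; grad_E2 is the gradient of |E_rms|^2, in V^2/m^3."""
    ep = complex_permittivity(eps_p, sigma_p, omega)
    em = complex_permittivity(eps_m, sigma_m, omega)
    cm = (ep - em) / (ep + 2 * em)  # Clausius-Mossotti factor
    return 2 * np.pi * radius**3 * eps_m * EPS0 * cm.real * np.asarray(grad_E2)

# Illustrative numbers: a 5-um polystyrene bead in water at 1 MHz.
F = dep_force(5e-6, 2.55, 1e-3, 78.0, 1e-2, 2 * np.pi * 1e6, [0.0, 1e13, 0.0])
print(F)  # the sign of Re(CM) decides positive vs negative DEP
```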
110.
Luana Batista, Luis Da Costa, Said Berriah, Helmut Lademann, Expert Systems with Applications, 2013, 40(8): 3128-3136
Chlor-alkali production is one of the largest industrial-scale electro-syntheses in the world. Plants with more than 1000 individual reactors are common, in which chlorine and hydrogen are separated only by 0.2 mm thin membranes. Wrong operating conditions can cause explosions and highly toxic gas releases, as well as irreversible damage to very expensive cell components, with dramatic maintenance costs and production loss. In this paper, a Multi-Expert System based on first-order logic rules and Decision Forests is proposed to detect abnormal operating conditions of membrane-cell electrolyzers and to advise the operator accordingly. Robustness to missing data, an important issue in industrial applications in general, is achieved by means of a Dynamic Selection strategy. Experiments performed with real-world electrolyzer data indicate that the proposed system can detect the different operating modes even in the presence of high levels of missing data, or "wrong" data resulting from maloperation, which is essential for precise fault detection and advice generation.
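A minimal sketch of a dynamic-selection strategy for missing data (an illustration of the idea named above, not the paper's implementation): train one expert per feature subset, and at prediction time poll only the experts whose inputs are present.

```python
# Dynamic selection over a pool of forest experts, each trained on a feature
# subset; a sample with missing sensors is classified only by the experts
# whose required features are available.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class MultiExpert:
    def __init__(self, feature_subsets, n_trees=50):
        self.subsets = feature_subsets
        self.experts = [RandomForestClassifier(n_estimators=n_trees)
                        for _ in feature_subsets]

    def fit(self, X, y):
        for subset, expert in zip(self.subsets, self.experts):
            expert.fit(X[:, subset], y)
        return self

    def predict_one(self, x):
        """Poll only the experts whose features are not missing; majority vote."""
        votes = [expert.predict(x[subset].reshape(1, -1))[0]
                 for subset, expert in zip(self.subsets, self.experts)
                 if not np.isnan(x[subset]).any()]
        return np.bincount(votes).argmax() if votes else None

X, y = np.random.rand(200, 6), np.random.randint(0, 3, 200)  # stand-in data
me = MultiExpert([[0, 1, 2], [2, 3], [4, 5]]).fit(X, y)
x = X[0].copy(); x[4] = np.nan     # simulate a missing sensor reading
print(me.predict_one(x))           # only experts 1 and 2's subsets still apply
```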