101.
Interoperability is the ability of systems to provide services to and accept services from other systems, and to use the exchanged services to operate together more effectively. That interoperability can be improved implies that metrics for measuring it can be defined, and measuring the interoperability between systems requires an interoperability assessment model. This paper surveys the existing interoperability assessment models and provides a comparative analysis of the similarities and differences in their philosophy and implementation. The analysis yields a set of recommendations for any party interested in creating or improving an interoperability assessment model.
102.
M. Hariharan, C.Y. Fook, R. Sindhu, Abdul Hamid Adom, Sazali Yaacob 《Digital Signal Processing》2013,23(3):952-959
Dysfluency, or stuttering, is a break or interruption of normal speech, such as the repetition, prolongation, or interjection of syllables, sounds, words, or phrases, and involuntary silent pauses or blocks in communication. Stuttering assessment through manual classification of speech dysfluencies is subjective, inconsistent, time consuming, and prone to error. This paper proposes an objective evaluation of speech dysfluencies based on the wavelet packet transform with sample entropy features. Dysfluent speech signals are decomposed into six levels using the wavelet packet transform, and sample entropy (SampEn) features extracted at every level of decomposition are used to characterize the speech dysfluencies (stuttered events). Three different classifiers, namely k-nearest neighbor (kNN), a linear discriminant analysis (LDA)-based classifier, and the support vector machine (SVM), are used to investigate the performance of the sample entropy features for classifying speech dysfluencies, with 10-fold cross-validation used to test the reliability of the classifier results. The effect of different wavelet families on classification performance is also investigated. Experimental results demonstrate that the proposed features and classification algorithms give a very promising classification accuracy of 96.67% with a standard deviation of 0.37, and that the proposed method can help speech-language pathologists classify speech dysfluencies.
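The sample entropy feature used in this abstract can be illustrated with a minimal pure-Python sketch. The function name and the defaults m = 2, r = 0.2 are illustrative choices, not taken from the paper; in practice the tolerance r is usually scaled by the signal's standard deviation.

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy SampEn(m, r) of a 1-D signal.

    B counts pairs of length-m templates whose Chebyshev distance is
    at most r; A does the same for length m + 1. SampEn = -ln(A / B).
    Both lengths use the same n - m template start positions, as in
    the standard definition.
    """
    n = len(x)

    def matches(length):
        count = 0
        for i in range(n - m):
            for j in range(i + 1, n - m):
                if max(abs(x[i + k] - x[j + k]) for k in range(length)) <= r:
                    count += 1
        return count

    b = matches(m)
    a = matches(m + 1)
    if a == 0 or b == 0:
        return float("inf")  # no matches: entropy is undefined/infinite
    return -math.log(a / b)

# A perfectly regular signal is maximally self-similar: SampEn = 0.
print(sample_entropy([0.0, 1.0] * 50))
```

Low sample entropy indicates a regular, self-similar signal; irregular signals such as stuttered speech segments score higher, which is what makes SampEn usable as a discriminative feature.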
103.
Mehdi Banitalebi Dehkordi, Hamid Reza Abutalebi, Mohammad Reza Taban 《Digital Signal Processing》2013,23(4):1239-1246
In this paper, we propose a source localization algorithm based on a sparse Fast Fourier Transform (FFT)-based feature extraction method and spatial sparsity. We represent the sound source positions as a sparse vector by discretizing the space with a circular grid. The location vector is related to the microphone measurements through a linear equation that can be estimated at each microphone. For this linear dimensionality reduction, we utilize Compressive Sensing (CS) together with a two-level FFT-based feature extraction method that combines two sets of audio signal features, covering both the short-time and long-time properties of the signal. The proposed feature extraction method leads to a sparse representation of the audio signals and thus achieves a significant reduction in their dimensionality. Compared with state-of-the-art methods, the proposed method improves accuracy while, in some cases, also reducing complexity.
104.
Mohamed Cheriet, Reza Farrahi Moghaddam, Rachid Hedjam 《Computer Vision and Image Understanding》2013,117(3):269-280
Almost all binarization methods have a few parameters that require setting. However, they do not usually achieve their upper-bound performance unless the parameters are individually set and optimized for each input document image. In this work, a learning framework for the optimization of binarization methods is introduced, designed to determine the optimal parameter values for a document image. The framework, which works with any binarization method, has a standard structure and performs three main steps: (i) extract features, (ii) estimate optimal parameters, and (iii) learn the relationship between features and optimal parameters. First, an approach is proposed to generate numerical feature vectors from 2D data: the statistics of various maps are extracted and then combined, in a nonlinear way, into a final feature vector. The optimal behavior is learned using support vector regression (SVR). Although the framework works with any binarization method, two methods are considered as typical examples in this work: the grid-based Sauvola method, and Lu's method, which placed first in the DIBCO'09 contest. The experiments are performed on the DIBCO'09 and H-DIBCO'10 datasets, as well as on combinations of these datasets, with promising results.
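For context, the Sauvola method mentioned in this abstract thresholds each pixel with the local formula T = m·(1 + k·(s/R − 1)); the sensitivity parameter k is exactly the kind of value such a learning framework would estimate per image. A minimal sketch follows; the function name and the example window statistics are illustrative, not taken from the paper.

```python
def sauvola_threshold(mean, std, k=0.5, R=128.0):
    """Sauvola local threshold: T = m * (1 + k * (s / R - 1)).

    mean, std: mean and standard deviation of the local window,
    k: sensitivity parameter (a candidate for per-image tuning),
    R: dynamic range of the standard deviation (128 for 8-bit images).
    """
    return mean * (1.0 + k * (std / R - 1.0))

# In a flat background region (low std) the threshold drops well below
# the local mean, so faint noise is not binarized as ink; in a
# high-contrast region it stays closer to the mean.
print(sauvola_threshold(mean=200.0, std=10.0, k=0.5))
print(sauvola_threshold(mean=120.0, std=60.0, k=0.5))
```

Because the best k (and window size) varies from one degraded document to the next, predicting these values from image features, as the framework does, can recover performance that a single global setting would lose.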
105.
This paper investigates the use of time-adaptive self-organizing map (TASOM)-based active contour models (ACMs) for detecting the boundaries of the human eye sclera and tracking its movements in a sequence of images. The task begins with extracting the head boundary based on a skin-color model. The eye strip is then located, with acceptable accuracy, using a morphological method, and eye features such as the iris center and eye corners are detected from the iris edge information. A TASOM-based ACM is used to extract the inner boundary of the eye. Finally, by tracking the changes in the neighborhood characteristics of the eye-boundary-estimating neurons, the eyes are tracked effectively. The original TASOM algorithm is found to have some weaknesses in this application: the formation of undesired twists in the neuron chain, holes in the boundary, a lengthy chain of neurons, and the low speed of the algorithm. These weaknesses are overcome by introducing a new method for finding the winning neuron, a new definition of unused neurons, and a new method of selecting features and applying them to the network. Experimental results show a very good performance for the proposed method in general, and a better performance than that of the gradient vector flow (GVF) snake-based method.
106.
Aminollah Mahabadi, Hamid Sarbazi-Azad, Ebrahim Khodaie, Keivan Navi 《The Journal of supercomputing》2008,45(1):1-14
This paper proposes an efficient parallel algorithm for computing Lagrange interpolation on k-ary n-cube networks. It exploits the fact that a k-ary n-cube can be decomposed into n link-disjoint Hamiltonian cycles, over which the Lagrange polynomial is interpolated using the full bandwidth of the network. Communication in the main phase of the algorithm is based on an all-to-all broadcast over the n link-disjoint Hamiltonian cycles, exploiting all network channels and thus using network resources with high efficiency. A performance evaluation of the proposed algorithm reveals an optimum speedup for the typical range of system parameters used in current state-of-the-art implementations.
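As a serial reference for what the parallel algorithm computes, Lagrange interpolation through points (x_i, y_i) can be sketched by direct summation of the basis terms. The function name is illustrative; the paper's contribution lies in distributing the terms of this sum over the n link-disjoint Hamiltonian cycles, which is not reproduced here.

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through the
    points (xs[i], ys[i]) at position x.

    Each term is ys[i] times the Lagrange basis polynomial
    L_i(x) = prod_{j != i} (x - xs[j]) / (xs[i] - xs[j]).
    """
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Three samples of y = x^2 reproduce the parabola exactly: f(3) = 9.
print(lagrange_interpolate([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 3.0))
```

Since the terms of the outer sum are independent, they parallelize naturally: each node can own a subset of terms and the final reduction maps onto the all-to-all broadcast described in the abstract.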
107.
Hamid Noori, Farhad Mehdipour, Kazuaki Murakami, Koji Inoue, Morteza Saheb Zamani 《The Journal of supercomputing》2008,45(3):313-340
To improve the performance of embedded processors, an effective technique is to collapse critical computation subgraphs into application-specific instruction-set extensions and execute them on custom functional units. The problem with this approach is the immense cost and the long time required to design a new processor for each application. As a solution, we propose an adaptive extensible processor in which custom instructions (CIs) are generated and added after chip fabrication. To support this feature, custom functional units are replaced by a reconfigurable matrix of functional units (FUs). A systematic quantitative approach is used to determine the appropriate structure of the reconfigurable functional unit (RFU). We also introduce an integrated framework for generating mappable CIs on the RFU. Using this architecture, performance is improved by a factor of up to 1.33, with an average improvement of 1.16, compared to a 4-issue in-order RISC processor. By partitioning the configuration memory, detecting similar/subset CIs, and merging small CIs, the size of the configuration memory is reduced by 40%.
108.
About 20 years ago, Markus and Robey noted that most research on IT impacts had been guided by deterministic perspectives and had neglected to use an emergent perspective, which could account for contradictory findings. They further observed that most research in this area had been carried out using variance theories at the expense of process theories. Finally, they suggested that more emphasis on multilevel theory building would likely improve empirical reliability. In this paper, we reiterate the observations and suggestions made by Markus and Robey on the causal structure of IT impact theories and carry out an analysis of empirical research published in four major IS journals, Management Information Systems Quarterly (MISQ), Information Systems Research (ISR), the European Journal of Information Systems (EJIS), and Information and Organization (I&O), to assess compliance with those recommendations. Our final sample consisted of 161 theory-driven articles, accounting for approximately 21% of all the empirical articles published in these journals. Our results first reveal that 91% of the studies in MISQ, ISR, and EJIS focused on deterministic theories, while 63% of those in I&O adopted an emergent perspective. Furthermore, 91% of the articles in MISQ, ISR, and EJIS adopted a variance model; this compares with 71% from I&O that applied a process model. Lastly, mixed levels of analysis were found in 14% of all the surveyed articles. Implications of these findings for future research are discussed.
109.
A Randomized Algorithm for Online Unit Clustering
In this paper, we consider the online version of the following problem: partition a set of input points into subsets, each enclosable by a unit ball, so as to minimize the number of subsets used. In the one-dimensional case, we show that, surprisingly, the naïve upper bound of 2 on the competitive ratio can be beaten: we present a new randomized 15/8-competitive online algorithm. We also provide some lower bounds and an extension to higher dimensions.
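The naïve baseline this abstract refers to can be sketched as a simple greedy rule: put each arriving point into any existing cluster whose span would stay within length 1, otherwise open a new cluster. The function name is illustrative, and the randomized 15/8-competitive algorithm from the paper is more involved and not reproduced here.

```python
def online_unit_clustering(points):
    """Greedy online 1-D unit clustering (the 2-competitive baseline).

    Points arrive one at a time and must be assigned irrevocably.
    Each cluster is tracked as [min_point, max_point] with
    max_point - min_point <= 1, i.e. it fits in a unit interval.
    Returns the number of clusters opened.
    """
    clusters = []
    for p in points:
        for c in clusters:
            # p joins this cluster if the widened span still fits in a unit
            if max(c[1], p) - min(c[0], p) <= 1.0:
                c[0] = min(c[0], p)
                c[1] = max(c[1], p)
                break
        else:
            clusters.append([p, p])  # no cluster fits: open a new one
    return len(clusters)

# Points 0.0, 0.4, 0.9 share one unit interval; 2.0 and 2.5 share another.
print(online_unit_clustering([0.0, 0.4, 0.9, 2.0, 2.5]))
```

Because assignments are irrevocable, an adversarial arrival order can force this greedy rule to use up to twice the optimal number of clusters; randomizing the placement of cluster windows is what lets the paper's algorithm beat that ratio.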
110.
Reza Samavi, Eric Yu, Thodoros Topaloglou 《Information Systems and E-Business Management》2009,7(2):171-198
Strategic reasoning about business models is an integral part of service design. In fast-moving markets, businesses must be able to recognize and respond strategically to disruptive change. They have to answer questions such as: What are the threats and opportunities in emerging technologies and innovations? How should they target customer groups? Who are their real competitors? How will competitive battles take shape? In this paper, we define a strategic modeling framework to help understand and analyze the goals, intentions, roles, and rationale behind strategic actions in a business environment. This understanding is necessary in order to improve existing services or design new ones. The key component of the framework is a strategic business model ontology for representing and analyzing business models and strategies, using the i* agent- and goal-oriented methodology as a basis. The ontology introduces a strategy layer that reasons about alternative strategies realized in the operational layer. The framework is evaluated using a retrospective example, from the literature, of disruptive technology in the telecommunication services sector.