Results by access type:
  Full text (fee-based): 476 articles
  Free: 37 articles
  Free (domestic): 1 article
Results by subject area (number of articles):
  Electrical engineering: 5
  General/interdisciplinary: 1
  Chemical industry: 87
  Metalworking: 15
  Machinery and instrumentation: 29
  Building science: 21
  Energy and power engineering: 38
  Light industry: 17
  Hydraulic engineering: 14
  Petroleum and natural gas: 12
  Radio and electronics: 47
  General industrial technology: 107
  Metallurgy: 11
  Nuclear technology: 9
  Automation technology: 101
Results by publication year (number of articles):
  2023: 8    2022: 12   2021: 28   2020: 19   2019: 35   2018: 37
  2017: 36   2016: 37   2015: 20   2014: 27   2013: 61   2012: 44
  2011: 31   2010: 17   2009: 23   2008: 10   2007: 13   2006: 8
  2005: 6    2004: 1    2002: 1    2001: 2    2000: 1    1999: 1
  1998: 4    1997: 2    1996: 3    1995: 1    1994: 2    1993: 3
  1991: 1    1990: 2    1989: 2    1988: 2    1984: 1    1983: 3
  1979: 2    1977: 2    1976: 3    1974: 2    1967: 1
A total of 514 results were found (search time: 31 ms).
1.
This paper presents an energy-efficient switching scheme for successive approximation register (SAR) analogue-to-digital converters (ADCs). The proposed scheme employs a charge-recycling method that keeps the capacitor arrays free of transitional energy between bit generations, except during the reset phase. Compared with the conventional switching scheme, the proposed scheme achieves a 100% saving in transitional energy when the reset phase is excluded. In addition, the configuration of a 10-bit SAR ADC shows that the proposed scheme reduces the capacitor area by 25% relative to the conventional switching scheme.
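For orientation, the SAR core around which any such switching scheme is built performs a bit-by-bit binary search driven by the comparator. The Python sketch below shows only that generic conversion loop; it does not model the paper's charge-recycling capacitor switching, and the function name, reference voltage, and resolution are illustrative assumptions.

```python
def sar_convert(vin, vref=1.0, n_bits=10):
    """Generic SAR ADC conversion: binary search on the comparator threshold.

    Models only the standard successive-approximation logic, not the
    energy-efficient capacitor-switching scheme proposed in the paper.
    """
    code = 0
    for bit in range(n_bits - 1, -1, -1):
        trial = code | (1 << bit)                 # tentatively set the current bit
        threshold = vref * trial / (1 << n_bits)  # DAC output for the trial code
        if vin >= threshold:                      # comparator decision
            code = trial                          # keep the bit
    return code


# Example: a 10-bit conversion of 0.3 V against a 1 V reference
print(sar_convert(0.3))  # ~307 out of 1024 codes
```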
2.
Today, air pollution, smoking, and the consumption of fatty acids and ready-made foods have exacerbated heart disease. Controlling the risk of such diseases can therefore prevent or reduce their incidence. The present study develops an integrated methodology combining Markov decision processes (MDP) and a genetic algorithm (GA) to control the risk of cardiovascular disease in patients with hypertension and type 1 diabetes. First, the efficiency of the GA is evaluated against the Grey Wolf Optimization (GWO) algorithm, and the superiority of the GA is established. Next, the MDP is employed to estimate the risk of cardiovascular disease. For this purpose, the model inputs are first determined by the GA using a validated micro-simulation model for cardiovascular disease screening developed at Tehran University of Medical Sciences, Iran. The model input factors are then defined accordingly, and from these inputs three risk-estimation models are identified. The results of these models support WHO guidelines that provide medication at a high discount to patients with high expected life-years (LYs). To develop the MDP methodology, policies should be adopted that perform well despite the difference between the risk model and the actual risk. Finally, a sensitivity analysis is conducted to study the behavior of the total medication cost as the model parameters change.
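As a rough illustration of the MDP component only, the sketch below runs value iteration on a hypothetical two-state (low-risk/high-risk) treatment model. Every state, action, transition probability, cost, and the discount factor is invented for illustration and is not taken from the study.

```python
import numpy as np

# Hypothetical 2-state cardiovascular-risk MDP: states = (low risk, high risk),
# actions = (no medication, medicate). All numbers are illustrative only.
P = np.array([
    [[0.9, 0.1], [0.95, 0.05]],   # transitions from "low risk", per action
    [[0.3, 0.7], [0.6, 0.4]],     # transitions from "high risk", per action
])                                 # P[s, a, s'] = transition probability
cost = np.array([
    [0.0, 1.0],    # per-step cost in "low risk": no medication vs. medication
    [5.0, 3.0],    # per-step cost in "high risk": untreated risk vs. medication
])
gamma = 0.97       # discount factor

V = np.zeros(2)
for _ in range(500):                                # value iteration
    Q = cost + gamma * np.einsum('san,n->sa', P, V)
    V = Q.min(axis=1)

policy = Q.argmin(axis=1)
print("optimal action per state:", policy)          # 0 = no medication, 1 = medicate
```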
3.
The Journal of Supercomputing - In recent years, the explosion of big data and the increase in main-memory capacity, on the one hand, and the need for faster data processing, on the other hand, have...
4.
The behavior of a steel shear wall (SSW) was determined using numerical and experimental approaches. The properties of the SSW and of a low-yield-point (LYP) steel shear wall (LSSW) were measured. The results reveal that the LSSW exhibits better performance than the SSW in both the elastic and inelastic zones. It is also concluded that the addition of CFRP (carbon-fiber-reinforced polymer) enhances the seismic parameters of the LSSW (stiffness, energy absorption, shear capacity, and overstrength). In addition, the stresses applied to the boundary frames are lower because of the post-buckling forces. The effect of the fiber angle was also studied and expressed as a mathematical equation.
5.
This paper presents a method for reconstructing unreliable spectral components of speech signals using the statistical distributions of the clean components. Our goal is to model the temporal patterns in the speech signal and to exploit correlations between speech features in the time and frequency domains simultaneously. In this approach, a hidden Markov model (HMM) is first trained on clean speech data to model the temporal patterns that appear in the sequences of spectral components. Using this model, and according to the probability of a noisy spectral component occurring in each state, a probability distribution for each noisy component is estimated. Then, by applying maximum a posteriori (MAP) estimation to these distributions, the final estimates of the unreliable spectral components are obtained. The proposed method is compared with a common missing-feature method based on probabilistic clustering of the feature vectors and with a state-of-the-art method based on sparse reconstruction. The experimental results show a significant improvement in recognition accuracy on a noise-polluted Persian corpus.
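To make the reconstruction idea concrete, the sketch below imputes one unreliable log-spectral component from per-state Gaussian models and state posteriors, using a simplified bounded, posterior-weighted estimate. This is a generic missing-feature stand-in, not the paper's exact HMM/MAP estimator, and all means, variances, posteriors, and the observed value are invented.

```python
import numpy as np

# Hypothetical per-state Gaussian models of one log-spectral component,
# e.g. learned from clean speech by an HMM (all numbers invented).
state_means = np.array([2.0, 5.5, 8.0])   # clean-component mean per HMM state
state_vars  = np.array([1.0, 0.8, 1.5])   # clean-component variance per HMM state
gamma       = np.array([0.1, 0.7, 0.2])   # state posteriors at the current frame

y_noisy = 7.0   # observed (unreliable) noisy value of the component

# Simplified bounded estimate: per state, take the Gaussian mode clipped by the
# noisy observation (in additive noise the clean log-spectrum should not exceed
# the noisy one), then combine the per-state estimates with the state posteriors.
per_state = np.minimum(state_means, y_noisy)
x_hat = float(np.dot(gamma, per_state))
print(f"reconstructed component: {x_hat:.2f}")
```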
6.
This paper proposes a novel multi-objective model for an unrelated parallel-machine scheduling problem that considers inherent uncertainty in processing times and due dates. The problem is characterized by non-zero ready times, sequence- and machine-dependent setup times, and secondary resource constraints for jobs. Each job can be processed only if its required machine and secondary resource (if any) are available at the same time. Finding an optimal solution to this complex problem in a reasonable time with exact optimization tools is prohibitive. This paper presents an effective multi-objective particle swarm optimization (MOPSO) algorithm to find a good approximation of the Pareto frontier, where total weighted flow time, total weighted tardiness, and total machine load variation are minimized simultaneously. The proposed MOPSO exploits new selection regimes for preserving global as well as personal best solutions. Moreover, a generalized dominance concept in a fuzzy environment is employed to find a locally Pareto-optimal frontier. The performance of the proposed MOPSO is compared against a conventional multi-objective particle swarm optimization (CMOPSO) algorithm on a number of randomly generated test problems. Statistical analyses of the effect of each algorithm in each objective space show that the proposed MOPSO outperforms the CMOPSO in terms of quality, diversity, and spacing metrics.
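For readers unfamiliar with the underlying metaheuristic, the sketch below is a plain single-objective particle swarm optimization loop on a toy continuous objective. It shows only the standard velocity/position update that MOPSO variants build on, not the proposed multi-objective selection regimes or the fuzzy dominance concept; the swarm size, coefficients, and objective are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                      # toy objective standing in for a scheduling cost
    return float(np.sum(x ** 2))

n_particles, dim, iters = 20, 5, 200
w, c1, c2 = 0.72, 1.49, 1.49        # inertia, cognitive, and social coefficients

pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()                                  # personal best positions
pbest_val = np.array([sphere(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()            # global best position

for _ in range(iters):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([sphere(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best cost found:", pbest_val.min())
```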
7.
8.
Most of the existing classification methods used for voice pathology assessment are built on labeled pathological and normal voice signals. This paper studies the problem of building a classifier from both labeled and unlabeled data. We propose a novel learning technique, called Partitioning and Biased Support Vector Machine Classification (PBSVM), which tries to utilize all the available data in two steps: (1) a new heuristic partition-based algorithm, which extracts high-quality pathological and normal samples from an unlabeled set, and (2) a more principled approach based on a biased formulation of the support vector machine, which is fairly robust to mislabeling and to the unbalanced-data problem. Experiments with wavelet-based energy features extracted from sustained vowels show that the new recognition scheme is highly feasible and significantly outperforms the baseline classical SVM classifier, especially when the labeled training data set is small.
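A biased SVM assigns different misclassification penalties to the two classes; as a rough approximation, the sketch below uses scikit-learn's per-class weights on synthetic feature vectors. The data, weights, and kernel choice are invented for illustration, and this is not the authors' PBSVM pipeline (in particular, the partitioning step is omitted).

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for wavelet-energy features: class 1 = pathological,
# class 0 = "normal + unlabeled" treated as negative. All data are invented.
X_pos = rng.normal(loc=2.0, scale=1.0, size=(40, 6))
X_neg = rng.normal(loc=0.0, scale=1.0, size=(200, 6))
X = np.vstack([X_pos, X_neg])
y = np.array([1] * len(X_pos) + [0] * len(X_neg))

# A biased SVM penalizes errors on the two classes differently; here this is
# approximated with per-class weights (a larger penalty on the labeled positives,
# a smaller one on the noisy negatives). The weights are illustrative.
clf = SVC(kernel="rbf", C=1.0, class_weight={1: 10.0, 0: 0.5})
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```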
9.
Communication overhead is the key obstacle to reaching hardware performance limits. The majority of it is software overhead, a significant portion of which is attributed to message copying. To reduce this copying overhead, we have devised techniques that do not require a received message to be copied in order to bind it to its final destination. Instead, a late-binding mechanism, which involves address translation and a dedicated cache, gives the consuming process/thread fast access to received messages. We have introduced two policies, namely Direct to Cache Transfer (DTCT) and lazy DTCT, which determine whether a message, once bound, needs to be transferred into the data cache. We have studied the proposed methods in simulation and have shown their effectiveness in reducing the time the consuming process takes to access message payloads.
10.
Data co-clustering refers to the problem of simultaneously clustering two data types. Typically, the data are stored in a contingency or co-occurrence matrix C whose rows and columns represent the two data types to be co-clustered. An entry C_ij of the matrix signifies the relation between the data type represented by row i and that represented by column j. Co-clustering derives sub-matrices from the larger data matrix by simultaneously clustering its rows and columns. In this paper, we present a novel graph-theoretic approach to data co-clustering. The two data types are modeled as the two vertex sets of a weighted bipartite graph. We then propose the Isoperimetric Co-clustering Algorithm (ICA), a new method for partitioning the bipartite graph. ICA requires only the solution of a sparse system of linear equations, instead of the eigenvalue or SVD problem used in the popular spectral co-clustering approach. Our theoretical analysis and extensive experiments on publicly available datasets demonstrate the advantages of ICA over other approaches in terms of quality, efficiency, and stability in partitioning the bipartite graph.
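The sketch below illustrates the kind of computation the abstract describes: a bipartite graph is built from a toy co-occurrence matrix, one vertex is grounded, a sparse linear system in the graph Laplacian is solved, and the resulting potentials are thresholded into two vertex groups. It follows a generic isoperimetric partitioning recipe (after Grady and Schwartz) rather than the authors' exact ICA, and the matrix and threshold rule are illustrative.

```python
import numpy as np
from scipy.sparse import csr_matrix, diags
from scipy.sparse.linalg import spsolve

# Toy co-occurrence matrix C: rows = one data type, columns = the other.
C = np.array([
    [5, 4, 0, 0],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [0, 0, 4, 5],
], dtype=float)

n_rows, n_cols = C.shape
n = n_rows + n_cols

# Weighted bipartite adjacency: row vertices 0..n_rows-1, column vertices after.
W = np.zeros((n, n))
W[:n_rows, n_rows:] = C
W[n_rows:, :n_rows] = C.T

d = W.sum(axis=1)                       # vertex degrees
L = (diags(d) - csr_matrix(W)).tocsr()  # sparse graph Laplacian

# Isoperimetric-style partitioning: ground one vertex, solve the reduced
# sparse linear system L0 x0 = d0, then threshold the potentials.
ground = 0                              # vertex whose potential is fixed to zero
keep = np.array([i for i in range(n) if i != ground])
L0 = L[keep][:, keep]
x0 = spsolve(L0.tocsc(), d[keep])

x = np.zeros(n)
x[keep] = x0                            # grounded vertex keeps potential 0
labels = (x > np.median(x)).astype(int) # simple median threshold into two groups
print("row-vertex labels   :", labels[:n_rows])
print("column-vertex labels:", labels[n_rows:])
```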