21.

The edge computing model offers a compelling platform for supporting scientific and real-time workflow-based applications at the edge of the network. However, scientific workflow scheduling and execution still face challenges such as response-time management and latency; in particular, the acquisition delay of servers deployed at the network edge must be handled in order to reduce the overall completion time of a workflow. Previous studies show that existing scheduling methods consider only the static performance of servers and ignore the impact of resource acquisition delay when scheduling workflow tasks. We propose a meta-heuristic algorithm that schedules scientific workflows and minimizes the overall completion time by properly managing acquisition and transmission delays. We carried out extensive experiments and evaluations based on commercial clouds and various scientific workflow templates. The proposed method performs approximately 7.7% better than the baseline algorithms, particularly in terms of the success rate of meeting the overall deadline constraint.
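As an illustration of how acquisition delay can enter a scheduling objective, here is a minimal greedy sketch (not the paper's meta-heuristic): each server charges a one-time acquisition delay the first time it is used. All task durations, delays, and server names are invented for the example, and workflow dependencies (the DAG structure) are ignored for brevity.

```python
def schedule(tasks, servers):
    """Greedy earliest-finish-time scheduling with one-time acquisition delay.

    tasks:   list of (name, duration)
    servers: dict mapping server name -> acquisition delay
    """
    ready_at = {s: None for s in servers}   # None = server not yet acquired
    plan = {}
    for name, duration in tasks:
        best = None
        for s, delay in servers.items():
            # Pay the acquisition delay only on first use of a server.
            start = delay if ready_at[s] is None else ready_at[s]
            finish = start + duration
            if best is None or finish < best[2]:
                best = (s, start, finish)
        s, start, finish = best
        ready_at[s] = finish
        plan[name] = (s, start, finish)
    makespan = max(f for _, _, f in plan.values())
    return plan, makespan

plan, makespan = schedule([("t1", 4), ("t2", 3), ("t3", 5)],
                          {"edge-a": 2, "edge-b": 6})
print(makespan)  # 11: t1 and t2 run on edge-a, t3 on edge-b
```

Ignoring acquisition delay here would mis-rank the servers for early tasks, which is exactly the effect the abstract argues existing schedulers overlook.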

22.
In this paper, a hybrid method is proposed for multi-channel electroencephalogram (EEG) signal compression. This new method takes advantage of two different compression techniques: fractal and wavelet-based coding. First, an effective decorrelation is performed through principal component analysis of the different channels to efficiently compress the multi-channel EEG data. Then, the decorrelated EEG signal is decomposed using the wavelet packet transform (WPT). Finally, fractal encoding is applied to the low-frequency coefficients of the WPT, and a modified wavelet-based coding is used for the remaining high-frequency coefficients. This new method provides improved compression results as compared to the wavelet and fractal compression methods.
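The decorrelation stage can be sketched as follows: this toy NumPy example (with invented random data) applies channel-wise PCA so that the transformed components are uncorrelated before per-component wavelet coding.

```python
import numpy as np

# Toy sketch of the PCA decorrelation step across EEG channels.
# The channel count, sample count, and random data are illustrative assumptions.
rng = np.random.default_rng(0)
n_channels, n_samples = 4, 256
base = rng.standard_normal(n_samples)
# Correlated multi-channel data: every channel shares a common component.
x = np.stack([base + 0.1 * rng.standard_normal(n_samples)
              for _ in range(n_channels)])

xc = x - x.mean(axis=1, keepdims=True)      # center each channel
cov = xc @ xc.T / (n_samples - 1)           # channel covariance matrix
_, vecs = np.linalg.eigh(cov)               # orthonormal principal directions
y = vecs.T @ xc                             # decorrelated components

new_cov = y @ y.T / (n_samples - 1)
off_diag = new_cov - np.diag(np.diag(new_cov))
print(np.abs(off_diag).max())               # ~0: components are uncorrelated
```

Because most of the shared energy collapses into one principal component, the remaining components carry far less information to encode, which is what makes the subsequent wavelet/fractal stages more effective.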
23.
One great challenge in wireless communication systems is to ensure reliable communications. Turbo codes are known for their strong capability to deal with transmission errors. In this paper, we present a novel turbo decoding scheme based on the soft-combining principle: our method improves decoding performance by using a soft-combining technique inside the turbo decoder. Working with the Max-Log-Maximum a Posteriori (Max-Log-MAP) turbo decoding algorithm over an Additive White Gaussian Noise (AWGN) channel with 16-ary Quadrature Amplitude Modulation (16QAM), simulation results show that the suggested solution is efficient and outperforms the conventional Max-Log-MAP algorithm in terms of Bit Error Rate (BER). The performance analysis is carried out in terms of BER by varying parameters such as the energy-per-bit to noise-power-spectral-density ratio ( \(\text {E}_{\text {b}}/\text {N}_{\text {o}}\) ) and the number of decoding iterations. We call our proposed solution Soft Combined Turbo Codes.
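The soft-combining intuition can be illustrated outside the turbo decoder: for independent noisy observations of the same BPSK symbol over AWGN, the per-observation log-likelihood ratios (LLRs) add, and deciding on the combined LLR lowers the BER. The sketch below is a simplified stand-in for the in-decoder combining; the noise level, bit count, and use of BPSK instead of 16QAM are assumptions for illustration.

```python
import numpy as np

# Soft combining of LLRs from two independent AWGN observations of the
# same BPSK bits; all simulation parameters are illustrative assumptions.
rng = np.random.default_rng(1)
n, sigma = 20000, 1.0
bits = rng.integers(0, 2, n)
s = 1 - 2 * bits                          # BPSK mapping: 0 -> +1, 1 -> -1
r1 = s + sigma * rng.standard_normal(n)   # first noisy observation
r2 = s + sigma * rng.standard_normal(n)   # second, independent observation

llr = lambda r: 2 * r / sigma**2          # channel LLR for BPSK over AWGN

single = (llr(r1) < 0).astype(int)                 # decide from one copy
combined = ((llr(r1) + llr(r2)) < 0).astype(int)   # soft-combined decision

ber_single = np.mean(single != bits)
ber_combined = np.mean(combined != bits)
print(ber_single, ber_combined)   # combining roughly halves the error rate
```

Inside a turbo decoder the combined quantities are the extrinsic LLRs exchanged between the constituent decoders, but the additive-LLR principle is the same.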
24.
Genetic algorithm (GA)-based direction-of-arrival (DOA) estimation is proposed using fourth-order cumulants (FOC) and the ESPRIT principle, resulting in a Multiple Invariance Cumulant ESPRIT algorithm. In existing FOC ESPRIT formulations, only one invariance is utilised to estimate DOAs; the unused multiple invariances (MIs) should be exploited simultaneously in order to improve estimation accuracy. In this paper, a fitness function based on a carefully designed cumulant matrix is developed which incorporates the MIs present in the sensor array. Better DOA estimation can be achieved by minimising this fitness function. Moreover, the effectiveness of both Newton's method and the GA for this optimisation problem is illustrated. Simulation results show that the proposed algorithm provides improved estimation accuracy compared to existing algorithms, especially in the case of low SNR, a small number of snapshots, closely spaced sources, and high signal and noise correlation. It is also observed that optimisation using Newton's method is more likely to converge to false local optima, yielding erroneous results, whereas GA-based optimisation is attractive due to its global optimisation capability.
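A generic real-coded GA of the kind used for such fitness minimisation can be sketched as follows. The paper's cumulant-matrix fitness is replaced here by a toy quadratic with a known minimum at [1, 2], and the population size, elitist selection, blend crossover, and mutation scale are all illustrative assumptions.

```python
import numpy as np

# Minimal real-coded GA: elitist selection, averaging crossover, Gaussian
# mutation. The fitness function is a toy stand-in with minimum at target.
rng = np.random.default_rng(2)
target = np.array([1.0, 2.0])

def fitness(x):
    return float(np.sum((x - target) ** 2))

pop = rng.uniform(-5, 5, size=(60, 2))            # random initial population
for _ in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    elite = pop[np.argsort(scores)[:20]]          # selection: keep best third
    pairs = rng.integers(0, 20, size=(40, 2))
    children = (elite[pairs[:, 0]] + elite[pairs[:, 1]]) / 2   # crossover
    children += 0.1 * rng.standard_normal(children.shape)      # mutation
    pop = np.vstack([elite, children])            # elitism: best survive intact

best = min(pop, key=fitness)
print(best)   # close to [1, 2]
```

Because the population explores many basins at once, the GA avoids the false local optima that trap a purely local method like Newton's, at the cost of more fitness evaluations.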
25.
The capacity of aqueous solutions to separate hydrogen sulfide (H2S) is improved by the addition of different materials. Moreover, the available equilibrium data and solubility-estimation equations for this gas are only valid for specific solutions and limited ranges of temperature and pressure. In this regard, a machine learning model based on the Support Vector Machine (SVM) algorithm is proposed and developed for mixtures containing different amines and ionic liquids to predict H2S solubility over wide ranges of temperature (298–434.5 K), pressure (13–9319 kPa), overall mass concentration (3.82–100%), and apparent molecular weight of the mixture (18.39–556.17 g/mol). The accuracy of the model was evaluated by regression analysis on calculated and experimental data that had not been used in training.
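As a rough stand-in for such a kernel regressor, the sketch below fits RBF-kernel ridge regression (a close cousin of SVM regression) to a synthetic toy "solubility" surface over the reported temperature and pressure ranges. The target function, hyper-parameters, and data are assumptions for illustration, not the paper's model or data.

```python
import numpy as np

# Kernel ridge regression with an RBF kernel as a simplified stand-in for SVR.
# Features: temperature (K) and pressure (kPa) over the ranges in the abstract.
rng = np.random.default_rng(3)
X = rng.uniform([298, 13], [434.5, 9319], size=(200, 2))
Xs = (X - X.mean(0)) / X.std(0)                 # standardise features
y = np.exp(-0.5 * Xs[:, 0]) + 0.3 * Xs[:, 1]    # invented smooth target

def rbf(a, b, gamma=1.0):
    """RBF (Gaussian) kernel matrix between row-vectors of a and b."""
    d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

K = rbf(Xs, Xs)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(K)), y)   # ridge-regularised fit
pred = K @ alpha
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(rmse)   # small training error on the smooth toy surface
```

A real study would, as the abstract notes, hold out experimental points unseen during training and report regression statistics on those rather than on the training set.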
26.
Hershey, J.E., Molnar, K., Hassan, A. Electronics Letters, 1992, 28(18): 1721-1722
The authors suggest that effective algorithms for spectrum search, such as those used for detecting spread spectrum signals, may be derived by selecting suboptimal algorithms and then recovering some of the lost efficacy through parallelisation methods. This thesis is motivated by considering a simple yet meaningful example of a spectrum search technique that exhibits what at first may seem to be counterintuitive behaviour.
27.
To comply with the stringent environmental regulations concerning fuel quality, the production of ultra-low-sulfur fuels is obligatory. Consequently, the removal of aromatics from fuels has become a serious issue, since the presence of aromatics in fuel deters ultra-low-sulfur fuel production. Researchers have therefore turned their attention to the dearomatization of fuels, which improves fuel quality tremendously. Here, solvent extraction with acetonitrile was performed to dearomatize a feedstock sample containing 20.1% aromatics and 166 ppm sulfur. The extraction was performed at low temperature and ambient atmospheric pressure. The aromatic contents were determined via HPLC, while ASTM methods were employed to determine the other parameters. The results showed a minimum yield of 72%, an aromatic content of 8.6%, a cetane index of 58–64, a sulfur content of 73.2 ppm, a viscosity of 5.4, RI 1.4535, an aniline point of 82.15, and a specific gravity of 0.824–0.812 with API 40.32–42.88 and a flash point of 70–78°C. The boiling range of the produced diesel-fraction raffinate (172–373°C), corresponding to C8–C24 cuts, renders it a potential candidate for other petrochemical applications.
28.
A simple selective-erasure forward error correction (FEC) technique for differential phase shift keyed (DPSK) data is presented. The method provides a modest coding gain and requires very little overhead and computation. It is very similar to the Wagner code, and may be useful as a stand-alone technique in some applications and as a preconditioner to more sophisticated FEC techniques in others.
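The Wagner-code idea that the technique resembles can be sketched directly: hard-decide every bit of a single parity-check codeword and, if the parity check fails, flip ("erase and correct") the least reliable position. The soft values below are invented BPSK-style samples rather than real DPSK data.

```python
import numpy as np

def wagner_decode(soft):
    """Wagner decoding of a single even-parity-check codeword.

    soft: array of soft values; sign gives the bit (negative -> 1),
    magnitude gives the reliability of that decision.
    """
    hard = (soft < 0).astype(int)
    if hard.sum() % 2 != 0:                  # even-parity check fails
        hard[np.argmin(np.abs(soft))] ^= 1   # flip the least-reliable bit
    return hard

# Codeword 0,1,1,0 (even parity); the third sample is noisy and low-confidence,
# so its hard decision is wrong, but parity plus reliability recovers it.
soft = np.array([+0.9, -1.1, +0.1, +0.8])
decoded = wagner_decode(soft)
print(decoded)   # -> [0 1 1 0]
```

This is why the overhead is so small: one parity bit per block and a single argmin over the reliabilities, with no iterative decoding.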
29.
Throughout the subsurface of the Arabian Peninsula, the approximately 460 ft-thick Devonian Jauf Formation generally consists of well-compacted, low-porosity sandstones and shales, but it also includes friable and highly porous sandstones which form significant gas and condensate reservoir intervals. The mineralogy and pore properties of these reservoir intervals at the Hawiyah field (part of the giant Ghawar structure) were studied by integrating petrographic data with petrophysical measurements of reservoir sandstone samples.
The reservoir sandstones are mainly composed of quartz arenites containing small amounts of altered potassium feldspar grains, authigenic illite and chlorite. Based on the pore types, which reflect the habits of the intergranular clays, three reservoir sandstone types have been defined: Type A, characterized by macroporosity; Type B, with microporosity; and Type C, with combined laminations of Types A and B. The dominance of pore-lining clay (as in Type A) or pore-filling clay (as in Type B) is the principal factor controlling the petrophysical properties of the samples. Types A and C sandstones contain macropores, but irreducible water saturation is high (25 to 45%) compared to clean samples elsewhere because of the presence of micropores associated with clay. In Type B sandstones, the irreducible water saturation is commonly greater than 40% because all the pore spaces are in the microporosity range. The irreducible water saturation in Type B sandstones increases rapidly as porosity decreases: when porosity is less than 10%, the corresponding permeability is 0.2 mD, but no economic production can be expected because water saturation is as high as 100%. In the producing intervals, authigenic clays result in low electrical resistivity due to high water saturation; nevertheless, water-free gas is produced.
30.
A scenario-based reliability analysis approach for component-based software (total citations: 1; self-citations: 0; citations by others: 1)
This paper introduces a reliability model and a reliability analysis technique for component-based software. The technique is named Scenario-Based Reliability Analysis (SBRA). Using scenarios of component interactions, we construct a probabilistic model named the Component-Dependency Graph (CDG). Based on the CDG, a reliability analysis algorithm is developed to compute the reliability of the system as a function of the reliabilities of its architectural constituents. An extension of the proposed model and algorithm is also developed for distributed software systems. The proposed approach has the following benefits: 1) it analyzes the impact of variations and uncertainties in the reliability of individual components, subsystems, and links between components on the overall reliability estimate of the software system, which is particularly useful when the system is built partially or fully from existing off-the-shelf components; 2) it is suitable for analyzing the reliability of distributed software systems because it incorporates link and delivery-channel reliabilities; 3) it identifies critical components, interfaces, and subsystems, and supports investigating the sensitivity of the application's reliability to these elements; and 4) it is applicable early in the development lifecycle, at the architecture level, where early detection of the architectural elements that most affect overall reliability helps in allocating resources in later development phases.
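In the spirit of the CDG, the sketch below propagates reliabilities through a toy component-dependency graph: each node carries a component reliability, each edge a transition probability and a link reliability, and the system reliability is the probability-weighted product along paths from entry to exit. All numbers and component names are invented for illustration.

```python
# Toy component-dependency graph: component reliabilities per node,
# (successor, transition probability, link reliability) per edge.
rel = {"A": 0.99, "B": 0.98, "C": 0.97, "Exit": 1.0}
edges = {
    "A": [("B", 0.6, 0.999), ("C", 0.4, 0.995)],
    "B": [("Exit", 1.0, 1.0)],
    "C": [("Exit", 1.0, 1.0)],
    "Exit": [],
}

def system_reliability(node):
    """Expected reliability of all scenario paths starting at `node`."""
    if not edges[node]:                      # terminal node
        return rel[node]
    return rel[node] * sum(p * link_rel * system_reliability(nxt)
                           for nxt, p, link_rel in edges[node])

R = system_reliability("A")
print(round(R, 6))   # probability-weighted reliability from entry to exit
```

Re-running this computation while perturbing a single entry of `rel` or `edges` is exactly the kind of sensitivity analysis the abstract describes for locating critical components and links.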