1.
This paper presents a novel No-Reference Video Quality Assessment (NR-VQA) model that utilizes proposed 3D steerable wavelet transform-based Natural Video Statistics (NVS) features as well as human perceptual features. Additionally, we propose a novel two-stage regression scheme that significantly improves the overall performance of quality estimation. In the first stage, transform-based NVS and human perceptual features are separately passed through the proposed hybrid regression scheme: Support Vector Regression (SVR) followed by polynomial curve fitting. The two visual quality scores predicted in the first stage are then used as features for a second, analogous stage, which predicts the final quality scores of distorted videos through score-level fusion. Extensive experiments were conducted on five authentic-distortion and four synthetic-distortion databases. Experimental results demonstrate that the proposed method outperforms other published state-of-the-art benchmark methods on the synthetic-distortion databases and is among the top performers on the authentic-distortion databases. The source code is available at https://github.com/anishVNIT/two-stage-vqa.
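The two-stage scheme described above can be sketched as follows. All data, the polynomial degree, and the SVR settings (`C`, `gamma`, `epsilon`) are illustrative assumptions, not the paper's actual features or hyperparameters.

```python
# Hedged sketch of a two-stage hybrid regression: SVR followed by polynomial
# curve fitting in stage 1, and the same hybrid fusing the two stage-1 scores
# in stage 2. Synthetic stand-ins replace the NVS/perceptual features and the
# subjective quality labels.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

def hybrid_regress(X_train, y_train, X_test):
    """SVR prediction refined by a polynomial fit (degree 3 is an assumption)."""
    svr = SVR(kernel="rbf", C=10.0, gamma=1.0, epsilon=0.01).fit(X_train, y_train)
    coeffs = np.polyfit(svr.predict(X_train), y_train, deg=3)
    return np.polyval(coeffs, svr.predict(X_test))

n = 200
mos = rng.uniform(0.0, 1.0, n)                                 # stand-in quality labels
nvs = np.c_[mos + rng.normal(0, 0.05, n), rng.normal(size=n)]  # "NVS" features
per = np.c_[mos + rng.normal(0, 0.08, n), rng.normal(size=n)]  # "perceptual" features
tr, te = slice(0, 150), slice(150, n)

# Stage 1: one quality score per feature set.
s1 = hybrid_regress(nvs[tr], mos[tr], nvs[te])
s2 = hybrid_regress(per[tr], mos[tr], per[te])

# Stage 2: the two predicted scores become features for score-level fusion.
stage2_train = np.c_[hybrid_regress(nvs[tr], mos[tr], nvs[tr]),
                     hybrid_regress(per[tr], mos[tr], per[tr])]
final = hybrid_regress(stage2_train, mos[tr], np.c_[s1, s2])
print(final.shape)  # (50,)
```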
2.
Reliable prediction of flooding conditions is needed for sizing and operating packed extraction columns. Owing to the complex interplay of physicochemical properties, operational parameters, and packing-specific properties, it is challenging to develop accurate semi-empirical or rigorous models with a wide validity range, and state-of-the-art models may therefore fail to predict flooding accurately. To overcome this problem, a data-driven model based on Gaussian processes is developed to predict flooding in packed liquid-liquid and high-pressure extraction columns. The optimized Gaussian process for the liquid-liquid extraction column yields an average absolute relative error (AARE) of 15.23 %, whereas the algorithm for the high-pressure extraction column yields an AARE of 13.68 %. Both algorithms can precisely predict flooding curves for different packing geometries and chemical systems.
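As a rough illustration of the approach (not the authors' dataset or kernel choice), a Gaussian-process regressor can be fitted to synthetic operating-condition data and scored with the AARE metric quoted above:

```python
# Sketch: GP regression for a flooding-type response plus the AARE metric.
# The three inputs and the linear response are placeholders, not real
# packing/physicochemical data; the RBF + white-noise kernel is an assumption.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(80, 3))      # e.g. load, density ratio, packing size
y = 1.0 + X @ np.array([0.5, -0.3, 0.8])     # stand-in flooding capacity (kept > 0)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(1e-4), normalize_y=True)
gp.fit(X[:60], y[:60])
pred = gp.predict(X[60:])

# Average absolute relative error, in percent, on the held-out samples.
aare = float(np.mean(np.abs((pred - y[60:]) / y[60:]))) * 100.0
print(f"AARE = {aare:.2f} %")
```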
3.
Data mining techniques have been successfully utilized in different applications of significant fields, including medical research. Despite the wealth of data available within health-care systems, practical analysis tools for discovering hidden relationships and trends in those data are lacking. The complexity of medical data, which is unfavorable for most models, poses a considerable challenge in prediction. The ability of a model to perform accurately and efficiently in disease diagnosis is extremely significant: the model must fit the data well, so that learning from previous data is most efficient and the diagnosis of the disease is highly accurate. This work is motivated by the limited number of regression analysis tools for multivariate counts in the literature. We propose two regression models for count data based on flexible distributions, namely the multinomial Beta-Liouville and the multinomial scaled Dirichlet, and evaluate them on the problem of disease diagnosis. Performance is evaluated by prediction accuracy, which depends on the nature and complexity of the dataset. Our results show the efficiency of the two proposed regression models: the prediction performance of both is competitive with other regression models previously used for count data and with the best results in the literature.
4.
Traditionally, in supervised machine learning, a significant part of the available data (usually 50%-80%) is used for training and the rest for validation. In many problems, however, the data are highly imbalanced across classes or do not cover the feasible data space well, which in turn creates problems in the validation and usage phases. In this paper, we propose a technique for synthesizing feasible and likely data samples in close vicinity to the actual samples, specifically for the less represented (minority) classes, in order to balance the classes and boost performance both overall and in terms of the confusion matrix. This also has implications for the so-called fairness of machine learning. The method is generic and can be applied with different base algorithms, for example, support vector machines, k-nearest neighbour classifiers, deep neural networks, rule-based classifiers, decision trees, and so forth. The results demonstrate that (a) significantly more balanced (and fair) classification results can be achieved and (b) both the overall performance and the per-class performance measured by the confusion matrix can be boosted. In addition, this approach can be very valuable when the amount of labelled data actually available is small, which is itself one of the problems of contemporary machine learning.
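The core synthesis idea, generating new samples in the close vicinity of real minority-class samples until the classes balance, can be sketched as below; the Gaussian-jitter rule and its noise level are assumptions, not the paper's exact procedure.

```python
# Minimal sketch: oversample each minority class by jittering randomly chosen
# real samples of that class, so synthetic points stay near the actual data.
import numpy as np

def balance_by_synthesis(X, y, noise=0.05, seed=0):
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    X_out, y_out = [X], [y]
    for c, n in zip(classes, counts):
        need = target - n
        if need > 0:
            anchors = X[y == c][rng.integers(0, n, need)]          # real anchors
            X_out.append(anchors + rng.normal(0, noise, anchors.shape))  # jitter
            y_out.append(np.full(need, c))
    return np.vstack(X_out), np.concatenate(y_out)

X = np.random.default_rng(1).normal(size=(100, 2))
y = np.array([0] * 90 + [1] * 10)          # highly imbalanced: 90 vs 10
Xb, yb = balance_by_synthesis(X, y)
print(np.bincount(yb))  # [90 90]
```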
5.
Clinical narratives such as progress summaries, lab reports, surgical reports, and other narrative texts contain key biomarkers of a patient's health. Evidence-based preventive medicine needs accurate semantic and sentiment analysis to extract and classify medical features as input to appropriate machine learning classifiers. However, the traditional approach of using a single classifier is limited by the need for dimensionality reduction techniques, statistical feature correlation, and a faster learning rate, and by the lack of consideration of the semantic relations among features. Extracting semantic and sentiment-based features from clinical text and combining multiple classifiers into an ensemble intelligent system therefore overcomes many of these limitations and provides a more robust prediction outcome. The selection of an appropriate approach and its inter-parameter dependencies become key to the success of the ensemble method. This paper proposes a hybrid knowledge and ensemble learning framework for predicting venous thromboembolism (VTE) diagnosis, consisting of the following components: a VTE ontology, a semantic-extraction and risk-factor sentiment-assessment framework, and an ensemble classifier. A component-based analysis approach was adopted for evaluation on a dataset of 250 clinical narratives, where the framework achieved the following results with and without semantic extraction and sentiment assessment of risk factors, respectively: a precision of 81.8% and 62.9%, a recall of 81.8% and 57.6%, an F-measure of 81.8% and 53.8%, and a receiver operating characteristic of 80.1% and 58.5% in identifying cases of VTE.
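The ensemble-classifier component alone can be sketched as a hard-voting combination of a few standard base learners; the base-learner choice and the synthetic features (standing in for the extracted semantic/sentiment risk-factor scores) are assumptions, not the paper's pipeline.

```python
# Sketch: majority-vote ensemble over 250 synthetic "narratives", scored with
# the precision/recall/F-measure metrics reported in the abstract.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, precision_score, recall_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=250, n_features=8, random_state=0)
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("nb", GaussianNB()),
                ("dt", DecisionTreeClassifier(random_state=0))],
    voting="hard",
).fit(X[:200], y[:200])

pred = ensemble.predict(X[200:])
prec = precision_score(y[200:], pred)
rec = recall_score(y[200:], pred)
f1 = f1_score(y[200:], pred)
print(f"precision={prec:.3f} recall={rec:.3f} F={f1:.3f}")
```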
6.
The main challenges in developing data-based models lie in the high-dimensional, possibly incomplete observations stored from industrial processes. The variational autoencoder (VAE), one of the deep learning methods, has been applied to extract useful information or features from high-dimensional datasets. Because the existing VAE is unsupervised, an output-relevant VAE is proposed in this work for extracting output-relevant features. Using the correlation between process variables, a corresponding weight is assigned to each input variable. The similarity between the stored samples and a query sample is then evaluated with the symmetric Kullback-Leibler (SKL) divergence, and the data relevant for modeling are selected according to its values. Subsequently, Gaussian process regression (GPR) is utilized to establish a model between the input and the corresponding output at the query sample. In addition, because missing data are common in the output dataset, the parameters and the missing data of the GPR are estimated simultaneously. A practical industrial debutanizer process is used to illustrate the effectiveness of the proposed method.
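The SKL-based similarity step can be made concrete for diagonal Gaussians, the form a VAE encoder outputs (a mean and a variance per latent dimension); a query sample would keep only stored samples whose divergence is small. The two-dimensional latent codes below are illustrative.

```python
# Symmetric KL divergence between two diagonal Gaussians, as a sketch of the
# sample-selection criterion. Symmetrization as the mean of the two directed
# KL terms is an assumption; the paper's exact form may differ.
import numpy as np

def skl_divergence(mu1, var1, mu2, var2):
    """0.5 * (KL(p||q) + KL(q||p)) for diagonal Gaussians p and q."""
    kl_pq = 0.5 * np.sum(np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1)
    kl_qp = 0.5 * np.sum(np.log(var1 / var2) + (var2 + (mu2 - mu1) ** 2) / var1 - 1)
    return 0.5 * (kl_pq + kl_qp)

mu_q, var_q = np.zeros(2), np.ones(2)             # latent code of the query sample
stored = [(np.zeros(2), np.ones(2)),              # identical  -> divergence 0
          (np.array([3.0, 3.0]), np.ones(2))]     # distant    -> large divergence
divs = [skl_divergence(mu_q, var_q, m, v) for m, v in stored]
print(divs[0], divs[1])  # 0.0 9.0
```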
7.
Although greedy algorithms are highly efficient, they often yield suboptimal solutions to the ensemble pruning problem because their exploration of the search space is largely limited. Another marked defect of almost all existing ensemble pruning algorithms, including greedy ones, is that they simply abandon all classifiers that fail in the competition of ensemble selection, causing a considerable waste of useful resources and information. Motivated by these observations, a greedy Reverse Reduce-Error (RRE) pruning algorithm incorporating a subtraction operation is proposed in this work. The RRE algorithm makes the best of the defeated candidate networks: the Worst Single Model (WSM) is chosen, and its votes are subtracted from the votes made by the components selected into the pruned ensemble, the rationale being that in most cases the WSM is likely to misestimate the test samples. Unlike the classical Reduce-Error (RE) algorithm, the near-optimal solution is produced based on the pruned error of all available sequential subensembles, and the backfitting step of RE is replaced in RRE with the selection of a WSM. Moreover, the problem of ties can be resolved more naturally with RRE. Finally, soft voting is employed when testing the RRE algorithm. The performance of the RE and RRE algorithms and of two baseline methods, namely selecting the Best Single Model (BSM) in the initial ensemble and retaining all member networks of the initial ensemble (ALL), is evaluated on seven benchmark classification tasks under different initial ensemble setups. The results of the empirical investigation show the superiority of RRE over the other three ensemble pruning algorithms.
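A toy rendering of the vote-subtraction step (all vote counts are invented; model training and WSM selection are omitted):

```python
# Subtracting the Worst Single Model's votes from the pruned ensemble's tally.
import numpy as np

# Per-sample class-vote tallies of the pruned ensemble (rows: test samples).
votes_pruned = np.array([[4, 2],
                         [1, 4],
                         [2, 3]])
# Votes cast by the Worst Single Model among the discarded candidates.
votes_wsm = np.array([[1, 0],
                      [0, 1],
                      [1, 0]])

adjusted = votes_pruned - votes_wsm   # remove the likely-wrong votes
pred = adjusted.argmax(axis=1)        # final class decisions
print(pred)  # [0 1 1]
```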
8.
Bile acids have been reported as important cofactors promoting human and murine norovirus (NoV) infections in cell culture. The underlying mechanisms are not resolved. Through the use of chemical shift perturbation (CSP) NMR experiments, we identified a low-affinity bile acid binding site of a human GII.4 NoV strain. Long-timescale MD simulations reveal the formation of a ligand-accessible binding pocket of flexible shape, allowing the formation of stable viral coat protein–bile acid complexes in agreement with experimental CSP data. CSP NMR experiments also show that this mode of bile acid binding has a minor influence on the binding of histo-blood group antigens and vice versa. STD NMR experiments probing the binding of bile acids to virus-like particles of seven different strains suggest that low-affinity bile acid binding is a common feature of human NoV and should therefore be important for understanding the role of bile acids as cofactors in NoV infection.
9.
Solubility is one of the most indispensable physicochemical properties determining the compatibility of the components of a blending system, and the solubility of carbon dioxide in polymers has been a research focus as a significant application of green chemistry. To replace costly and time-consuming experiments, a novel decision-tree-based solubility prediction model using the stochastic gradient boosting algorithm was proposed to predict CO2 solubility in 13 different polymers, based on 515 published experimental data points. The results indicate that the proposed ensemble model is an effective method for predicting CO2 solubility in various polymers, with highly satisfactory performance and high efficiency, producing more accurate outputs than other methods such as machine learning schemes and an equation-of-state approach.
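A minimal sketch of the modelling step: scikit-learn's GradientBoostingRegressor with `subsample < 1` is one common realisation of stochastic gradient boosting. The inputs and response below are synthetic stand-ins, not the 515 published data points.

```python
# Stochastic gradient boosting on a smooth synthetic "solubility" response.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
X = rng.uniform(size=(300, 3))                  # e.g. T, P, a polymer descriptor
y = 0.2 * X[:, 0] + 1.5 * X[:, 1] * X[:, 2]     # stand-in solubility relationship

# subsample=0.8 draws a random 80% of rows per boosting stage ("stochastic").
model = GradientBoostingRegressor(subsample=0.8, random_state=0).fit(X[:250], y[:250])
r2 = model.score(X[250:], y[250:])
print(f"test R^2 = {r2:.2f}")
```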
10.
In this study, the uniaxial compressive strength (UCS), unit weight (UW), Brazilian tensile strength (BTS), Schmidt hardness (SHH), Shore hardness (SSH), point load index (Is50), and P-wave velocity (Vp) properties of rocks were determined. To predict the UCS, simple regression (SRA), multiple regression (MRA), artificial neural network (ANN), adaptive neuro-fuzzy inference system (ANFIS), and genetic expression programming (GEP) models were utilized. The predicted UCS values were compared with the measured values in various graphs, and the datasets modeled with the different methods were compared with one another. Using the performance index PIat to determine the best-performing method, MRA proved the most successful by a small margin: the mean PIat of 2.46 on the testing dataset suggests the superiority of MRA, while the corresponding values are 2.44, 2.33, and 2.22 for the GEP, ANFIS, and ANN techniques, respectively. The results indicate that MRA can predict the UCS of rocks with higher capacity than the other methods. According to the performance index assessment, the weakest of the nine models is P7, while the most successful are P2, P9, and P8, respectively.
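The winning MRA model is, in essence, a least-squares fit of UCS on the index properties; here is a minimal sketch with synthetic stand-ins for the six predictors (real inputs would be UW, BTS, SHH, SSH, Is50 and Vp, with measured UCS as the target).

```python
# Multiple regression of UCS on six synthetic index properties; the linear
# coefficients and sample sizes are invented for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
X = rng.uniform(size=(60, 6))                       # six index properties
ucs = X @ np.array([20, 15, 10, 8, 30, 12]) + 5.0   # stand-in UCS relationship

mra = LinearRegression().fit(X[:45], ucs[:45])      # fit on 45 samples
r2 = mra.score(X[45:], ucs[45:])                    # test on the remaining 15
print(f"R^2 = {r2:.4f}")  # ~1.0 on this noise-free stand-in
```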
Copyright©北京勤云科技发展有限公司  京ICP备09084417号