91.
In reliability analysis, the stress-strength model is often used to describe the life of a component that has a random strength (X) and is subjected to a random stress (Y). In this paper, we consider the problem of estimating the reliability R = P[Y < X] when stress and strength are independent and both follow the exponentiated Pareto distribution. The maximum likelihood estimator of the stress-strength reliability is derived under simple random sampling, ranked set sampling, and median ranked set sampling. Four different reliability estimators are obtained under median ranked set sampling: two for the cases where strength and stress both have an odd or both have an even set size, and two for the cases where one has an odd set size and the other an even set size. The suggested estimators are compared with their simple-random-sampling competitors via a simulation study. The study reveals that the reliability estimates based on ranked set sampling and median ranked set sampling are more efficient than their simple-random-sampling counterparts. In general, the estimates based on median ranked set sampling are smaller than the corresponding estimates under ranked set sampling and simple random sampling.
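As a quick sanity check on the quantity being estimated, R = P[Y < X] can be approximated by Monte Carlo using inverse-CDF sampling from the exponentiated Pareto distribution F(x) = (1 - (1 + x)^(-λ))^θ. The sketch below uses function names of my own (the paper's estimators are likelihood-based under ranked-set designs, not Monte Carlo) and exploits the fact that when both variables share the same λ, R has the closed form θ_X/(θ_X + θ_Y):

```python
import random

def rand_ep(theta, lam, rng):
    # inverse-CDF draw from the exponentiated Pareto distribution:
    # F(x) = (1 - (1 + x)**(-lam))**theta for x > 0
    u = rng.random()
    return (1.0 - u ** (1.0 / theta)) ** (-1.0 / lam) - 1.0

def estimate_R(theta_x, theta_y, lam, n=200_000, seed=1):
    # crude Monte Carlo estimate of R = P[Y < X]
    rng = random.Random(seed)
    hits = sum(rand_ep(theta_y, lam, rng) < rand_ep(theta_x, lam, rng)
               for _ in range(n))
    return hits / n

# with a common lam, R has the closed form theta_x / (theta_x + theta_y)
print(estimate_R(theta_x=2.0, theta_y=1.0, lam=1.5))  # close to 2/3
```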
92.
In this article, a new generalization of the inverse Lindley distribution is introduced based on the Marshall-Olkin family of distributions. We call the new distribution the generalized Marshall-Olkin inverse Lindley distribution; it offers more flexibility for modeling lifetime data and includes the inverse Lindley and Marshall-Olkin inverse Lindley distributions as special cases. Essential properties of the new distribution are investigated, including the quantile function, ordinary moments, incomplete moments, moments of residual life, and stochastic ordering. Maximum likelihood estimation is considered under complete samples, Type-I censoring, and Type-II censoring, and maximum likelihood estimators as well as approximate confidence intervals for the population parameters are derived. A comprehensive simulation study assesses the performance of the estimates in terms of their biases and mean square errors. The utility of the generalized Marshall-Olkin inverse Lindley model is illustrated on two real data sets, where it produces better fits than the power Lindley, extended Lindley, alpha power transmuted Lindley, alpha power extended exponential, and Lindley distributions.
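For illustration, the base Marshall-Olkin tilt of the inverse Lindley CDF can be sketched as below (function names are my own, and the paper's *generalized* family involves an additional shape parameter not modeled here). Setting the tilt parameter α = 1 recovers the plain inverse Lindley distribution:

```python
import math

def inv_lindley_cdf(x, theta):
    # CDF of the inverse Lindley distribution (x > 0, theta > 0)
    return (1.0 + theta / ((1.0 + theta) * x)) * math.exp(-theta / x)

def mo_inv_lindley_cdf(x, theta, alpha):
    # Marshall-Olkin tilt of a base CDF F: G(x) = F / (alpha + (1 - alpha) * F)
    f = inv_lindley_cdf(x, theta)
    return f / (alpha + (1.0 - alpha) * f)
```

The transform preserves monotonicity and the 0-to-1 limits of the base CDF, so varying α reshapes the distribution without breaking its validity.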
93.
Owing to their ability to process large quantities of high-dimensional data, machine learning models have been applied in many settings, such as pattern recognition, classification, spam filtering, data mining, and forecasting. K-Nearest Neighbor (KNN), an outstanding machine learning algorithm, has been widely used in different situations, yet its application to selecting qualified applicants for funding is almost new. The major difficulty lies in accurately determining the importance of attributes. In this paper, we propose a Feature-weighted Gradient Descent K-Nearest Neighbor (FGDKNN) method to classify funding applicants into two classes: approved and not approved. FGDKNN uses a gradient descent learning algorithm to update the feature weights iteratively by minimizing the error ratio, so that the importance of attributes is described more accurately. We evaluate FGDKNN on Beijing Innofund data. The results show that FGDKNN performs about 23%, 20%, 18%, and 15% better than KNN, SVM, DT, and ANN, respectively. Moreover, FGDKNN converges quickly under different training scales and performs well under different settings.
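A toy version of feature-weighted KNN illustrates the idea: each feature gets a weight in the distance metric, and the weights are adjusted to reduce the leave-one-out error ratio. The sketch below uses a crude accept-if-better nudge in place of the paper's gradient descent update, and all names are hypothetical:

```python
from collections import Counter

def weighted_knn_predict(X_train, y_train, x, w, k=3):
    # k-NN vote using a per-feature weighted squared Euclidean distance
    order = sorted(range(len(X_train)), key=lambda i: sum(
        w[j] * (X_train[i][j] - x[j]) ** 2 for j in range(len(x))))
    votes = Counter(y_train[i] for i in order[:k])
    return votes.most_common(1)[0][0]

def update_weights(X, y, w, k=3, lr=0.1):
    # crude substitute for the paper's gradient descent: nudge each feature
    # weight and keep the nudge only if the leave-one-out error ratio drops
    def err(wv):
        wrong = sum(
            weighted_knn_predict(X[:i] + X[i + 1:], y[:i] + y[i + 1:],
                                 X[i], wv, k) != y[i]
            for i in range(len(X)))
        return wrong / len(X)
    base = err(w)
    for j in range(len(w)):
        trial = list(w)
        trial[j] = w[j] + lr
        e = err(trial)
        if e < base:
            w, base = trial, e
    return w
```

On data where only the first feature is informative, the update raises that feature's weight relative to the noise feature.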
94.
Host cardinality estimation is an important research field in network management and network security. Estimating host cardinality with an array of linear estimators is a common approach, but existing algorithms do not take memory footprint into account when selecting the number of estimators used by each host. This paper analyzes the relationship between memory occupancy and estimation accuracy and compares the effects of different parameters on accuracy. Cardinality estimation is a randomized algorithm, so there is a deviation between the estimated results and the actual cardinalities. The deviation is affected by systematic factors, such as the random parameters inherent in the linear estimator and the random functions used to map a host to different linear estimators. These random factors cannot be reduced by merging multiple estimators, and existing algorithms cannot remove the deviation they cause. In this paper, we treat the estimation deviation as a random variable and propose a sampling method, denoted the linear estimator array step sampling algorithm (L2S), to reduce its influence. L2S improves the accuracy of the estimated cardinalities by estimating and removing the expected value of the random deviation. The estimator-array-based cardinality estimation algorithm is also computationally intensive and takes a long time to process high-speed network data in a serial environment; to solve this problem, we propose a method to port it to the Graphics Processing Unit (GPU). Experiments on real-world high-speed network traffic show that L2S reduces the absolute bias by more than 22% on average, with extra processing time of less than 61 milliseconds on average.
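The linear estimators referred to here are classic linear-counting bitmaps: hash each of a host's contacts into an m-bit array and estimate the cardinality as -m·ln(V), where V is the fraction of bits still zero. A minimal sketch (class and method names are my own, and the L2S sampling correction itself is not reproduced):

```python
import math

class LinearEstimator:
    # classic linear counting: hash items into an m-bit map and estimate the
    # number of distinct items as -m * ln(V), where V = fraction of zero bits
    def __init__(self, m, seed=0):
        self.m = m
        self.seed = seed
        self.bits = [0] * m

    def add(self, item):
        self.bits[hash((self.seed, item)) % self.m] = 1

    def estimate(self):
        zeros = self.bits.count(0)
        if zeros == 0:
            return float('inf')  # bitmap saturated; m was too small
        return -self.m * math.log(zeros / self.m)
```

Because only bit positions are stored, duplicates never change the estimate, which is what makes the structure suitable for per-host cardinality counting.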
95.
Single image super-resolution (SISR) is an important research topic in computer vision and image processing. With the rapid development of deep neural networks, many image super-resolution models have emerged. Compared with traditional SISR methods, deep learning-based methods can perform super-resolution from a single image, and among them, SISR based on generative adversarial networks (GANs) has achieved state-of-the-art visual performance relative to methods using traditional convolutional neural networks. In this review, we first examine the challenges faced by SISR and introduce common datasets and evaluation metrics. We then review the improved network structures and loss functions of GAN-based perceptual SISR and analyze the advantages and disadvantages of different networks through multiple comparative experiments. Finally, we summarize the paper and discuss future development trends of GAN-based perceptual SISR.
96.
Cyberattacks on Industrial Control Systems (ICS) have recently been increasing and are made more intelligent by advancing technologies; as a result, cybersecurity for such systems is attracting attention. As a core element of control devices, the Programmable Logic Controller (PLC) in an ICS carries out on-site control. A cyberattack on a PLC can damage the overall ICS, with Stuxnet and Duqu as the most representative cases. Cybersecurity for PLCs is therefore considered essential, and many researchers analyze PLC vulnerabilities as part of preemptive efforts against attacks. In this study, a vulnerability analysis was conducted on the XGB PLC. Security vulnerabilities were identified by analyzing the network protocols and memory structure of the PLC and were then used to launch a replay attack, a memory modulation attack, and FTP/Web service account theft to verify the findings. These attacks were shown to be able to cause the PLC to malfunction or disable it, and the identified vulnerabilities were documented.
97.
Neural Machine Translation (NMT) is an end-to-end learning approach to automated translation that overcomes the weaknesses of conventional phrase-based translation systems. Although NMT-based systems have gained popularity in commercial translation applications, there is still plenty of room for improvement. As the most popular search algorithm in NMT, beam search is vital to the translation result; however, traditional beam search can produce duplicate or missing translations because of its target sequence selection strategy. To alleviate this problem, this paper proposes improvements to neural machine translation based on a novel beam search evaluation function, and uses reinforcement learning to train a translation evaluation system that selects better candidate words when generating translations. We conduct extensive experiments on the CASIA corpus and on 1,000,000 pairs of bilingual corpora from NiuTrans. The results show that the proposed methods effectively improve English-to-Chinese translation quality.
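Standard beam search, the baseline being improved here, keeps only the top-k partial translations at each step, scored by accumulated log-probability. A generic sketch with a pluggable scoring function follows (names are my own; the paper replaces this accumulated-score criterion with a learned evaluation function):

```python
import math

def beam_search(step_logprobs, beam_width=2, max_len=3, eos='</s>'):
    # step_logprobs(prefix) -> {token: log-probability of the next token}
    beams = [([], 0.0)]  # (sequence, accumulated log-probability)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            if seq and seq[-1] == eos:      # finished hypotheses carry over
                candidates.append((seq, score))
                continue
            for tok, lp in step_logprobs(seq).items():
                candidates.append((seq + [tok], score + lp))
        beams = sorted(candidates, key=lambda b: b[1], reverse=True)[:beam_width]
    return beams

# toy next-token model: one real choice at the first step, then end-of-sentence
def toy_model(prefix):
    if not prefix:
        return {'good': math.log(0.6), 'bad': math.log(0.4)}
    return {'</s>': 0.0}

print(beam_search(toy_model)[0][0])  # ['good', '</s>']
```

Because every hypothesis is ranked by the same accumulated score, near-duplicate prefixes can crowd the beam, which is the selection-strategy weakness the abstract refers to.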
98.
The occurrence of perioperative heart failure affects the quality of medical services and threatens patient safety. Existing approaches depend on the judgment of doctors, so the results are affected by factors such as a doctor's knowledge and experience; accuracy is difficult to guarantee, and there is a serious lag. In this paper, a mixture prediction model for perioperative adverse events of heart failure is proposed that combines the advantages of Deep Pyramid Convolutional Neural Networks (DPCNN) and Extreme Gradient Boosting (XGBoost). The DPCNN automatically extracts features from a patient's diagnostic texts; these text features are integrated with the patient's preoperative examination and intraoperative monitoring values, and the XGBoost algorithm then constructs the heart failure prediction model. An experimental comparison was conducted on data from patients with heart failure at Southwest Hospital from 2014 to 2018. The results show that the DPCNN-XGBoost model improves predictive sensitivity by 3% and 31% compared with the text-only DPCNN model and the numeric-only XGBoost model, respectively.
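The fusion step of such a pipeline amounts to concatenating the learned text features with the numeric clinical values before the final classifier. A schematic sketch with stand-in callables (here `embed` plays the role of the DPCNN feature extractor and `clf` the role of the trained XGBoost model; both names are hypothetical):

```python
def predict_heart_failure(diag_text, preop, intraop, embed, clf):
    # embed: text -> fixed-length feature vector (stand-in for the DPCNN)
    # clf:   feature vector -> risk score       (stand-in for the XGBoost model)
    features = list(embed(diag_text)) + list(preop) + list(intraop)
    return clf(features)
```

Keeping the extractor and classifier as separate components is what lets the text branch and the numeric branch be trained and swapped independently.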
99.
The Global Positioning System (GPS) offers interferometric attitude determination by processing carrier phase observables, with which relative positioning can be obtained at the centimeter level. GPS interferometry was first used for precise static relative positioning and thereafter for kinematic positioning. Carrier phase differential GPS based on interferometer principles can solve for the antenna baseline vector, defined as the vector between the antenna designated as master and one of the slave antennas attached to a rigid body; determining the unknown baseline vectors between the antennas sits at the heart of GPS-based attitude determination. The conventional least-squares solution of the baseline vectors is inherently noisy, which results in noisy attitude solutions. In this article, a complementary Kalman filter (CKF) is employed to solve for the baseline vector in the attitude determination mechanism, using the receiver-satellite double-differenced observable as the measurement. The CKF provides several advantages, including improved accuracy, enhanced reliability, and real-time operation. Simulation results for the conventional least-squares method and the proposed CKF-based method are compared and discussed.
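A scalar Kalman update illustrates why filtering outperforms raw per-epoch least-squares solutions: each noisy baseline component is blended with the predicted state, so measurement noise is progressively averaged out. This is a simplified sketch with my own names, not the paper's full complementary-filter formulation over double-differenced observables:

```python
def kalman_update(x, P, z, R, Q):
    # one scalar Kalman step for a slowly varying baseline component:
    # random-walk predict (inflate P by Q), then correct with measurement z
    P = P + Q                 # predict
    K = P / (P + R)           # Kalman gain
    x = x + K * (z - x)       # correct with the noisy LS solution z
    P = (1.0 - K) * P         # updated error covariance
    return x, P

# noisy per-epoch least-squares solutions oscillating around the true value 1.0
x, P = 0.0, 1.0
for t in range(200):
    z = 1.0 + (0.1 if t % 2 == 0 else -0.1)
    x, P = kalman_update(x, P, z, R=0.01, Q=1e-6)
```

With a small process noise Q relative to the measurement noise R, the gain K shrinks over time and the filtered state settles near the true baseline value despite the oscillating measurements.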
100.
Agricultural culture is a productive activity concerned with education and management: it aims at high efficiency and high quality, uses technology as its means, and takes nature as its carrier. Agricultural cultural resources are a product of the rapid development of the modern economy; they promote the growth of the national economy and profoundly affect people's production and life. Data envelopment analysis (DEA) is a method that evaluates the inputs and outputs of multiple decision-making units to obtain a final efficiency model. This article explains the concept and basic characteristics of agricultural culture. Through questionnaire surveys and expert interviews, we collected development data; screened human, material, and financial data; and computed information on economic and social resources. On this basis, the paper establishes evaluation indices for agricultural culture based on the DEA model. Empirical analysis from a specific perspective then shows that increasing human, material, and financial inputs can achieve economic and social benefits; generally speaking, cultural investment can promote the development of the industry. The results lay a theoretical foundation for the development of agricultural culture and put forward a development model focused on technology development, improving investment efficiency, and investing in material resources.
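Full DEA solves a linear program per decision-making unit; as a rough stand-in, a single output/input ratio normalized by the best-performing unit conveys the idea of relative efficiency (hypothetical function of my own, not the paper's model):

```python
def simple_efficiency(inputs, outputs):
    # naive relative efficiency: total output / total input for each unit,
    # normalized by the best unit (a full DEA model solves an LP per unit)
    ratios = [sum(o) / sum(i) for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# two decision-making units: (input, output) = ([2], [4]) and ([4], [4])
print(simple_efficiency([[2], [4]], [[4], [4]]))  # [1.0, 0.5]
```

A unit scoring 1.0 lies on the efficiency frontier of this simplified measure; real DEA additionally chooses per-unit weights over multiple inputs and outputs.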