91.
With the popularity of sensor-rich mobile devices, mobile crowdsensing (MCS) has emerged as an effective method for data collection and processing. However, the MCS platform usually needs workers' precise locations for optimal task execution and collects sensing data from them, which raises severe concerns of privacy leakage. To protect workers' locations and sensing data from an untrusted MCS platform, this paper proposes a differentially private data aggregation method based on worker partition and location obfuscation (DP-DAWL). DP-DAWL first uses an improved K-means algorithm to divide workers into groups and assigns each group a privacy budget according to its size (the number of workers). Each worker's location is then obfuscated, and his/her sensing data is perturbed with Laplace noise before being uploaded to the platform. In the data aggregation stage, DP-DAWL adopts an improved Kalman filter to remove the noise, both the Laplace noise added to the sensing data and the system noise arising in the sensing process. By optimally estimating the noisy aggregated sensing data, the platform gains better utility of the aggregated data while preserving workers' privacy. Extensive experiments on synthetic datasets demonstrate the effectiveness of the proposed method.
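As a minimal sketch of the perturbation step described above (the grouping and Kalman-filter stages are omitted, and the sensitivity and epsilon values are illustrative rather than taken from the paper):

```python
# Minimal sketch of the Laplace perturbation step of a differentially
# private upload; grouping and Kalman filtering are omitted.
import numpy as np

def perturb(value, sensitivity, epsilon, rng):
    """Add Laplace(sensitivity/epsilon) noise for epsilon-differential privacy."""
    return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

rng = np.random.default_rng(42)
readings = np.array([21.3, 22.1, 20.8, 23.5])   # workers' true sensing values
noisy = np.array([perturb(r, sensitivity=1.0, epsilon=0.5, rng=rng)
                  for r in readings])

# Averaging over a group keeps the aggregate close to the truth while each
# individual upload stays private.
print(readings.mean(), noisy.mean())
```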
92.
This paper examines the causal relationship between oil prices and the Gross Domestic Product (GDP) in the Kingdom of Saudi Arabia. The study uses a quarterly data set collected by the Saudi Arabian Monetary Authority over the period from 1974 to 2016, and asks how a change in the real crude oil price affects the GDP of the KSA. Using a recent functional technique, we treat the data as continuous paths: the causality between the two variables is analyzed through their yearly curves observed over the four quarters of each year. We discuss causality in the sense of Granger, which requires stationary data; thus, in the first step, we test stationarity using a Monte Carlo test for functional time series. The main goal is treated in the second step, where functional causality is used to model the co-variability between the variables. We show that the two series are not integrated and that there is a causality in one direction between them. All statistical analyses were performed in R.
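The paper applies functional versions of these tests in R; purely as a scalar analogue of the same two-step logic (check stationarity, then test Granger causality), the following Python sketch uses synthetic data, since the SAMA series is not reproduced here:

```python
# Scalar analogue of the workflow: ADF stationarity check, then a Granger
# causality test. Data are synthetic; the paper uses functional versions
# of these tests on SAMA's quarterly curves.
import numpy as np
from statsmodels.tsa.stattools import adfuller, grangercausalitytests

rng = np.random.default_rng(0)
n = 172                                          # quarters, 1974-2016
oil = np.cumsum(rng.normal(size=n))              # synthetic oil-price level
gdp = 0.5 * np.concatenate([np.zeros(4), oil[:-4]]) + rng.normal(size=n)

# Granger tests need stationary inputs, so difference the level series.
d_oil, d_gdp = np.diff(oil), np.diff(gdp)
for name, s in (("d_oil", d_oil), ("d_gdp", d_gdp)):
    print(name, "ADF p-value:", round(adfuller(s)[1], 4))  # small => stationary

# Does oil Granger-cause GDP? Column order is [effect, cause].
grangercausalitytests(np.column_stack([d_gdp, d_oil]), maxlag=4)
```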
93.
As an unsupervised learning method, stochastic competitive learning is commonly used for community detection in social network analysis. Compared with traditional community detection algorithms, it has the advantage of enabling time-series community detection by simulating the community formation process. To improve accuracy and remove the need to pre-set several parameters of stochastic competitive learning, the author improves the algorithm through particle position initialization, parameter optimization, and self-adaptive particle domination ability. The experimental results show that each improvement raises the accuracy of the algorithm, and the F1 score of the improved algorithm is 9.07% higher than that of the original.
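To make the mechanism concrete, here is a toy sketch of plain particle competition (not the author's improved variant): particles random-walk on the graph, each visit strengthens a particle's domination of a node, and each node's community is read off as the particle dominating it. All constants are illustrative.

```python
# Toy particle-competition community detection: K walkers compete for nodes.
import numpy as np

def particle_competition(adj, k=2, steps=5000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(adj)
    dom = np.zeros((k, n))                     # domination counts per particle
    pos = rng.integers(0, n, size=k)           # random initial positions
    for _ in range(steps):
        for p in range(k):
            nbrs = np.flatnonzero(adj[pos[p]])
            weights = 1.0 + dom[p, nbrs]       # prefer already-dominated nodes
            pos[p] = rng.choice(nbrs, p=weights / weights.sum())
            dom[p, pos[p]] += 1
    return dom.argmax(axis=0)                  # community label per node

# Two 4-cliques joined by a single edge; the particles should tend to split them.
A = np.zeros((8, 8), dtype=int)
A[:4, :4] = 1; A[4:, 4:] = 1; A[3, 4] = A[4, 3] = 1
np.fill_diagonal(A, 0)
print(particle_competition(A, k=2))
```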
94.
In this paper, supervised Deep Neural Network (DNN) based signal detection is analyzed as a way to combat nonlinear distortion efficiently and improve error performance in clipping-based Orthogonal Frequency Division Multiplexing (OFDM) systems. One of the main disadvantages of OFDM is its high Peak to Average Power Ratio (PAPR). Clipping is a simple method for PAPR reduction; however, its side effect is nonlinear distortion, which makes the transmitted symbols difficult to estimate even with Maximum Likelihood (ML) detection at the receiver. The DNN-based online signal detection uses an offline-learned model in which all weights and biases of the fully connected layers are fitted on training data sets to overcome the nonlinear distortion. This paper therefore introduces the processes required for online signal detection and offline learning, and compares error performance with ML detection in clipping-based OFDM systems. In the simulation results, DNN-based signal detection achieves better error performance than conventional ML detection over a multipath fading wireless channel, and the improvement grows with system complexity, for example in large Multiple Input Multiple Output (MIMO) systems and at high clipping rates.
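The nonlinearity the detector must cope with can be shown in a few lines; the sketch below generates a toy QPSK-OFDM symbol, measures its PAPR, and clips its amplitude (the FFT size and clipping ratio are illustrative, not values from the paper):

```python
# Toy illustration of OFDM PAPR and amplitude clipping.
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def clip(x, ratio):
    """Clip the magnitude at ratio * RMS while keeping the phase."""
    a = ratio * np.sqrt(np.mean(np.abs(x) ** 2))
    return np.where(np.abs(x) > a, a * np.exp(1j * np.angle(x)), x)

rng = np.random.default_rng(0)
n = 256                                         # subcarriers
qpsk = (rng.choice([-1, 1], n) + 1j * rng.choice([-1, 1], n)) / np.sqrt(2)
tx = np.fft.ifft(qpsk) * np.sqrt(n)             # time-domain OFDM symbol

print(f"PAPR before clipping: {papr_db(tx):5.2f} dB")
print(f"PAPR after  clipping: {papr_db(clip(tx, 1.2)):5.2f} dB")
```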
95.
In reliability analysis, the stress-strength model is often used to describe the life of a component that has a random strength (X) and is subjected to a random stress (Y). In this paper, we consider the problem of estimating the reliability R = P[Y < X] when stress and strength are independent and both follow the exponentiated Pareto distribution. The maximum likelihood estimator of the stress-strength reliability is derived under simple random sampling, ranked set sampling, and median ranked set sampling. Four different reliability estimators under median ranked set sampling are obtained: two for the cases where strength and stress both have an odd or both an even set size, and two for the cases where the strength has an odd set size and the stress an even one, or vice versa. The performance of the suggested estimators is compared with their simple random sampling competitors in a simulation study, which reveals that the reliability estimates based on ranked set sampling and median ranked set sampling are more efficient. In general, the estimates based on median ranked set sampling are smaller than the corresponding estimates under ranked set sampling and simple random sampling.
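As a quick sanity check of the quantity being estimated, the sketch below draws exponentiated Pareto stress and strength samples by inverse transform and compares the Monte Carlo estimate of R with the closed form that holds when both variables share the same λ; the shape values are illustrative, not the paper's.

```python
# Monte Carlo check of R = P[Y < X] for independent exponentiated Pareto
# variables with CDF F(x) = (1 - (1 + x)**(-lam))**alpha.
import numpy as np

def rvs_exp_pareto(alpha, lam, size, rng):
    """Inverse-transform sampling: x = (1 - u**(1/alpha))**(-1/lam) - 1."""
    u = rng.uniform(size=size)
    return (1.0 - u ** (1.0 / alpha)) ** (-1.0 / lam) - 1.0

rng = np.random.default_rng(1)
x = rvs_exp_pareto(alpha=3.0, lam=2.0, size=200_000, rng=rng)  # strength
y = rvs_exp_pareto(alpha=1.5, lam=2.0, size=200_000, rng=rng)  # stress

print("Monte Carlo R:", np.mean(y < x))
# With a common lam, R = alpha_x / (alpha_x + alpha_y) in closed form.
print("Closed form R:", 3.0 / (3.0 + 1.5))
```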
96.
In this article, a new generalization of the inverse Lindley distribution is introduced based on the Marshall-Olkin family of distributions. We call the new distribution the generalized Marshall-Olkin inverse Lindley distribution; it offers more flexibility for modeling lifetime data and includes the inverse Lindley and Marshall-Olkin inverse Lindley distributions as special cases. Essential properties of the generalized Marshall-Olkin inverse Lindley distribution are investigated, including the quantile function, ordinary moments, incomplete moments, moments of residual life, and stochastic ordering. Maximum likelihood estimation is considered under complete samples, Type-I censoring, and Type-II censoring, and maximum likelihood estimators as well as approximate confidence intervals for the population parameters are discussed. A comprehensive simulation study assesses the performance of the estimates through their biases and mean squared errors. The utility of the generalized Marshall-Olkin inverse Lindley model is illustrated with two real data sets, where it produces better fits than the power Lindley, extended Lindley, alpha power transmuted Lindley, alpha power extended exponential, and Lindley distributions.
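For reference, the building blocks are easy to state. The display below gives the inverse Lindley CDF and the standard Marshall-Olkin transformation of a baseline CDF; the paper's generalized family carries an additional shape parameter on top of this, which is not shown here.

```latex
% Inverse Lindley CDF and the standard Marshall-Olkin map of a baseline F.
\[
  F(x;\theta) = \Bigl(1 + \frac{\theta}{(1+\theta)\,x}\Bigr) e^{-\theta/x},
  \qquad x > 0,\ \theta > 0,
\]
\[
  G(x;\alpha) = \frac{F(x;\theta)}{\alpha + (1-\alpha)\,F(x;\theta)},
  \qquad \alpha > 0,
\]
% Setting alpha = 1 recovers the baseline inverse Lindley distribution.
```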
97.
Owing to their ability to process large quantities of high-dimensional data, machine learning models are used in many settings, such as pattern recognition, classification, spam filtering, data mining, and forecasting. K-Nearest Neighbor (KNN) is a widely used machine learning algorithm, yet applying it to select qualified applicants for funding is almost new, and the major problem lies in accurately determining the importance of attributes. In this paper, we propose a Feature-weighted Gradient Descent K-Nearest Neighbor (FGDKNN) method to classify funding applicants into two classes: approved or not approved. FGDKNN uses a gradient descent learning algorithm to update the feature weights, iteratively minimizing the error rate so that the importance of each attribute is described better. We evaluate FGDKNN on the Beijing Innofund data. The results show that FGDKNN performs about 23%, 20%, 18%, and 15% better than KNN, SVM, DT, and ANN, respectively. Moreover, FGDKNN converges quickly across different training scales and performs well under different settings.
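The paper's exact update rule is not reproduced in the abstract; as a rough sketch of the idea, the code below learns one non-negative weight per feature for the KNN distance by descending a smooth soft-nearest-neighbour surrogate loss. The surrogate, the toy data, and all constants are assumptions for illustration, not the paper's method.

```python
# Rough sketch of feature-weighted KNN with gradient-descent weight learning.
import numpy as np

def soft_loss(w, X, y):
    """Soft nearest-neighbour loss: 1 - mean prob. of a same-class neighbour."""
    d2 = (((X[:, None, :] - X[None, :, :]) * w) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)                 # exclude self-matches
    p = np.exp(-d2)
    p /= p.sum(axis=1, keepdims=True)
    return 1 - np.mean((p * (y[:, None] == y[None, :])).sum(axis=1))

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 4))
y = (X[:, 0] > 0).astype(int)                    # only feature 0 is informative

w, lr, eps = np.ones(4), 2.0, 1e-3
for _ in range(50):                              # numerical-gradient descent
    grad = np.array([(soft_loss(w + eps * e, X, y) -
                      soft_loss(w - eps * e, X, y)) / (2 * eps)
                     for e in np.eye(4)])
    w = np.clip(w - lr * grad, 0.0, None)        # keep weights non-negative
print("learned feature weights:", np.round(w, 2))  # weight on feature 0 grows
```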
98.
Host cardinality estimation is an important research field in network management and network security, and estimating host cardinalities with an array of linear estimators is a common approach. Existing algorithms, however, do not take the memory footprint into account when selecting the number of estimators used by each host. This paper analyzes the relationship between memory occupancy and estimation accuracy and compares the effects of different parameters on accuracy. Cardinality estimation is a randomized algorithm, so there is a deviation between the estimates and the actual cardinalities. This deviation is affected by systematic factors, such as the random parameters inherent in a linear estimator and the random functions used to map a host to different linear estimators. These random factors cannot be reduced by merging multiple estimators, and existing algorithms cannot remove the deviation they cause. In this paper, we treat the estimation deviation as a random variable and propose a sampling method, the Linear estimator array Step Sampling algorithm (L2S), to reduce its influence. L2S improves the accuracy of the estimated cardinalities by evaluating and removing the expected value of the random deviation. Because estimator-array algorithms are computationally intensive and slow when processing high-speed network data serially, we also propose a method to port them to the Graphics Processing Unit (GPU). Experiments on real-world high-speed network traffic show that L2S reduces the absolute bias by more than 22% on average, with an average extra time cost below 61 milliseconds.
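For context, each cell of such an array is typically a linear estimator (linear counting); a minimal version is sketched below, with the bitmap size and hash choice as illustrative assumptions.

```python
# Minimal linear-counting estimator, the building block of estimator arrays:
# n_hat = -m * ln(V), where V is the fraction of bitmap bits still zero.
import hashlib
import math

def linear_count(items, m=1024):
    bits = [0] * m
    for it in items:
        h = int(hashlib.md5(str(it).encode()).hexdigest(), 16) % m
        bits[h] = 1
    v = bits.count(0) / m                  # fraction of zero bits
    return float("inf") if v == 0 else -m * math.log(v)

# A host contacting 300 distinct peers, each seen several times:
peers = [f"10.0.{i // 256}.{i % 256}" for i in range(300)] * 5
print(round(linear_count(peers)))          # close to 300
```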
99.
Single image super-resolution (SISR) is an important research topic in computer vision and image processing. With the rapid development of deep neural networks, many different super-resolution models have emerged. Compared with traditional SISR methods, deep learning-based methods can complete the super-resolution task from a single image; and compared with SISR methods using traditional convolutional neural networks, SISR based on generative adversarial networks (GANs) has achieved the most advanced visual performance. In this review, we first discuss the challenges faced by SISR and introduce common datasets and evaluation metrics. We then review the improved network structures and loss functions of GAN-based perceptual SISR. Subsequently, the advantages and disadvantages of different networks are analyzed through multiple comparative experiments. Finally, we summarize the paper and look ahead to future development trends of GAN-based perceptual SISR.
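As one concrete instance of the perceptual losses such reviews cover (an SRGAN-style formulation, not the only one in the literature), the generator objective combines a content term computed in the feature space of a fixed network with a weighted adversarial term:

```latex
% SRGAN-style perceptual objective for the generator G; phi is a fixed
% pre-trained feature extractor (e.g. a VGG layer), D the discriminator.
\begin{align*}
  \mathcal{L}_G &= \mathcal{L}_{\mathrm{content}} + \lambda\,\mathcal{L}_{\mathrm{adv}}, \\
  \mathcal{L}_{\mathrm{content}} &= \bigl\lVert \phi(I^{HR}) - \phi\bigl(G(I^{LR})\bigr) \bigr\rVert_2^2, \\
  \mathcal{L}_{\mathrm{adv}} &= -\log D\bigl(G(I^{LR})\bigr).
\end{align*}
```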
100.
Cyberattacks on Industrial Control Systems (ICS) have been increasing recently and are becoming more intelligent as technology advances, so cybersecurity for such systems is attracting attention. As the core element of control devices, the Programmable Logic Controller (PLC) carries out on-site control in an ICS; a cyberattack on the PLC therefore damages the ICS as a whole, with Stuxnet and Duqu as the most representative cases. Cybersecurity for PLCs is thus considered essential, and many researchers analyze PLC vulnerabilities as part of preemptive efforts against attacks. In this study, a vulnerability analysis was conducted on the XGB PLC. Security vulnerabilities were identified by analyzing the network protocols and memory structure of the PLC and were exploited to launch a replay attack, a memory modulation attack, and FTP/Web service account theft in order to verify the results. The attacks were shown to be able to cause the PLC to malfunction or be disabled, and the identified vulnerabilities were characterized.