81.
This article briefly describes the design and layout, calibration and inspection, installation method, and observation during tensioning of the anchor cable load cells in the weir-gate section of the spillway at the Jishixia Hydropower Station, with the aim of strengthening quality control of the load cells during anchor cable tensioning construction.
82.
Nakajima N  Saleh BE 《Applied optics》1995,34(11):1848-1858
We consider the reconstruction of a complex-valued object that is coherently illuminated and viewed through the same random-phase screen. The reconstruction is based on two intensity measurements: the intensity of the Fourier transform of the image and the intensity of the Fourier transform of the image when modulated with an exponential filter. The illumination beam has a Gaussian intensity profile of arbitrary width, and the phase screen is assumed to be described by a Gaussian random process of large variance and arbitrary correlation length. Computer-simulated examples of the reconstruction of a two-dimensional complex object demonstrate that the reconstruction is robust.
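In schematic terms, the two intensity measurements described above can be written as follows (the notation is assumed here rather than taken from the paper: $o(x,y)$ is the complex object, $g(x,y)$ the Gaussian illumination profile, $\phi(x,y)$ the random phase of the screen, $m(x,y)$ the exponential modulation filter, and $\mathcal{F}$ the Fourier transform):

$$I_1(u,v)=\bigl|\mathcal{F}\{o(x,y)\,g(x,y)\,e^{i\phi(x,y)}\}\bigr|^2,\qquad I_2(u,v)=\bigl|\mathcal{F}\{o(x,y)\,g(x,y)\,e^{i\phi(x,y)}\,m(x,y)\}\bigr|^2.$$

The object is then recovered from $I_1$ and $I_2$ alone, without a separate measurement of the phase screen.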
83.
The world of information technology is more than ever being flooded with huge amounts of data, nearly 2.5 quintillion bytes every day. This large stream of data is called big data, and the amount is increasing each day. This research uses a technique called sampling, which selects a representative subset of the data points, manipulates and analyzes this subset to identify patterns and trends in the larger dataset being examined, and finally, creates models. Sampling uses a small proportion of the original data for analysis and model training, so that it is relatively faster while maintaining data integrity and achieving accurate results. Two deep neural networks, AlexNet and DenseNet, were used in this research to test two sampling techniques, namely sampling with replacement and reservoir sampling. The dataset used for this research was divided into three classes: acceptable, flagged as easy, and flagged as hard. The base models were trained with the whole dataset, whereas the other models were trained on 50% of the original dataset. There were four combinations of model and sampling technique. The F-measure for the AlexNet model was 0.807 while that for the DenseNet model was 0.808. Combination 1 was the AlexNet model and sampling with replacement, achieving an average F-measure of 0.8852. Combination 3 was the AlexNet model and reservoir sampling. It had an average F-measure of 0.8545. Combination 2 was the DenseNet model and sampling with replacement, achieving an average F-measure of 0.8017. Finally, combination 4 was the DenseNet model and reservoir sampling. It had an average F-measure of 0.8111. Overall, we conclude that both models trained on a sampled dataset gave equal or better results compared to the base models, which used the whole dataset.
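Both sampling strategies named in the abstract are standard algorithms; a minimal, self-contained sketch of each is given below. It is illustrative only: the study applied these techniques to build the 50% training subsets for AlexNet and DenseNet, which is not reproduced here.

```python
import random

def sample_with_replacement(data, k):
    """Draw k items uniformly at random; each draw is independent, so duplicates may occur."""
    return [random.choice(data) for _ in range(k)]

def reservoir_sample(stream, k):
    """Algorithm R: keep a uniform random sample of k items from a stream of unknown length."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            j = random.randint(0, i)      # inclusive bounds, so j is uniform on {0, ..., i}
            if j < k:
                reservoir[j] = item       # item replaces an existing element with probability k/(i+1)
    return reservoir

if __name__ == "__main__":
    data = list(range(1000))
    half = len(data) // 2                 # the study trained on 50% of the original data
    print(len(sample_with_replacement(data, half)), len(reservoir_sample(iter(data), half)))
```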
84.
This paper deals with defining the concept of the agent-based time delay margin and computing its value in multi-agent systems governed by event-triggered controllers. The agent-based time delay margin, which specifies the time delay each agent can tolerate while still ensuring consensus under event-triggered control, can be viewed as complementary to the (network) time delay margin previously introduced in the literature. An event-triggered control method for achieving consensus in multi-agent systems with time delay is considered, and it is shown that Zeno behavior is excluded under this method. Then, for a multi-agent system controlled by this event-triggered method, the agent-based time delay margin in the presence of a fixed network delay is defined. Moreover, an algorithm for computing the value of the time delay margin of each agent is proposed. Numerical simulation results are provided to verify the theoretical results.
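For orientation, the sketch below simulates a generic event-triggered consensus protocol for single-integrator agents with a fixed network delay. It is not the paper's specific scheme; the graph, thresholds, and delay value are assumptions chosen only to illustrate the setting in which a delay margin would be computed.

```python
import numpy as np

# Assumed 4-agent ring graph; each agent broadcasts its state only when its deviation from
# the last broadcast value exceeds a threshold, and controllers use broadcasts delayed by tau.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
n, dt, T, tau, eps = 4, 1e-3, 10.0, 0.05, 0.02
steps, delay_steps = int(T / dt), int(tau / dt)

x = np.array([1.0, -2.0, 3.0, 0.5])       # initial agent states
broadcast = [x.copy()]                     # history of broadcast state vectors
last_broadcast = x.copy()

for k in range(steps):
    # event condition: broadcast when the measurement error exceeds the threshold eps
    trigger = np.abs(x - last_broadcast) > eps
    last_broadcast = np.where(trigger, x, last_broadcast)
    broadcast.append(last_broadcast.copy())
    # each agent uses broadcast states received tau seconds late
    delayed = broadcast[max(0, len(broadcast) - 1 - delay_steps)]
    u = -np.array([sum(A[i, j] * (delayed[i] - delayed[j]) for j in range(n)) for i in range(n)])
    x = x + dt * u

print("final states:", np.round(x, 3))     # close to a common value when tau is below the margin
```

Increasing `tau` until the states no longer converge gives a rough empirical feel for a delay margin; the paper computes per-agent margins analytically rather than by simulation.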
85.
In the modern era, big data comprises massive datasets with complex and varied structures. These attributes create obstacles when analyzing and storing the data and when trying to produce useful results. Privacy and security are major concerns in the domain of large-scale data analysis. In this paper, our foremost priority is the computing technologies that focus on big data: IoT (Internet of Things), Cloud Computing, Blockchain, and fog computing. Among these, Cloud Computing provides on-demand services to customers while optimizing cost; AWS, Azure, and Google Cloud are the major cloud providers today. Fog computing extends cloud computing systems by bringing services to the edges of the network. The Internet of Things, in collaboration with multiple technologies, puts this into effect and addresses the complexity of delivering advanced services across varied application domains. Blockchain supports many applications ranging from cryptocurrency to smart contracts. The purpose of this paper is to present a critical analysis and review of existing large-scale data systems. We address the existing threats to the security of such systems and scrutinize the security attacks on computing systems based on Cloud, Blockchain, IoT, and fog. The paper illustrates the different threat behaviours and their impacts on these complementary computational technologies. The authors also provide a detailed analysis of cloud-based technologies and discuss their defense mechanisms and the security issues of mobile healthcare.
86.
Wireless Networks - Inter-satellite data transmission links are crucial for providing global inter-connectivity. We report the design and investigation of high data rate inter-satellite...
87.
The Journal of Supercomputing - Power consumption is likely to remain a significant concern for exascale performance in the foreseeable future. In addition, graphics processing units (GPUs) have...
88.
The World Wide Web (WWW) comprises a wide range of information and operates mainly on the principle of keyword matching, which often reduces the accuracy of information retrieval. Automatic query expansion is one of the primary methods for information retrieval; it handles the vocabulary mismatch problem often faced by information retrieval systems when retrieving appropriate documents from keywords. This paper proposes a novel hybrid COOT-based Cat and Mouse Optimization (CMO) algorithm, named hybrid COOT-CMO, for selecting optimal candidate terms in the automatic query expansion process. To improve the accuracy of the Cat and Mouse Optimization (CMO) algorithm, its parameters are tuned with the help of the Coot algorithm. The most suitable expanded query is identified from the available expanded query sets, also known as the candidate query pool, which contains all feasible term combinations obtained from the top retrieved documents. Benchmark datasets such as the GOV2 Test Collection, the Cranfield Collections, and the NTCIR Test Collection are utilized to assess the performance of the proposed hybrid COOT-CMO method for automatic query expansion. The proposed method surpasses existing state-of-the-art techniques on several performance measures, including F-score, precision, and mean average precision (MAP).
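A simplified illustration of the candidate-pool stage described above is sketched below. The term scoring (plain document frequency) and the helper names are assumptions; the paper itself selects among the candidates with the hybrid COOT-CMO metaheuristic rather than the crude ranking shown here.

```python
from collections import Counter
from itertools import combinations

def candidate_pool(query_terms, top_docs, max_new_terms=3):
    """Build expanded-query candidates from terms occurring in the top retrieved documents."""
    counts = Counter()
    for doc in top_docs:                                 # top_docs: list of tokenized documents
        counts.update(set(doc) - set(query_terms))
    frequent = [t for t, _ in counts.most_common(10)]    # crude relevance proxy: document frequency
    pool = []
    for r in range(1, max_new_terms + 1):
        for combo in combinations(frequent, r):          # all feasible term combinations
            pool.append(list(query_terms) + list(combo))
    return pool

docs = [["query", "expansion", "retrieval", "feedback"],
        ["retrieval", "relevance", "feedback", "model"],
        ["neural", "retrieval", "model"]]
print(candidate_pool(["query", "expansion"], docs)[:5])
```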
89.
While the internet has a largely positive impact on society, it also has negative components. Pornography, accessible to everyone through online platforms, induces psychological and health-related issues among people of all ages. Although difficult, detecting pornography is an important step in identifying porn and adult content in a video. In this paper, an architecture is proposed that yielded high scores for both training and testing. The dataset was produced from 190 videos, amounting to more than 19 h of footage. The main sources of content were YouTube, movies, torrents, and websites hosting both pornographic and non-pornographic material. The videos covered different ethnicities and skin colors, which helps the models detect any kind of video. VGG16, Inception V3, and ResNet 50 models were initially trained to detect pornographic images but failed to achieve high testing accuracy, with accuracies of 0.49, 0.49, and 0.78, respectively. Finally, utilizing transfer learning, a convolutional neural network was designed that yielded an accuracy of 0.98.
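A hedged sketch of the transfer-learning setup described above: a frozen ImageNet backbone (VGG16 here; Inception V3 or ResNet50 are drop-in replacements) with a small binary classification head for pornographic versus non-pornographic frames. The image size, head architecture, and training settings are assumptions, not the paper's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False                      # freeze pretrained convolutional features

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # binary output: adult content or not
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()

# Training would use frames extracted from the videos, e.g.:
# train_ds = tf.keras.utils.image_dataset_from_directory("frames/train",
#     image_size=(224, 224), label_mode="binary")
# model.fit(train_ds, epochs=10)
```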
90.
Data available in software engineering applications often contains variability, and it is not obvious which variables help in the prediction process. Most work in software defect prediction focuses on selecting the best prediction techniques; for this purpose, deep learning and ensemble models have shown promising results. In contrast, very little research deals with cleaning the training data and selecting the best predictor variables from the data. Sometimes the data available for training has high variability, and this variability may decrease model accuracy. To deal with this problem, we used the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) to select the best variables for training the model. A simple ANN model with one input layer, one output layer, and two hidden layers was used for training instead of a very deep and complex model. At first, the variables were narrowed down to a smaller number using correlation values; then subsets for all possible variable combinations were formed. Finally, an artificial neural network (ANN) model was trained for each subset, AIC and BIC values were calculated, and the model with the smallest AIC and BIC values was selected as the best. It was found that the combination of only two variables, ns and entropy, is best for software defect prediction, as it gives the minimum AIC and BIC values, while nm and npt is the worst combination, giving the maximum AIC and BIC values.
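A minimal sketch of the subset-selection loop described above, assuming a Gaussian likelihood so that AIC and BIC can be computed from the residual sum of squares. The scikit-learn MLPRegressor stands in for the paper's two-hidden-layer ANN, and the synthetic data and variable names (ns, entropy, nm, npt) are placeholders rather than the paper's dataset.

```python
import numpy as np
from itertools import combinations
from sklearn.neural_network import MLPRegressor

def aic_bic(y_true, y_pred, n_params):
    """Gaussian-likelihood AIC and BIC computed from the residual sum of squares."""
    n = len(y_true)
    rss = np.sum((y_true - y_pred) ** 2)
    log_lik = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)
    return 2 * n_params - 2 * log_lik, n_params * np.log(n) - 2 * log_lik

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                 # placeholder metrics standing in for ns, entropy, nm, npt
y = 0.8 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=200)
names = ["ns", "entropy", "nm", "npt"]

best = None
for r in range(1, len(names) + 1):
    for idx in combinations(range(len(names)), r):
        model = MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=2000, random_state=0)
        model.fit(X[:, idx], y)
        n_params = sum(w.size for w in model.coefs_) + sum(b.size for b in model.intercepts_)
        aic, bic = aic_bic(y, model.predict(X[:, idx]), n_params)
        if best is None or (aic, bic) < best[:2]:
            best = (aic, bic, [names[i] for i in idx])

print("best subset by AIC/BIC:", best[2])
```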