A total of 5847 query results were found (search time: 15 ms).
61.
With recent developments in the Internet of Things (IoT), the amount of data collected has expanded tremendously, resulting in higher demand for data storage, computational capacity, and real-time processing. Cloud computing has traditionally played an important role in establishing the IoT, but fog computing has recently emerged as a complement to cloud computing owing to its enhanced mobility, location awareness, heterogeneity, scalability, low latency, and geographic distribution. However, IoT networks are vulnerable to attack because of their open and shared nature, and various fog computing-based security models have therefore been developed to protect them. A distributed architecture based on an intrusion detection system (IDS) supports a dynamic, scalable IoT environment that can disperse centralized tasks to local fog nodes while successfully detecting advanced malicious threats. In this study, we examined the time-related aspects of network traffic data and presented an intrusion detection model based on a two-layer bidirectional long short-term memory (Bi-LSTM) network with an attention mechanism for traffic classification, verified on the UNSW-NB15 benchmark dataset. We showed that the suggested model outperformed numerous leading-edge network IDSs based on machine learning models in terms of accuracy, precision, recall, and F1 score.
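As a rough sketch of the classifier this abstract describes, the following Keras model stacks two bidirectional LSTM layers and a simple additive attention head over flow-feature sequences. The layer sizes, the attention form, and the assumption of 42 preprocessed UNSW-NB15 features per timestep are illustrative choices, not the authors' exact configuration.

```python
# Minimal two-layer Bi-LSTM with additive attention for traffic classification.
# Hidden sizes and the attention head are assumptions for illustration only.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_bilstm_attention(timesteps, n_features, n_classes):
    inputs = layers.Input(shape=(timesteps, n_features))
    x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(inputs)
    x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
    scores = layers.Dense(1, activation="tanh")(x)      # score each timestep
    weights = layers.Softmax(axis=1)(scores)            # attention weights over time
    context = layers.Lambda(
        lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([x, weights])  # weighted sum
    outputs = layers.Dense(n_classes, activation="softmax")(context)
    return Model(inputs, outputs)

# Assumed input shape: windows of 10 flow records with 42 preprocessed features.
model = build_bilstm_attention(timesteps=10, n_features=42, n_classes=2)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```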
62.
The demand for cloud computing has increased manifold in the recent past. More specifically, on-demand computing has risen rapidly as organizations rely mostly on cloud service providers for their day-to-day computing needs. The cloud service provider fulfills different user requirements using virtualization, where a single physical machine can host multiple virtual machines, each potentially representing a different user environment (operating system, programming environment, and applications). However, these cloud services consume a large amount of electrical energy and produce greenhouse gases, so energy-efficient algorithms must be designed to reduce electricity costs and emissions. One specific area where such algorithms are required is virtual machine consolidation, whose objective is to accommodate the required virtual machines on the minimum possible number of hosts while keeping service level agreement (SLA) requirements in mind. This research formulates virtual machine migration as an online problem and develops optimal offline and online algorithms for the single-host virtual machine migration problem under an SLA constraint for an over-utilized host. The online algorithm is analyzed using a competitive analysis approach. In addition, an experimental analysis of the proposed algorithm on real-world data showcases its improved performance against the benchmark algorithms: the proposed online algorithm consumed 25% less energy and performed 43% fewer migrations.
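To make the online setting concrete, here is a minimal sketch of one plausible migration policy for an over-utilized host: when utilization crosses an SLA-safe threshold, VMs are greedily selected for migration until the host is safe again. The threshold value and the largest-demand-first selection rule are assumptions, not the paper's algorithm.

```python
# Sketch of an online single-host migration decision under an SLA constraint.
# The 0.8 threshold and largest-first selection are illustrative assumptions.
def select_vms_to_migrate(vms, host_capacity, threshold=0.8):
    """vms: list of (vm_id, cpu_demand). Returns vm_ids chosen for migration."""
    used = sum(demand for _, demand in vms)
    to_migrate = []
    # Consider the most demanding VMs first to keep the migration count low.
    for vm_id, demand in sorted(vms, key=lambda v: v[1], reverse=True):
        if used <= threshold * host_capacity:
            break
        to_migrate.append(vm_id)
        used -= demand
    return to_migrate

# Host at 75/80 CPU units: one migration brings it under the 64-unit threshold.
print(select_vms_to_migrate([("a", 30), ("b", 25), ("c", 20)], host_capacity=80))
```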
63.
Identifying fruit diseases manually is time-consuming, expensive, and requires expertise; thus, a computer-based automated system is widely needed. Fruit diseases affect not only quality but also quantity, and computer-based techniques make it possible to detect disease early and treat the fruit. However, computer-based methods face several challenges, including low contrast, a lack of datasets for training a model, and inappropriate feature extraction for final classification. In this paper, we propose an automated framework for detecting apple fruit leaf diseases using a convolutional neural network (CNN) and a hybrid optimization algorithm. Data augmentation is performed first to balance the selected apple dataset. Two pre-trained deep models are then fine-tuned and trained using transfer learning, after which a fusion technique named Parallel Correlation Threshold (PCT) is proposed. The fused feature vector is next optimized using a hybrid optimization algorithm, and the selected features are finally classified using machine learning algorithms. Four different experiments were carried out on the augmented Plant Village dataset and yielded a best accuracy of 99.8%. The accuracy of the proposed framework was also compared with that of several neural networks, and it outperformed them all.
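The abstract does not spell out the PCT rule, so the sketch below shows one plausible reading of it: features from the two fine-tuned models are fused in parallel by concatenation, then filtered with a correlation threshold that drops near-redundant features. The Pearson-correlation criterion and the threshold `tau` are assumptions for illustration, not the authors' definition.

```python
# Hypothetical parallel-fusion-plus-correlation-threshold step (assumed reading
# of PCT): concatenate two deep feature matrices, then drop one feature of any
# pair whose absolute Pearson correlation exceeds tau.
import numpy as np

def pct_fuse(f1, f2, tau=0.9):
    """f1, f2: (n_samples, n_features) deep feature matrices."""
    fused = np.hstack([f1, f2])                # parallel concatenation
    corr = np.corrcoef(fused, rowvar=False)    # feature-feature correlations
    keep = np.ones(fused.shape[1], dtype=bool)
    for i in range(fused.shape[1]):
        if not keep[i]:
            continue
        # Discard later features that are nearly redundant with feature i.
        redundant = np.abs(corr[i, i + 1:]) > tau
        keep[i + 1:][redundant] = False
    return fused[:, keep]

# Placeholder 512-dim features from two backbones for 100 images.
fused = pct_fuse(np.random.rand(100, 512), np.random.rand(100, 512))
print(fused.shape)
```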
64.
Continuous improvements in very-large-scale integration (VLSI) technology and design software have significantly broadened the scope of digital signal processing (DSP) applications. The use of application-specific integrated circuits (ASICs) and programmable digital signal processors for many DSP applications has changed, even as new system implementations based on reconfigurable computing become more complex. Adaptable platforms that combine hardware and software programmability are rapidly maturing alongside the discrete wavelet transform (DWT) and sophisticated computerized design techniques, which are much needed in today's modern world. New research and commercial efforts to sustain power optimization, cost savings, and improved runtime efficiency have been initiated as the first reconfigurable technologies have emerged. Hence, this paper proposes implementing the DWT method on a field-programmable gate array in a digital architecture (FPGA-DA). We examined the effects of quantization on DWT performance in classification problems to demonstrate its reliability with respect to fixed-point arithmetic implementations. The Advanced Encryption Standard (AES) algorithm for DWT learning used in this architecture is less sensitive to resampling errors than the previously proposed solution in the literature based on artificial neural networks (ANNs). The proposed system reduces hardware area by 57% while achieving a higher throughput rate of 88.72% and a reliability of 95.5% compared with other standard methods.
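As a rough illustration of the fixed-point question studied here, the sketch below compares a one-level Haar DWT in floating point against the same transform with values quantized to an assumed fixed-point fraction width. The wavelet choice and the 12-bit fraction are illustrative assumptions, not the paper's FPGA configuration.

```python
# Measure the rounding error a fixed-point datapath would introduce into a
# one-level Haar DWT. Q-format fraction width (12 bits) is an assumed example.
import numpy as np

def haar_dwt(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def to_fixed(x, frac_bits=12):
    # Round to the nearest representable value with `frac_bits` fractional bits.
    return np.round(x * (1 << frac_bits)) / (1 << frac_bits)

x = np.random.randn(256)
a_f, d_f = haar_dwt(x)                      # floating-point reference
a_q, d_q = haar_dwt(to_fixed(x))            # quantized input...
a_q, d_q = to_fixed(a_q), to_fixed(d_q)     # ...and quantized outputs
print("max quantization error:",
      max(np.abs(a_f - a_q).max(), np.abs(d_f - d_q).max()))
```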
65.
Despite careful planning and operation, traditional IEEE 802.11 networks still experience degraded performance due to a number of inefficiencies. One of the main causes is the received signal strength indicator (RSSI) association problem, in which a user remains connected to an access point (AP) until the RSSI becomes too weak. In this paper, we propose a multi-criterion association (WiMA) scheme based on software-defined networking (SDN) for Wi-Fi networks. The association decision is based on multiple criteria, such as AP load, RSSI, and channel occupancy, to satisfy quality of service (QoS) requirements. The SDN controller, having an overall view of the network, takes the association and reassociation decisions, making handoffs smooth in terms of throughput. To evaluate WiMA, extensive simulation runs were carried out on the Mininet-NS3-Wi-Fi network simulator. The performance evaluation shows that WiMA significantly reduces the average number of retransmissions by 5%–30% and enhances throughput by 20%–50% compared with the traditional client-driven (CD) approach and the state-of-the-art Wi-Balance approach, thereby maintaining user fairness and accommodating more wireless devices and traffic load in the network.
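A minimal sketch of the multi-criterion idea follows: the controller scores each candidate AP on RSSI, load, and channel occupancy and associates the client with the best one. The weights, the RSSI normalization, and the linear combination are all assumptions; the abstract does not give WiMA's exact decision rule.

```python
# Hypothetical multi-criterion AP scoring for SDN-driven association.
# Weights and normalization ranges are illustrative assumptions.
def ap_score(rssi_dbm, load_ratio, occupancy_ratio, w=(0.5, 0.3, 0.2)):
    rssi_norm = (rssi_dbm + 90) / 60          # map roughly [-90, -30] dBm to [0, 1]
    return (w[0] * rssi_norm
            + w[1] * (1 - load_ratio)         # prefer lightly loaded APs
            + w[2] * (1 - occupancy_ratio))   # prefer quieter channels

def choose_ap(candidates):
    """candidates: dict ap_id -> (rssi_dbm, load_ratio, occupancy_ratio)."""
    return max(candidates, key=lambda ap: ap_score(*candidates[ap]))

# A weaker but far less loaded AP wins over the strongest signal.
print(choose_ap({"ap1": (-45, 0.9, 0.7), "ap2": (-60, 0.2, 0.3)}))
```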
66.
One of the most pressing concerns for the consumer market is the detection of adulteration in meat products, given their high value. A rapid and accurate mechanism for identifying lard adulteration in meat products is highly necessary for developing a method that consumers can trust and that can be used to make a definitive diagnosis. In this work, Fourier transform infrared (FTIR) spectroscopy is used to identify lard adulteration in beef, lamb, and chicken samples. A simplified extraction method was applied to obtain the lipids from pure and adulterated meat. Adulterated samples were obtained by mixing lard with chicken, lamb, and beef at different concentrations (10%–50% v/v). Principal component analysis (PCA) and partial least squares (PLS) were used to develop a calibration model over 800–3500 cm−1. Three-dimensional PCA, applied by dividing the spectrum into three regions, successfully classified lard adulteration in the chicken, lamb, and beef samples. The FTIR peaks corresponding to lard were observed at 1159.6, 1743.4, 2853.1, and 2922.5 cm−1, differentiating the chicken, lamb, and beef samples. These wavenumbers give the highest determination coefficient (R2 = 0.846) and the lowest root mean square error of calibration (RMSEC) and root mean square error of prediction (RMSEP), with an accuracy of 84.6%. Fat adulteration at concentrations as low as 10% can be reliably detected using this methodology.
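The chemometric step can be sketched with scikit-learn as below: PCA scores for the three-region class plots and a PLS calibration of lard concentration against the spectra. The arrays, component counts, and spectral grid are placeholders, not the study's data.

```python
# PCA + PLS calibration sketch over a placeholder FTIR dataset spanning
# 800-3500 cm^-1. Sample counts and component numbers are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

X = np.random.rand(60, 1400)                          # 60 spectra (placeholder)
y = np.tile(np.arange(10, 60, 10), 12).astype(float)  # % lard, 10-50 (placeholder)

# Three PCA scores, e.g. one per spectral region, for classification plots.
scores = PCA(n_components=3).fit_transform(X)

pls = PLSRegression(n_components=5).fit(X, y)
rmsec = mean_squared_error(y, pls.predict(X)) ** 0.5  # calibration error
print("RMSEC:", rmsec)
```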
67.
A link relative-based approach was used in an earlier article (see reference 1) to enhance the performance of the cumulative sum (CUSUM) control chart. This technique first uses a link relative variable to express the process observations relative to the mean, and then uses optimal constants to define a new variable that serves as the plotting statistic of the link relative CUSUM chart. In this article, it is shown through a simulation study that the optimal constants with the fixed values reported in the aforementioned article give different results; if the regression technique is used instead, the same results are obtained.
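For readers unfamiliar with the chart, here is a generic tabular CUSUM in Python; `z` is assumed to be the already-transformed (link relative) plotting statistic, and the reference value k and decision interval h are conventional defaults rather than the optimal constants discussed above.

```python
# Generic two-sided tabular CUSUM; k and h are standard textbook defaults.
import numpy as np

def cusum(z, k=0.5, h=4.0):
    """Returns upper/lower CUSUM paths and the indices where either signals."""
    c_plus = np.zeros(len(z))
    c_minus = np.zeros(len(z))
    for t in range(1, len(z)):
        c_plus[t] = max(0.0, z[t] - k + c_plus[t - 1])     # upward drift
        c_minus[t] = max(0.0, -k - z[t] + c_minus[t - 1])  # downward drift
    signals = np.where((c_plus > h) | (c_minus > h))[0]
    return c_plus, c_minus, signals

rng = np.random.default_rng(1)
z = np.concatenate([rng.normal(0, 1, 50), rng.normal(1.5, 1, 20)])  # shift at t=50
print("first signals:", cusum(z)[2][:5])
```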
68.
The alkylamines and their related boron derivatives demonstrated potent cytotoxicity against the growth of murine and human tissue-cultured cells. In certain tumor lines, these agents did not necessarily require the boron atom to exert potent cytotoxic action. Their ability to suppress tumor cell growth was based on their inhibition of DNA and protein synthesis. DNA synthesis was reduced because purine synthesis was blocked by the agents at the enzyme site of IMP dehydrogenase. In addition, ribonucleotide reductase and nucleoside kinase activities were reduced by the agents, which would account for the reduced dNTP pools. The DNA template or molecule may also be a target of the drugs, through binding of the drug to nucleoside bases or intercalation of the drug between DNA base pairs. Only some of the agents caused DNA fragmentation with reduced DNA viscosity. Together, these effects would contribute to the overall cell death afforded by the agents.
69.
Coordinated controller tuning of a boiler-turbine unit is a challenging task due to the nonlinear and coupled characteristics of the system. In this paper, a new variant of the binary particle swarm optimization (PSO) algorithm, called probability-based binary PSO (PBPSO), is presented to tune the parameters of a coordinated controller. Simulation results show that PBPSO can effectively optimize the control parameters and achieves better control performance than standard discrete binary PSO, modified binary PSO, and standard continuous PSO.
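As a concrete reference point, the sketch below implements standard sigmoid-transfer binary PSO, one of the baselines PBPSO is compared against; PBPSO's probability-based position update differs from this rule, and the bit-string fitness function here is a placeholder for the controller performance index.

```python
# Standard discrete binary PSO (sigmoid transfer), shown as the baseline the
# abstract names. The fitness function is a toy placeholder, not a controller.
import numpy as np

rng = np.random.default_rng(0)

def fitness(bits):
    # Placeholder cost: distance of the decoded integer from a target value.
    return abs(int("".join(map(str, bits)), 2) - 42)

n_particles, n_bits, iters = 20, 8, 100
x = rng.integers(0, 2, (n_particles, n_bits))
v = np.zeros((n_particles, n_bits))
pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(v.shape), rng.random(v.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = (rng.random(v.shape) < 1 / (1 + np.exp(-v))).astype(int)  # sigmoid rule
    f = np.array([fitness(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("best bits:", gbest, "fitness:", fitness(gbest))
```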
70.
Robots have played an important role in the automation of computer-aided manufacturing. Classical robot control involves an expensive key step of model-based programming. An intuitive way to reduce this expense is to replace programming with machine learning of robot actions from demonstration, where a learner robot learns an action by observing a demonstrator robot performing it. To achieve this learning from demonstration (LFD), different machine learning techniques can be used, such as artificial neural networks (ANNs), genetic algorithms, hidden Markov models, and support vector machines. This work focuses exclusively on ANNs. Since ANNs have many standard architectural variations divided into two basic computational categories, recurrent networks and feed-forward networks, representative networks from each were selected for study: the feed-forward multilayer perceptron (FF) network for the feed-forward category, and the Elman (EL) and Nonlinear Autoregressive Exogenous Model (NARX) networks for the recurrent category. The main objective of this work is to identify the most suitable neural architecture for applying LFD to learning different robot actions. The sensor and actuator streams of a demonstrated action are used as training data for ANN learning, and learning capability is measured by comparing the error between the demonstrator and the corresponding learner streams. To achieve fairness in comparison, three steps were taken. First, dynamic time warping (DTW) is used to measure the error between the demonstrator and learner streams, which gives resilience against translation in time. Second, comparison statistics are drawn between the best, rather than weight-equal, configurations of the competing architectures, so that no architecture's learning capability is artificially handicapped. Third, each configuration's error is calculated as the average of ten trials over all possible learning sequences with random weight initialization, so that the error value is independent of any particular learning sequence or set of initial weights. Six experiments were conducted to obtain a performance pattern for each architecture; in each experiment, a total of nine different robot actions were tested. The resulting error statistics show that the NARX architecture is the most suitable for this learning problem, whereas the Elman architecture is the least suitable. Interestingly, the computationally simpler MLP yields much lower error statistics than the computationally superior Elman architecture and only slightly higher ones than NARX.
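Since DTW is the error measure used to compare demonstrator and learner streams, a minimal implementation of the classic O(nm) recurrence is sketched below; the sine-wave streams are placeholders for real sensor/actuator traces, not the study's data.

```python
# Classic dynamic time warping distance between two 1-D streams, tolerant of
# time shifts and differing lengths (as needed when comparing robot streams).
import numpy as np

def dtw_distance(s, t):
    n, m = len(s), len(t)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            # Best of match, insertion, and deletion at each cell.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

demo = np.sin(np.linspace(0, 2 * np.pi, 50))           # demonstrator stream
learner = np.sin(np.linspace(0, 2 * np.pi, 60) + 0.1)  # time-warped imitation
print("DTW error:", dtw_distance(demo, learner))
```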