816 results found (search time: 15 ms)
71.
Mapping quality of self-organising maps (SOMs) is sensitive to the map topology and to the initialisation of the neurons. In this article, to improve the convergence of the SOM, we introduce an algorithm that initialises neurons by splitting and merging clusters. The initialisation algorithm speeds up the learning process on large, high-dimensional data sets. We also develop a topology based on this initialisation to optimise the vector quantisation error and topology preservation of the SOM. Such an approach yields more accurate data visualisation and, consequently, better clustering. Numerical results on eight small-to-large real-world data sets demonstrate the performance of the proposed algorithm in terms of vector quantisation, topology preservation and CPU time.
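The abstract's split-and-merge initialisation is not spelled out, but the sensitivity it describes is easy to see in a minimal generic SOM. The sketch below (plain Python, one-dimensional neuron grid, assumed toy data and a deliberately poor initialisation) trains a SOM and measures the quantisation error the abstract uses as a quality criterion:

```python
import math, random

def train_som(data, neurons, epochs=30, lr0=0.5, sigma0=1.0):
    """Minimal 1-D SOM: move the best-matching unit (BMU) and its
    grid neighbours toward each sample; lr and sigma decay per epoch."""
    neurons = [list(w) for w in neurons]
    for e in range(epochs):
        lr = lr0 * (1 - e / epochs)
        sigma = max(sigma0 * (1 - e / epochs), 1e-3)
        for x in data:
            bmu = min(range(len(neurons)),
                      key=lambda i: sum((a - b) ** 2 for a, b in zip(neurons[i], x)))
            for i, w in enumerate(neurons):
                h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))  # grid neighbourhood
                for d in range(len(w)):
                    w[d] += lr * h * (x[d] - w[d])
    return neurons

def quantisation_error(data, neurons):
    """Mean distance from each sample to its nearest neuron."""
    return sum(min(math.dist(x, w) for w in neurons) for x in data) / len(data)

# Two well-separated clusters; neurons deliberately start far from both.
random.seed(0)
data = [(random.gauss(0, 0.1), random.gauss(0, 0.1)) for _ in range(50)] + \
       [(random.gauss(5, 0.1), random.gauss(5, 0.1)) for _ in range(50)]
init = [(2.0, 2.0), (2.5, 2.5), (3.0, 3.0), (3.5, 3.5)]
trained = train_som(data, init)
```

A cluster-aware initialisation, as the paper proposes, would start the neurons near the cluster centres and so reach a low quantisation error in fewer epochs than this uninformed start.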
72.
We propose novel techniques to find the optimal location, size, and power factor of distributed generation (DG) to achieve the maximum loss reduction for distribution networks. The optimal DG location and size are determined simultaneously using the energy-loss-curves technique for a pre-selected power factor that gives the best DG operation. Based on the network's total load demand, four DG sizes are selected; they are used to form energy loss curves for each bus and then to determine the optimal DG options. The study shows that when energy loss minimization is the objective function, the time-varying load demand significantly affects the sizing of DG resources in distribution networks, whereas taking power loss as the objective function leads to inconsistent interpretation of loss reduction and other calculations. The devised technique was tested on two distribution test systems of varying size and complexity and validated by comparison with the exhaustive iterative method (EIM) and recently published results. The results show that the proposed technique can provide an optimal solution with less computation.
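The EIM baseline the paper compares against amounts to evaluating every (bus, size) pair. A sketch of that exhaustive search on a hypothetical radial feeder (all data below is assumed for illustration: per-segment resistances, bus loads, and a time-varying demand profile; a simplified I²R flow model with 1 pu voltage stands in for a real power-flow solver):

```python
# Hypothetical radial feeder: segment i connects bus i to bus i+1.
R = [0.02, 0.03, 0.04, 0.05]          # per-segment resistance (pu), assumed
loads = [0.0, 0.4, 0.3, 0.2, 0.1]     # peak load at buses 0..4 (pu), assumed
profile = [0.5, 0.8, 1.0, 0.6]        # time-varying demand multipliers, assumed

def energy_loss(dg_bus, dg_size):
    """Sum of I^2 R losses over the demand profile; a DG at dg_bus
    offsets the flow on every segment upstream of it (V = 1 pu)."""
    total = 0.0
    for m in profile:
        for seg in range(len(R)):
            flow = sum(loads[b] * m for b in range(seg + 1, len(loads)))
            if dg_bus > seg:          # DG is downstream of this segment
                flow -= dg_size
            total += flow ** 2 * R[seg]
    return total

# Four candidate DG sizes as fractions of total peak load, per the abstract.
total_peak = sum(loads)
candidates = [f * total_peak for f in (0.0, 0.2, 0.4, 0.6)]
baseline = energy_loss(0, 0.0)        # no DG anywhere
best = min((energy_loss(b, s), b, s)
           for b in range(1, len(loads)) for s in candidates)
```

The paper's contribution is to avoid this full sweep by building energy loss curves per bus from the four candidate sizes; the brute-force version above is only the validation baseline.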
73.
In our previous work, “Robust transmission of scalable video stream using modified LT codes”, an LT code with an unequal packet protection property was proposed. Applying the proposed code to any importance-sorted input data increases the probability of early decoding of the most important parts when a sufficient number of encoded symbols is available at the decoder. In this work, the performance of the proposed method is assessed in the general case for a wide range of loss rates, including when there are not enough encoded symbols at the decoder. The degree distribution of the input nodes is also investigated in more detail. We show that sorting input nodes in the encoding graph, as done in our method, outperforms the unequal input-node selection method used in traditional rateless codes with unequal error protection.
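The "early decoding of the most important parts" relies on the standard LT peeling decoder: any degree-1 encoded symbol reveals a source symbol, which is then XORed out of the remaining symbols. A sketch of that decoder in plain Python, run on a small hand-picked encoding graph (the graph and byte values are illustrative, not from the paper) in which the most important source symbol is covered by a degree-1 symbol and so decodes first:

```python
def lt_decode(symbols, k):
    """Peeling decoder: repeatedly resolve a degree-1 encoded symbol,
    then XOR its value out of every symbol that still covers it."""
    symbols = [(set(nbrs), val) for nbrs, val in symbols]
    recovered = {}
    progress = True
    while progress and len(recovered) < k:
        progress = False
        for nbrs, val in symbols:
            if len(nbrs) == 1:
                (src,) = nbrs
                if src not in recovered:
                    recovered[src] = val
                    progress = True
        for i, (nbrs, val) in enumerate(symbols):
            for src in list(nbrs):
                if src in recovered and len(nbrs) > 1:
                    nbrs.discard(src)
                    symbols[i] = (nbrs, val ^ recovered[src])
    return recovered

# Hand-picked decodable graph over k = 4 source bytes, importance-sorted:
# source 0 is the most important and is covered by a degree-1 symbol.
src = [0x11, 0x22, 0x33, 0x44]
encoded = [({0}, src[0]),
           ({0, 1}, src[0] ^ src[1]),
           ({1, 2}, src[1] ^ src[2]),
           ({2, 3}, src[2] ^ src[3])]
out = lt_decode(encoded, 4)
```

Under packet loss, some encoded symbols never arrive; the decoder then returns a partial `recovered` map, and the paper's point is that an importance-sorted graph makes that partial set contain the most important symbols.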
74.
Cops and Robbers is a pursuit-and-evasion game played on graphs that has received much attention. We consider an extension, distance k Cops and Robbers, in which the cops win if at least one of them is at distance at most k from the robber in G. The cop number of a graph G is the minimum number of cops needed to capture the robber in G. The distance k analogue of the cop number, written c_k(G), equals the minimum number of cops needed to win at a given distance k. We study the parameter c_k from algorithmic, structural, and probabilistic perspectives. We supply a classification result for graphs with bounded c_k(G) values and develop an O(n^(2s+3)) algorithm for determining whether c_k(G) ≤ s for fixed s. We prove that if s is not fixed, then computing c_k(G) is NP-hard. Upper and lower bounds on c_k(G) in terms of the order of G are also established.
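The polynomial-time decision procedure for fixed s rests on the fact that the game has finitely many states, so cop-winning states can be computed by backward induction. A sketch of that idea for the one-cop case (this is an illustrative fixed-point computation over states (cop, robber, turn), not the paper's algorithm; graphs are adjacency dicts):

```python
from itertools import product

def bfs_dist(adj, s):
    """Distances from s by breadth-first search."""
    dist, frontier = {s: 0}, [s]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    nxt.append(v)
        frontier = nxt
    return dist

def one_cop_wins(adj, k=0):
    """True iff c_k(G) = 1: one cop can get within distance k of the robber.
    Backward induction: a cop-turn state is winning if SOME cop move wins;
    a robber-turn state is winning if ALL robber moves lose for the robber."""
    V = list(adj)
    d = {u: bfs_dist(adj, u) for u in V}
    caught = lambda c, r: d[c].get(r, float("inf")) <= k
    closed = {u: [u] + list(adj[u]) for u in V}   # players may also stay put
    W = {(c, r, t) for c, r in product(V, V) for t in (0, 1) if caught(c, r)}
    changed = True
    while changed:
        changed = False
        for c, r in product(V, V):
            if (c, r, 0) not in W and any((c2, r, 1) in W for c2 in closed[c]):
                W.add((c, r, 0)); changed = True
            if (c, r, 1) not in W and all((c, r2, 0) in W for r2 in closed[r]):
                W.add((c, r, 1)); changed = True
    # Cop places first, robber places knowing the cop, then the cop moves.
    return any(all((c, r, 0) in W for r in V) for c in V)

path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}        # cop-win
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}      # robber evades one cop
```

Paths are cop-win at distance 0, while on a 4-cycle the robber evades a single cop forever; raising k to 1 makes the 4-cycle cop-win, illustrating how the parameter k changes c_k(G).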
75.
This paper proposes a new adaptive nonlinear model predictive control (NMPC) methodology for a class of hybrid systems with mixed inputs. For this purpose, an online fuzzy identification approach is presented that recursively estimates an evolving Takagi–Sugeno (eTS) model of the hybrid system based on a potential clustering scheme. A receding-horizon adaptive NMPC is then devised on the basis of the online-identified eTS fuzzy model, and the nonlinear MPC optimization problem is solved by a genetic algorithm (GA). A diverse set of test scenarios demonstrates the robust performance of the proposed adaptive NMPC methodology on the challenging start-up operation of a hybrid continuous stirred tank reactor (CSTR) benchmark problem.
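The GA step optimizes a finite control sequence against a predicted cost over the horizon. A minimal sketch of that inner loop, assuming a toy scalar nonlinear plant in place of the eTS-modelled CSTR (plant, cost weights, and GA settings are all illustrative):

```python
import random

def plant(x, u):
    """Assumed toy nonlinear plant, standing in for the identified eTS model."""
    return x + 0.1 * (-x ** 3 + u)

def cost(x0, seq, ref=1.0):
    """Predicted tracking cost of a control sequence over the horizon."""
    x, c = x0, 0.0
    for u in seq:
        x = plant(x, u)
        c += (x - ref) ** 2 + 0.01 * u ** 2
    return c

def ga_optimize(x0, horizon=8, pop_size=30, gens=40, seed=1):
    """Tiny elitist GA over control sequences: tournament selection,
    blend crossover, Gaussian mutation. Elitism keeps the current best,
    so the best cost is non-increasing across generations."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-2, 2) for _ in range(horizon)] for _ in range(pop_size)]
    history = []
    for _ in range(gens):
        scored = sorted(pop, key=lambda s: cost(x0, s))
        history.append(cost(x0, scored[0]))
        nxt = [scored[0]]                                  # elitism
        while len(nxt) < pop_size:
            a, b = (min(rng.sample(pop, 3), key=lambda s: cost(x0, s))
                    for _ in range(2))                     # tournament selection
            child = [(ai + bi) / 2 for ai, bi in zip(a, b)]
            child[rng.randrange(horizon)] += rng.gauss(0, 0.3)  # mutation
            nxt.append(child)
        pop = nxt
    best = min(pop, key=lambda s: cost(x0, s))
    return best, history

best_seq, hist = ga_optimize(0.0)
```

In a receding-horizon loop only `best_seq[0]` would be applied, the state re-measured (and the eTS model re-estimated online), and the optimization repeated at the next step.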
76.
Parallel machines are extensively used to increase computational speed when solving scientific problems. Various topologies with different properties have been proposed, each suited to specific applications. Pyramid interconnection networks are a potentially powerful architecture for applications such as image processing, visualization, and data mining. Their major advantage for image processing systems is the hierarchical abstraction and transfer of data toward the apex node, much as the human visual system extracts an object from an image. There are rapidly growing applications in which multidimensional data sets must be processed simultaneously; such systems need a symmetric and expandable interconnection network that processes data from different directions and forwards them toward the apex. In this paper, a new type of pyramid interconnection network called the Non-Flat Surface Level (NFSL) pyramid is proposed. NFSL pyramid networks are constructed from L-level A-lateral-base pyramids, called basic pyramids, so that the apex node is surrounded by the level-one surfaces of the NFSL, i.e. the level of nodes nearest the apex in the basic pyramids. Two topologies, NFSL-T and NFSL-Q, derived from trilateral-base and quadrilateral-base basic pyramids, are studied to exemplify the proposed structure. To evaluate the proposed architecture, the most important network properties are determined and compared with those of standard pyramid networks and their variants.
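One of the basic properties such comparisons start from is the node count. In a standard A-lateral-base pyramid, level l (apex at level 0) holds A^l nodes, so the total is a geometric sum; a one-line helper makes the trilateral- vs quadrilateral-base comparison concrete (this counts plain pyramids only, not the NFSL composition of basic pyramids):

```python
def pyramid_nodes(levels, laterals=4):
    """Node count of an A-lateral-base pyramid with the apex at level 0:
    level l holds laterals**l nodes, a geometric series."""
    return sum(laterals ** l for l in range(levels + 1))
```

For example, a 2-level quadrilateral-base pyramid has 1 + 4 + 16 = 21 nodes, while its trilateral-base counterpart has 1 + 3 + 9 = 13, so quadrilateral bases grow node count (and wiring) markedly faster per level.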
77.
The purpose of this study is to provide a method for classifying non-fatigued vs. fatigued states following manual material handling. Template-matching pattern recognition for feature extraction (the $1 Recognizer) and a support vector machine classifier were applied to the kinematics of gait cycles segmented by our stepwise search-based segmentation algorithm. A single inertial measurement unit on the ankle was used, providing a minimally intrusive and inexpensive monitoring tool. The classifier distinguished between states using distance-based scores from the recogniser together with step duration. Fatigue detection achieved an accuracy of 90% across data from 20 recruited subjects. The method uses a minimal amount of data and features from a single low-cost sensor to reliably classify fatigue induced by a realistic manufacturing task, using a simple machine learning algorithm that can be extended to real-time fatigue monitoring in manufacturing facilities.

Practitioner Summary: We examined the use of a wearable sensor for detecting fatigue-related changes in gait based on a simulated manual material handling task. Classification based on foot acceleration and position trajectories achieved 90% accuracy. This method provides a practical framework for predicting realistic levels of fatigue.
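The distance-based scoring follows the $1 Recognizer pattern: resample the segmented gait cycle to a fixed length, normalize, and compare against stored templates. A one-dimensional sketch of that scoring (a single IMU channel stands in for the real kinematic trajectories; the waveforms below are synthetic stand-ins, and the real study feeds the scores into an SVM rather than nearest-template):

```python
import math

def resample(series, n=32):
    """Linearly resample a 1-D series to n points ($1-style normalisation,
    which removes gait-speed differences in cycle length)."""
    m = len(series)
    out = []
    for i in range(n):
        t = i * (m - 1) / (n - 1)
        j = min(int(t), m - 2)
        frac = t - j
        out.append(series[j] * (1 - frac) + series[j + 1] * frac)
    return out

def znorm(series):
    """Zero-mean, unit-variance scaling to remove amplitude/offset effects."""
    mu = sum(series) / len(series)
    sd = (sum((v - mu) ** 2 for v in series) / len(series)) ** 0.5 or 1.0
    return [(v - mu) / sd for v in series]

def score(query, template):
    """Distance-based matching score: lower means a closer match."""
    q, t = znorm(resample(query)), znorm(resample(template))
    return sum((a - b) ** 2 for a, b in zip(q, t)) ** 0.5

def classify(query, templates):
    """templates: {label: series}; returns the label with the lowest score."""
    return min(templates, key=lambda lbl: score(query, templates[lbl]))

# Synthetic stand-ins: a smooth cycle vs a differently shaped one.
fresh = [math.sin(2 * math.pi * i / 50) for i in range(50)]
fatigued = [math.sin(2 * math.pi * i / 50) ** 3 for i in range(50)]
query = [3 * math.sin(2 * math.pi * i / 80) + 1 for i in range(80)]  # scaled, longer
label = classify(query, {"non-fatigued": fresh, "fatigued": fatigued})
```

Resampling and z-normalisation make the score shape-based, which is why a separate step-duration feature is still needed: cycle length is exactly what the normalisation discards.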

78.
The security of cryptographic systems remains a major concern for cryptosystem designers even as cryptographic algorithms improve. Side-channel attacks aim to extract secret information by exploiting physical vulnerabilities of cryptosystems. Several approaches have been proposed to analyze side-channel information, among which machine learning is a promising method. Machine learning, in the form of neural networks, learns the signature (power consumption and electromagnetic emission) of an instruction and then recognizes it automatically. In this paper, a novel experimental investigation of a field-programmable gate array (FPGA) implementation of elliptic curve cryptography (ECC) explores the efficiency of side-channel information characterization based on a learning vector quantization (LVQ) neural network. As a multi-class classifier, LVQ can learn complex nonlinear input-output relationships, uses sequential training procedures, and adapts to the data. Experimental results show multi-class classification based on LVQ to be a powerful and promising approach to side-channel data characterization.
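The core of LVQ is simple enough to sketch directly: each class keeps one or more prototype vectors; the nearest prototype is pulled toward a training sample of its own class and pushed away otherwise (the LVQ1 rule). Below is a minimal pure-Python version on toy 2-D points standing in for instruction signatures (the data, labels, and prototype initialisation are all assumed for illustration):

```python
def train_lvq1(samples, labels, prototypes, proto_labels, epochs=20, lr0=0.3):
    """LVQ1: move the nearest prototype toward a same-class sample,
    away from a different-class sample; learning rate decays linearly."""
    protos = [list(p) for p in prototypes]
    for e in range(epochs):
        lr = lr0 * (1 - e / epochs)
        for x, y in zip(samples, labels):
            i = min(range(len(protos)),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(protos[j], x)))
            sign = 1.0 if proto_labels[i] == y else -1.0
            for d in range(len(x)):
                protos[i][d] += sign * lr * (x[d] - protos[i][d])
    return protos

def predict(x, protos, proto_labels):
    """Classify by the label of the nearest prototype."""
    i = min(range(len(protos)),
            key=lambda j: sum((a - b) ** 2 for a, b in zip(protos[j], x)))
    return proto_labels[i]

# Toy stand-in for two instruction signatures: two separable 2-D clusters.
samples = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (1.0, 1.1), (1.2, 0.9), (0.9, 1.0)]
labels = ["add", "add", "add", "mul", "mul", "mul"]
protos = train_lvq1(samples, labels, [(0.4, 0.4), (0.6, 0.6)], ["add", "mul"])
```

Because prototypes are explicit points in feature space, a trained LVQ model is also easy to inspect, which is part of its appeal for characterizing side-channel traces.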
79.
Brain–computer interfaces (BCIs) are recent developments in alternative technologies of user interaction. The purpose of this paper is to explore the potential of BCIs as user interfaces for CAD systems. The paper describes experiments and algorithms that use a BCI to distinguish between primitive shapes imagined by a user. Users wear an electroencephalogram (EEG) headset and imagine the shape of a cube, sphere, cylinder, pyramid or cone. The headset collects brain activity from 14 locations on the scalp. The data are analyzed with independent component analysis (ICA) and the Hilbert–Huang transform (HHT). The features of interest are the marginal spectra of different frequency bands (theta, alpha, beta and gamma) calculated from the Hilbert spectrum of each independent component. The Mann–Whitney U-test is then applied to rank the EEG electrode channels by relevance in five pair-wise classifications. The features from the highest-ranking independent components form the final feature vector, which is used to train a linear discriminant classifier. Results show that this classifier can discriminate between the five primitive objects with an average accuracy of about 44.6% (compared with a naïve classification rate of 20%) over ten subjects (accuracy range 36%–54%). The classification accuracy changes to 39.9% when both visual and verbal cues are used. Repeatability of the feature extraction and classification was checked by repeating the experiment on 10 different days with the same participants. These results suggest that BCIs hold promise for creating geometric shapes in CAD systems and could serve as a novel means of user interaction.
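The channel-ranking step uses the Mann–Whitney U-test, whose statistic simply counts how often values from one sample exceed values from the other (ties count one half). A minimal sketch of the statistic itself (a real ranking would also convert U to a p-value, which is omitted here):

```python
def mann_whitney_u(xs, ys):
    """U statistic for sample xs: the number of pairs (x, y) with x > y,
    counting ties as 1/2. U far from len(xs)*len(ys)/2 indicates the two
    samples differ in location -- here, that a channel separates two shapes."""
    u = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u
```

In the paper's pipeline, each EEG channel's band-power features for two imagined shapes would play the roles of `xs` and `ys`, and channels with the most extreme U values rank highest for that pair-wise classification.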
80.
Time-based Software Transactional Memory (STM) uses a global clock to validate transactional data and guarantee the consistency of transactions. While this method is simple to implement, it causes contention on the clock when transactions commit simultaneously. The alternative, thread-local clocks (TLC), uses local variables to maintain transaction consistency; however, TLC may increase false aborts and degrade STM performance. In this paper, we analyze the global clock and TLC in the context of STM systems, highlighting both the implementation trade-offs and the performance implications of the two techniques. We demonstrate that neither the global clock nor TLC is optimal across applications. To counter this challenge, we introduce two optimization techniques. The first, Adaptive Clock (AC), dynamically selects one of the two validation techniques based on the probability of conflicts; AC is a speculative approach that relies on software O-GEHL predictors to anticipate future conflicts. The second, AC+, reduces the timing overhead of the O-GEHL predictors by implementing them in hardware. In addition, we exploit information theory to eliminate unnecessary computational resources and reduce the storage requirements of the O-GEHL predictors. Our evaluation with TL2 and the STAMP benchmark suite reveals that AC is effective and improves the execution time of transactional applications by up to 65%.
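The global-clock validation scheme being contended over can be sketched in a few lines. The toy model below is a sequential, single-process illustration of TL2-style commit-time validation (no locks, no threads, so it is not the real TL2 protocol): a transaction snapshots the clock at start and aborts at commit if any location it read was written after that snapshot.

```python
class GlobalClockSTM:
    """Toy model of global-clock STM validation (TL2-style, sequential)."""
    def __init__(self):
        self.clock = 0
        self.mem = {}        # addr -> (value, version)

    def begin(self):
        return {"rv": self.clock, "reads": set(), "writes": {}}

    def read(self, tx, addr):
        if addr in tx["writes"]:                  # read-your-own-writes
            return tx["writes"][addr]
        value, version = self.mem.get(addr, (0, 0))
        if version > tx["rv"]:
            raise RuntimeError("abort: inconsistent read")
        tx["reads"].add(addr)
        return value

    def write(self, tx, addr, value):
        tx["writes"][addr] = value                # buffered until commit

    def commit(self, tx):
        for addr in tx["reads"]:                  # validate the read set
            if self.mem.get(addr, (0, 0))[1] > tx["rv"]:
                return False                      # conflict -> abort
        self.clock += 1      # the contended increment the abstract describes
        for addr, value in tx["writes"].items():
            self.mem[addr] = (value, self.clock)
        return True

stm = GlobalClockSTM()
t1 = stm.begin()
_ = stm.read(t1, "x")
t2 = stm.begin()
stm.write(t2, "x", 42)
assert stm.commit(t2)
committed = stm.commit(t1)   # t1's read of "x" is now stale
```

Every committing writer increments `self.clock`, which is the shared-counter contention the paper targets; TLC replaces that single counter with per-thread clocks at the price of the false aborts the abstract mentions.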
Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号