Article Search
Full text (fee-based): 2,489 articles
Free: 236 articles
Free (domestic): 24 articles

By subject:
Electrical engineering: 53
General: 11
Chemical industry: 739
Metalworking: 83
Machinery and instruments: 129
Building science: 108
Mining engineering: 6
Energy and power: 162
Light industry: 303
Water resources engineering: 35
Petroleum and natural gas: 28
Radio and electronics: 206
General industrial technology: 409
Metallurgical industry: 64
Atomic energy technology: 24
Automation technology: 389

By year:
2024: 15
2023: 50
2022: 93
2021: 195
2020: 187
2019: 198
2018: 258
2017: 203
2016: 207
2015: 110
2014: 199
2013: 298
2012: 187
2011: 179
2010: 122
2009: 93
2008: 38
2007: 27
2006: 24
2005: 12
2004: 15
2003: 9
2002: 6
2000: 2
1999: 2
1997: 1
1996: 3
1995: 3
1994: 2
1992: 3
1991: 2
1989: 2
1987: 1
1980: 1
1979: 1
1974: 1

Sort order: 2,749 results in total (search time: 17 ms)
11.
The weighted principal component analysis technique is employed to reconstruct the reflectance spectra of surface colors from their tristimulus values. A dynamic eigenvector subspace, formed by applying suitable weights to the reflectance data of Munsell color chips, is constructed for each sample, and the color difference between the target and each member of the Munsell dataset is chosen as the criterion for determining the weighting factors. This method increases the influence on the extracted principal eigenvectors of samples that are closer to the target and correspondingly diminishes the effect of samples with larger color differences. The performance of the suggested method is evaluated by reconstructing the spectral reflectances of three different collections of colored samples using the first three Munsell bases. The resulting spectra show considerable improvements over the standard PCA method, both in root mean square error between the actual and reconstructed reflectance curves and in CIELAB color difference under illuminant A. © 2008 Wiley Periodicals, Inc. Col Res Appl, 33, 360–371, 2008
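As a rough illustration of the weighting idea, the sketch below forms a weighted covariance of a synthetic, stand-in Munsell reflectance set, extracts the first three eigenvectors, and solves for the reconstruction coefficients from the tristimulus values. The inverse-difference weighting form, the exponent, and all data are assumptions, not the paper's exact formulation.

```python
import numpy as np

def reconstruct_reflectance(target_xyz, munsell_R, M, delta_e, eps=1e-6, power=2.0):
    """Weighted-PCA reflectance reconstruction sketch.

    target_xyz : (3,) tristimulus values of the target color
    munsell_R  : (n_samples, n_wl) Munsell reflectance dataset
    M          : (3, n_wl) matrix mapping reflectance to XYZ
                 (illuminant times color-matching functions, normalized)
    delta_e    : (n_samples,) color difference of each chip to the target
    """
    # Weight each Munsell chip by inverse color difference, so chips
    # closer to the target dominate the extracted eigenvectors.
    w = 1.0 / (delta_e + eps) ** power
    w /= w.sum()

    # Weighted mean and weighted covariance of the reflectance data.
    r_bar = w @ munsell_R
    X = munsell_R - r_bar
    cov = (X * w[:, None]).T @ X

    # First three weighted principal eigenvectors (basis spectra).
    eigvecs = np.linalg.eigh(cov)[1]
    V = eigvecs[:, ::-1][:, :3]                      # (n_wl, 3)

    # Solve XYZ = M (r_bar + V a) for the three coefficients a.
    a = np.linalg.solve(M @ V, target_xyz - M @ r_bar)
    return r_bar + V @ a

# Tiny synthetic demo (stand-in data: 31 wavelengths, 100 "chips").
rng = np.random.default_rng(1)
R = rng.uniform(0.05, 0.95, (100, 31))
M = rng.uniform(0.0, 1.0, (3, 31))
de = rng.uniform(0.5, 30.0, 100)
print(reconstruct_reflectance(M @ R[0], R, M, de)[:5])
```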
12.

Ultra-high-performance concrete (UHPC) is a recent class of concrete with improved rheological, mechanical, and durability properties compared to traditional concrete. The production cost of UHPC is considerably high because of the large amount of cement used and the high price of other required constituents such as quartz powder, silica fume, fibres, and superplasticisers. To meet specific requirements such as a desired production cost, strength, and flowability, the proportions of UHPC's constituents must be carefully adjusted. Traditional concrete mixture design requires a cumbersome, costly, and extensive experimental programme. Therefore, mathematical optimisation, design of experiments (DOE), and statistical mixture design (SMD) methods have been used in recent years, particularly for meeting multiple objectives. In traditional methods, simple regression models such as multiple linear regression are used as objective functions, and once a model is constructed, mathematical programming and simplex algorithms are typically used to find optimal solutions. However, a more flexible procedure is required, one that admits high-accuracy nonlinear models and supports different scenarios for multi-objective mixture design, particularly when the data are not well suited to simple regression models. This paper demonstrates a procedure that integrates machine learning (ML) algorithms, such as Artificial Neural Networks (ANNs) and Gaussian Process Regression (GPR), to develop high-accuracy models with a metaheuristic optimisation algorithm, Particle Swarm Optimisation (PSO), for the multi-objective mixture design and optimisation of UHPC reinforced with steel fibres. A reliable experimental dataset is used to develop the models and validate the final results. The agreement between the obtained and experimental results confirms the capability of the proposed procedure. The procedure not only reduces the experimental effort in UHPC design but also yields optimal mixtures when the designer faces strength-flowability-cost trade-offs.
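The following sketch illustrates the overall ML-plus-PSO loop described above: Gaussian process surrogates (standing in for the paper's ANN/GPR models) are fitted to a synthetic dataset, and a minimal particle swarm search minimises a hypothetical material cost under penalty-enforced strength and flowability targets. The column meanings, unit costs, bounds, and targets are all illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for the experimental dataset. Columns (assumed):
# [cement, silica fume, quartz powder, steel fibre, superplasticiser, w/b]
lo = np.array([600, 50, 0, 0, 10, 0.15])
hi = np.array([1000, 300, 400, 160, 60, 0.25])
X = rng.uniform(lo, hi, (80, 6))
strength = 80 + 0.08*X[:, 0] + 0.1*X[:, 1] + 0.2*X[:, 3] - 300*X[:, 5] + rng.normal(0, 3, 80)
flow = 300 - 0.5*X[:, 3] + 400*X[:, 5] - 0.05*X[:, 1] + rng.normal(0, 5, 80)

# Surrogate response models playing the role of the paper's ANN/GPR models.
m_strength = GaussianProcessRegressor(normalize_y=True).fit(X, strength)
m_flow = GaussianProcessRegressor(normalize_y=True).fit(X, flow)

cost = np.array([0.15, 0.8, 0.12, 3.0, 4.0, 0.0])   # hypothetical unit costs

def fitness(x):
    """Material cost to minimise, penalised when hypothetical targets
    (strength >= 150 MPa, flow >= 200 mm) are violated."""
    s = m_strength.predict(x.reshape(1, -1))[0]
    f = m_flow.predict(x.reshape(1, -1))[0]
    return x @ cost + 1e3 * max(0, 150 - s) + 1e3 * max(0, 200 - f)

# Minimal particle swarm optimisation over the mixture bounds.
pos = rng.uniform(lo, hi, (30, 6)); vel = np.zeros_like(pos)
pbest = pos.copy(); pbest_f = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(100):
    r1, r2 = rng.random((2, 30, 6))
    vel = 0.7*vel + 1.5*r1*(pbest - pos) + 1.5*r2*(gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([fitness(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()
print("optimised mixture:", gbest.round(2))
```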
13.
International Journal of Computer Vision - Visual place recognition (VPR) is the process of recognising a previously visited place using visual information, often under varying appearance...
14.

Today, XML is used as a de facto standard to broadcast data over mobile wireless networks. In these networks, mobile clients send their XML queries over a wireless broadcast channel and receive their desired XML data from the channel. However, downloading the whole XML document is a challenge for a mobile device, since clients' devices are small, battery-powered, and resource-limited. To meet this challenge, the XML data should be indexed so that the desired data can be located easily and only that data, rather than the whole document, is downloaded by mobile clients. Several indexing methods have been proposed for selectively accessing XML data over an XML stream; however, existing methods increase the stream size by embedding extra information in it. In this paper, a new XML stream structure is proposed that disseminates XML data over a broadcast channel by grouping and summarizing the structural information of XML nodes. Summarizing this information reduces the size of the XML stream and therefore the latency of retrieving the desired XML data over a wireless broadcast channel. The proposed stream structure also contains indexes for skipping the irrelevant parts of the stream, which reduces the energy that mobile devices consume when downloading query results. In addition, the proposed structure can process different types of XML queries, and experimental results show that it improves the performance of XML query processing over the data stream compared with existing work in terms of access and tuning times.
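As a sketch of the grouping-and-summarizing idea, the snippet below builds a DataGuide-like path summary: nodes sharing the same root-to-node path are grouped, so a client could skip whole groups irrelevant to its query. The paper's exact stream layout and index encoding are not reproduced here.

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

def build_path_summary(xml_text):
    """Group node values by their root-to-node path. Each distinct path
    would be streamed once with its grouped values, letting a client
    skip every group irrelevant to its query instead of downloading
    the whole document (an illustration of the general idea only)."""
    root = ET.fromstring(xml_text)
    groups = defaultdict(list)

    def walk(elem, path):
        path = path + "/" + elem.tag
        if elem.text and elem.text.strip():
            groups[path].append(elem.text.strip())
        for child in elem:
            walk(child, path)

    walk(root, "")
    return groups

doc = "<lib><book><title>A</title></book><book><title>B</title></book></lib>"
for path, values in build_path_summary(doc).items():
    print(path, values)          # e.g. /lib/book/title ['A', 'B']
```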
15.
Combining accurate neural networks (NN) in an ensemble with negative error correlation greatly improves generalization ability. Mixture of experts (ME) is a popular combining method that employs a special error function to train NN experts simultaneously so that they become negatively correlated. However, unlike the negative correlation learning (NCL) method, ME does not include an explicit control parameter for adjusting this correlation. In this study, an approach is proposed to introduce this advantage of NCL into the training algorithm of ME, yielding the mixture of negatively correlated experts (MNCE). In the proposed method, the control parameter of NCL is incorporated into the error function of ME, which enables the training algorithm to strike a better balance in the bias-variance-covariance trade-off and thus improves generalization. The proposed hybrid ensemble method, MNCE, is compared with its constituent methods, ME and NCL, on several benchmark problems. The experimental results show that the proposed ensemble method significantly outperforms the original methods.  相似文献   
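A minimal sketch of the NCL-style penalty that MNCE folds into the ME error function is shown below; the simplified gradient follows the usual NCL derivation, and ME's gating network and posterior weighting are omitted.

```python
import numpy as np

def mnce_gradients(outputs, target, lam):
    """Per-expert output gradients with an NCL-style penalty (a sketch).

    Expert i minimises
        e_i = 0.5*(F_i - d)**2 + lam*(F_i - Fbar)*sum_{j != i}(F_j - Fbar),
    and the usual NCL simplification gives the gradient
        de_i/dF_i = (F_i - d) - lam*(F_i - Fbar).
    lam = 0 recovers independently trained experts; a larger lam pushes
    each expert away from the ensemble mean, i.e. toward negatively
    correlated errors.
    """
    f_bar = outputs.mean()
    return (outputs - target) - lam * (outputs - f_bar)

outs = np.array([0.9, 1.2, 1.1])            # hypothetical expert outputs
print(mnce_gradients(outs, target=1.0, lam=0.5))
```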
16.
In this article, we consider the project critical path problem in an environment with hybrid uncertainty, where activity durations are modeled as random fuzzy variables that are simultaneously probabilistic and fuzzy in nature. To obtain a robust critical path under this kind of uncertainty, a chance-constrained programming model is used. This model is converted to a deterministic one in two stages: first, the uncertain model is converted to a model with interval parameters using the alpha-cut method and distribution function concepts; second, the interval model is converted to a deterministic model using robust optimization and the min-max regret criterion. A genetic algorithm combined with a proposed exact algorithm is then applied to solve the final model. Finally, numerical examples are given to show the efficiency of the solution procedure.  相似文献   
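The sketch below illustrates only the first conversion stage: triangular fuzzy durations are reduced to intervals by an alpha-cut, and interval bounds on the completion time are propagated through a toy activity network. The min-max regret model, the distribution-function step for the random part, and the GA are not shown; the network and numbers are hypothetical.

```python
def alpha_cut(tri, alpha):
    """Alpha-cut of a triangular fuzzy number (a, m, b) -> interval."""
    a, m, b = tri
    return (a + alpha * (m - a), b - alpha * (b - m))

def interval_completion_time(succ, durations, alpha, order):
    """Lower/upper bounds on project completion time when every activity
    duration is reduced to an interval by an alpha-cut. succ maps a node
    to its (successor node, activity) pairs; order is topological."""
    lo = {n: 0.0 for n in order}
    hi = {n: 0.0 for n in order}
    for n in order:
        for nxt, act in succ.get(n, []):
            dlo, dhi = alpha_cut(durations[act], alpha)
            lo[nxt] = max(lo[nxt], lo[n] + dlo)
            hi[nxt] = max(hi[nxt], hi[n] + dhi)
    end = order[-1]
    return lo[end], hi[end]

# Tiny example network: 1 -A-> 2 -C-> 3, and 1 -B-> 3.
durations = {"A": (2, 3, 5), "B": (6, 7, 10), "C": (1, 2, 3)}
succ = {1: [(2, "A"), (3, "B")], 2: [(3, "C")]}
print(interval_completion_time(succ, durations, alpha=0.5, order=[1, 2, 3]))
```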
17.
Electrospinning with a collector consisting of two pieces of electrically conductive substrate separated by a gap was used to prepare uniaxially aligned PAN nanofibers. A 15 wt % solution of PAN in DMF was used initially for electrospinning. The effects of gap width and applied voltage on the degree of alignment were investigated with an image-processing technique based on the Fourier power spectrum method. The electrospinning conditions giving the best nanofiber alignment for solution concentrations of 10–15 wt % were determined experimentally. Multifilament-yarn-like bundles of uniaxially aligned nanofibers were prepared using a new, simple method, and these bundles were after-treated in boiling water under tension. The crystallinity and mechanical behavior of post-treated and untreated bundles were compared. © 2006 Wiley Periodicals, Inc. J Appl Polym Sci 101: 4350–4357, 2006  相似文献   
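As an illustration of the Fourier power spectrum approach to quantifying alignment, the sketch below computes the angular distribution of a micrograph's 2D power spectrum and reports a simple peak-sharpness index; aligned fibres concentrate spectral energy along one direction. The paper's exact metric and preprocessing are assumptions here.

```python
import numpy as np

def alignment_from_fft(image, n_bins=180):
    """Angular distribution of the 2D power spectrum as a crude
    alignment index: the sharper the angular peak, the more aligned
    the fibres in the image (a sketch, not the paper's exact metric)."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = spectrum.shape
    y, x = np.indices((h, w))
    theta = np.arctan2(y - h // 2, x - w // 2) % np.pi   # orientation in [0, pi)
    bins = (theta / np.pi * n_bins).astype(int) % n_bins
    profile = np.bincount(bins.ravel(), weights=spectrum.ravel(), minlength=n_bins)
    return profile.max() / profile.sum()                 # peak sharpness

# Synthetic "aligned" stripe pattern as a stand-in micrograph.
img = np.tile(np.sin(np.linspace(0, 20 * np.pi, 128)), (128, 1))
print(alignment_from_fft(img))
```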
18.
Parallel machines are used extensively to increase computational speed in solving scientific problems. Various topologies with different properties have been proposed, each suitable for specific applications. Pyramid interconnection networks are potentially powerful architectures for applications such as image processing, visualization, and data mining. Their major advantage for image-processing systems is the hierarchical abstraction and transfer of data toward the apex node, much like the human visual system, which extracts an object from an image. There is a rapidly growing class of applications in which multidimensional datasets must be processed simultaneously; such systems need a symmetric and expandable interconnection network that can process data arriving from different directions and forward them toward the apex. In this paper, a new type of pyramid interconnection network, the Non-Flat Surface Level (NFSL) pyramid, is proposed. NFSL pyramids are constructed from L-level A-lateral-base pyramids, called basic pyramids, so that the apex node is surrounded by the level-one surfaces of the NFSL, i.e., the level of nodes in the basic pyramids nearest the apex. Two topologies, NFSL-T and NFSL-Q, derived from trilateral-base and quadrilateral-base basic pyramids respectively, are studied to exemplify the proposed structure. To evaluate the proposed architecture, the most important network properties are determined and compared with those of standard pyramid networks and their variants.  相似文献   
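For context on the standard pyramid baseline against which NFSL is compared, the snippet below counts the nodes of an L-level quadrilateral-base pyramid, in which level l (with the apex at level 0) is a 2^l x 2^l mesh. The corresponding counts for NFSL depend on its basic-pyramid arrangement and are not reproduced here.

```python
def quad_pyramid_nodes(L):
    """Node count of a standard L-level quadrilateral-base pyramid:
    sum of 4**l for l = 0..L, i.e. (4**(L+1) - 1) // 3."""
    return sum(4 ** l for l in range(L + 1))

for L in range(1, 6):
    print(L, quad_pyramid_nodes(L))   # 1 5, 2 21, 3 85, 4 341, 5 1365
```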
19.
In this paper, a novel algorithm for image encryption based on a hash function is proposed. In our algorithm, a 512-bit external secret key is used as the input to the Salsa20 hash function. First, the hash function is modified to generate a key stream more suitable for image encryption. The final encryption key stream is then produced by correlating the key stream with the plaintext, providing both key sensitivity and plaintext sensitivity. The scheme achieves high sensitivity, high complexity, and high security with only two rounds of diffusion. In the first round, the original image is partitioned horizontally into an array of 1,024 sections of size 8 × 8; in the second round, the same operation is applied vertically to the transpose of the resulting array. The main idea of the algorithm is to use the average of the image data for encryption: each section is encrypted using the average of the other sections, so the algorithm uses different averages for different input images (even with the same hash-based key sequence). This significantly increases the resistance of the cryptosystem against known/chosen-plaintext and differential attacks. It is demonstrated that the 2D correlation coefficient (CC), peak signal-to-noise ratio (PSNR), encryption quality (EQ), entropy, mean absolute error (MAE), and decryption quality satisfy the security and performance requirements (CC < 0.002177, PSNR < 8.4642, EQ > 204.8, entropy > 7.9974, and MAE > 79.35). Number of pixel change rate (NPCR) analysis has revealed that when only one pixel of the plain image is modified, almost all cipher pixels change (NPCR > 99.6125 %), and the unified average changing intensity is high (UACI > 33.458 %). Moreover, the proposed algorithm is very sensitive to small changes (e.g., modification of a single bit) in the external secret key (NPCR > 99.65 %, UACI > 33.55 %). The algorithm is shown to yield better security performance than other algorithms reported in the literature.  相似文献   
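The sketch below imitates one diffusion round in the spirit of the described scheme: each 8 × 8 section is masked with a keystream and with the average of the other sections, so a single changed pixel influences every cipher section. SHA-512 stands in for the paper's modified Salsa20 keystream, and the partitioning and coupling details are simplifying assumptions, not the published algorithm.

```python
import hashlib
import numpy as np

def diffuse(sections, keystream):
    """One diffusion round over (1024, 8, 8) sections: each section is
    offset by the mean of the *other* sections plus its keystream block
    (simplified sketch of the section-average coupling)."""
    total = sections.sum(axis=0)              # promoted to int64 by numpy
    n = len(sections)
    out = np.empty_like(sections)
    for i, s in enumerate(sections):
        other_avg = (total - s) / (n - 1)     # average of the other sections
        out[i] = (s + other_avg.astype(np.uint16) + keystream[i]) % 256
    return out.astype(np.uint8)

key = b"\x00" * 64                            # 512-bit external secret key
img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
sections = img.reshape(1024, 8, 8).astype(np.uint16)

# Stand-in keystream: SHA-512 in place of the modified Salsa20.
stream = np.frombuffer(
    b"".join(hashlib.sha512(key + i.to_bytes(4, "big")).digest()
             for i in range(1024)),
    dtype=np.uint8).reshape(1024, 8, 8).astype(np.uint16)

cipher = diffuse(sections, stream)
```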
20.
Time-based Software Transactional Memory (STM) exploits a global clock to validate transactional data and guarantee the consistency of transactions. While this method is simple to implement, it causes contention over the clock when transactions commit simultaneously. The alternative is the thread-local clock (TLC), which exploits local variables to maintain transactional consistency; however, TLC may increase false aborts and degrade STM performance. In this paper, we analyze the global clock and TLC in the context of STM systems, highlighting both the implementation trade-offs and the performance implications of the two techniques, and demonstrate that neither is optimal across all applications. To counter this challenge, we introduce two optimization techniques. The first, Adaptive Clock (AC), dynamically selects one of the two validation techniques based on the probability of conflicts; AC is a speculative approach that relies on software O-GEHL predictors to forecast future conflicts. The second, AC+, reduces the timing overhead of the O-GEHL predictors by implementing them in hardware. In addition, we exploit information theory to eliminate unnecessary computational resources and reduce the storage requirements of the O-GEHL predictors. Our evaluation with TL2 and the STAMP benchmark suite reveals that AC is effective and improves the execution time of transactional applications by up to 65%.  相似文献   
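A toy O-GEHL-style predictor of the kind AC might consult is sketched below: several tables indexed by geometrically increasing history lengths vote, and the signed sum decides whether the next transaction is likely to conflict, and hence which validation technique to use. Table sizes, history lengths, and the update rule are illustrative assumptions.

```python
class OGehlPredictor:
    """Minimal O-GEHL-style conflict predictor (a sketch, not the
    paper's implementation)."""

    def __init__(self, n_tables=4, size=256, threshold=2):
        self.tables = [[0] * size for _ in range(n_tables)]
        self.hist_len = [2 ** i for i in range(n_tables)]  # geometric histories
        self.history = []            # recent outcomes: 1 = conflict, 0 = clean
        self.size = size
        self.threshold = threshold

    def _index(self, t, pc):
        h = tuple(self.history[-self.hist_len[t]:])
        return hash((pc, h)) % self.size

    def predict_conflict(self, pc):
        s = sum(tab[self._index(t, pc)] for t, tab in enumerate(self.tables))
        return s >= self.threshold   # True -> validate with the global clock

    def update(self, pc, conflicted):
        delta = 1 if conflicted else -1
        for t, tab in enumerate(self.tables):
            i = self._index(t, pc)
            tab[i] = max(-8, min(7, tab[i] + delta))       # saturating counters
        self.history.append(1 if conflicted else 0)

p = OGehlPredictor()
use_global_clock = p.predict_conflict(pc=0x42)
p.update(pc=0x42, conflicted=True)
```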