91.
Twenty-four semi-hard cheeses were manufactured and ripened at an industrial scale during the autumn (n = 12) and summer (n = 12) periods. Tryptophan and vitamin A fluorescence spectra were recorded for the 24 cheeses at 2, 30 and 60 days of ripening. Principal component analysis (PCA) and factorial discriminant analysis (FDA) were applied to the spectral data sets. The first five principal components (PCs) extracted from each data set (tryptophan or vitamin A) for cheeses produced during the autumn or summer period were pooled into a single matrix and analysed by FDA. For cheeses produced during the autumn period, 95.8% of calibration samples and 86.1% of validation samples were correctly classified. Similar results were obtained for cheeses produced during the summer period. Finally, a concatenation technique was applied to the tryptophan and vitamin A spectra recorded on the cheeses independently of their production season. Correct classification rates of 87.5% and 80.6% were observed for the calibration and validation samples, respectively. Although this statistical technique did not allow 100% correct classification for all groups, the results are promising given the significant effect of season on cheese characteristics.
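The pooling step described above (first five PCs of each spectral set, concatenated into one matrix for FDA) can be sketched as follows. This is a minimal numpy illustration on synthetic spectra; the data shapes and wavelength counts are assumptions, not the study's actual measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_scores(X, k):
    # Scores on the first k principal components, via SVD of the centred data
    Xc = X - X.mean(axis=0)
    U, s, _ = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :k] * s[:k]

# Synthetic stand-ins for the tryptophan and vitamin A emission spectra
# (24 cheeses x 200 wavelengths each; the shapes are illustrative only)
trp = rng.normal(size=(24, 200))
vita = rng.normal(size=(24, 200))

# Pool the first five PCs of each spectral set into a single 24 x 10 matrix;
# this pooled matrix is what the factorial discriminant analysis then receives.
pooled = np.hstack([pca_scores(trp, 5), pca_scores(vita, 5)])
```

The FDA step itself would then be fitted on `pooled` against the group labels (season or ripening time).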
92.
93.
The present work investigates the contribution of the extended finite element method to the large deformation of cracked bodies in the plane-strain approximation. The lack of sufficient mathematical tools and proofs for such a problem makes the study exploratory. First, the asymptotic solution is presented. Then, a numerical analysis is performed to verify the pertinence of the solution given by the asymptotic procedure, because it serves as an enrichment basis for the eXtended finite element method. Finally, a convergence study is carried out to show the contribution of the method. Copyright © 2014 John Wiley & Sons, Ltd.
94.
Assessing the cheese-making properties (CMP) of milk with a rapid and cost-effective method is of particular interest for the Protected Designation of Origin cheese sector. The aims of this study were to evaluate the potential of mid-infrared (MIR) spectra to estimate coagulation and acidification properties, as well as curd yield (CY) traits, of Montbéliarde cow milk. Samples from 250 cows were collected in 216 commercial herds in Franche-Comté with the objective of maximizing both genetic diversity and variation in milk composition. All coagulation and CY traits showed high variability (10 to 43%). Reference analyses performed for soft (SC) and pressed cooked (PCC) cheese technologies were matched with MIR spectra. Prediction models were built on 446 informative wavelengths unaffected by water absorbance, using several approaches: partial least squares (PLS), uninformative variable elimination PLS, random forest PLS, Bayes A, Bayes B, Bayes C, and Bayes RR. We assessed equation performance for a set of 20 CMP traits (coagulation: 5 for SC and 4 for PCC; acidification: 5 for SC and 3 for PCC; laboratory CY: 3) by comparing prediction accuracies based on cross-validation. Overall, variable selection before PLS did not significantly improve the performance of the PLS regression, the differences between Bayesian methods were negligible, and PLS models always outperformed Bayesian models, likely because informative wavelengths of the MIR spectra had already been selected. The best accuracies were obtained with PLS regression for curd yields expressed in dry matter (CYDM) or fresh weight (CYFRESH) and for coagulation traits (curd firmness for PCC and SC). Prediction models for the other CMP traits were moderately to poorly accurate. Regardless of the prediction methodology, the best results were always obtained for CY traits, probably because these traits are closely related to milk composition. The CYDM predictions showed coefficient of determination (R2) values up to 0.92 and 0.87, and RSy,x values of 3 and 4%, for PLS and Bayesian regressions, respectively. Finally, we divided the data set into calibration (2/3) and validation (1/3) sets and developed prediction models in external validation using PLS regression only. In conclusion, we confirmed, in the validation set, an excellent prediction for CYDM [R2 = 0.91, ratio of performance to deviation (RPD) = 3.39] and a very good prediction for CYFRESH (R2 = 0.84, RPD = 2.49), adequate for analytical purposes. We also obtained good results for both PCC and SC curd firmness traits (R2 ≥ 0.70, RPD ≥ 1.8), which enable quantitative prediction.
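As a rough sketch of the PLS workflow above (2/3 calibration, 1/3 external validation, then R2 and RPD on the validation set), here is a minimal numpy-only PLS1 (NIPALS) on synthetic low-rank "spectra". The latent-factor data model, component count, and trait coefficients are assumptions for illustration, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)

def pls1(X, y, ncomp):
    # Minimal PLS1 (NIPALS): returns a regression vector and intercept
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(ncomp):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)
        t = Xc @ w
        tt = t @ t
        p = Xc.T @ t / tt
        W.append(w); P.append(p); q.append(yc @ t / tt)
        Xc = Xc - np.outer(t, p)   # deflate X
        yc = yc - q[-1] * t        # deflate y
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)
    return B, y_mean - x_mean @ B

# Synthetic stand-in: spectra = latent "chemical" factors x spectral
# signatures + noise, with a trait driven by the same factors
T = rng.normal(size=(250, 5))
L = rng.normal(size=(5, 446))                 # 446 informative wavelengths
X = T @ L + 0.5 * rng.normal(size=(250, 446))
y = T @ np.array([1.0, 0.5, -0.5, 0.3, 0.2]) + 0.1 * rng.normal(size=250)

idx = rng.permutation(250)                     # 2/3 calibration, 1/3 validation
cal, val = idx[:167], idx[167:]
B, b0 = pls1(X[cal], y[cal], ncomp=10)
y_hat = X[val] @ B + b0

rmsep = np.sqrt(np.mean((y[val] - y_hat) ** 2))
r2 = 1 - np.sum((y[val] - y_hat) ** 2) / np.sum((y[val] - y[val].mean()) ** 2)
rpd = y[val].std(ddof=1) / rmsep               # ratio of performance to deviation
```

An RPD above about 2, as reported for CYDM and CYFRESH, is conventionally taken to indicate a model usable for quantitative prediction.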
95.
GEMNET is a generalization of shuffle-exchange networks that can represent a family of network structures (including ShuffleNet and the de Bruijn graph) for an arbitrary number of nodes. GEMNET employs a regular interconnection graph with highly desirable properties such as small nodal degree, simple routing, small diameter, and growth capability (viz. scalability). GEMNET can serve as a logical (virtual), packet-switched, multihop topology for constructing the next generation of lightwave networks using wavelength-division multiplexing (WDM). Various properties of GEMNET are studied.
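The abstract names the de Bruijn graph as one member of the GEMNET family. As a hedged illustration of the kind of regular structure and simple shift-based routing involved, here is a minimal sketch of the standard (P, D) de Bruijn construction (not GEMNET's general column/row addressing, which the paper defines):

```python
# A (P, D) de Bruijn graph has N = P**D nodes of out-degree P;
# node v links to (v * P + p) mod N for p = 0..P-1.
def de_bruijn_neighbors(v, P, D):
    N = P ** D
    return [(v * P + p) % N for p in range(P)]

# Multihop route: shift the destination's base-P digits in one at a time,
# reaching any node in at most D hops (the graph diameter).
def route(src, dst, P, D):
    N = P ** D
    path, v = [src], src
    for i in reversed(range(D)):
        digit = (dst // P ** i) % P
        v = (v * P + digit) % N
        path.append(v)
    return path
```

This fixed shift-register route always takes D hops; actual de Bruijn routing can shortcut when the source and destination address strings overlap, which is part of why diameters stay small.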
96.
The dependency of the output voltage and frequency of the isolated self-excited induction generator on speed, load, and terminal capacitance places certain limitations on its performance. In this study, the performance of the induction generator under a wide range of operating conditions is examined. It is found that the machine operates only within certain ranges of these elements and that all generated currents and voltages are bounded. It is also shown that a combination of these elements exists that is optimal for maximum power generation.
97.
A statistical quantization model is used to analyze the effects of quantization when digital techniques are used to implement a real-valued feedforward multilayer neural network. In this process, a parameter called the effective nonlinearity coefficient, which is important in studying quantization effects, is introduced. General statistical formulations of the performance degradation of the neural network caused by quantization are developed as functions of the quantization parameters. The formulations predict that the network's performance degradation worsens as the number of bits is decreased; that a change in the number of hidden units in a layer has no effect on the degradation; that, for a constant effective nonlinearity coefficient and number of bits, an increase in the number of layers leads to worse performance degradation; and that the number of bits in successive layers can be reduced if the neurons of the lower layer are nonlinear.
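The first prediction (degradation worsens as bits decrease) can be illustrated with a toy simulation. The network size, uniform quantizer, weight range, and data below are arbitrary assumptions; the paper's treatment is an analytical statistical formulation, not a simulation:

```python
import numpy as np

rng = np.random.default_rng(2)

def quantize(w, bits, w_max=1.0):
    # Uniform quantizer over [-w_max, w_max] with 2**bits levels
    step = 2 * w_max / (2 ** bits)
    return np.clip(np.round(w / step) * step, -w_max, w_max)

# A tiny two-layer feedforward network with tanh hidden units
W1 = rng.uniform(-1, 1, size=(8, 4))
W2 = rng.uniform(-1, 1, size=(4, 1))
X = rng.normal(size=(100, 8))

def forward(W1, W2):
    return np.tanh(X @ W1) @ W2

y_ref = forward(W1, W2)  # full-precision reference output

def degradation(bits):
    # Mean squared output deviation caused by quantizing all weights
    y_q = forward(quantize(W1, bits), quantize(W2, bits))
    return np.mean((y_q - y_ref) ** 2)

errs = {b: degradation(b) for b in (10, 6, 3)}
```

With a uniform quantizer the weight error per coefficient scales with the step size, so halving the bit width roughly quadruples the step and the output MSE grows accordingly.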
98.
Network security management is a complex and costly task, owing to the diversity and large number of assets to protect from potential threats. It is difficult for enterprises to ensure complete security of their information technology resources, so they must give priority to critical and vulnerable assets. Thus, for each asset, they assess the risks associated with various threats and then, depending on the risk level, decide which assets need particular security treatment. In this paper, we propose a novel risk assessment framework based on a set of reversible metrics, built on new metrics for the likelihood and impact parameters. These metrics primarily aim to solve the problem of weighting risk factors, which otherwise leads to different risk values. The proposed metrics are classified and aggregated to provide a single risk metric, using a new bitwise aggregation method called 'bit alternation'. This method ensures the reversibility of the likelihood and impact metrics and has many advantages: unifying metrics, diagnosing the cause of high risks, comparing risk values calculated with different weighting strategies, exchanging standard risk values, etc. To illustrate the method, we applied it to assess the risks of several distributed denial-of-service attacks for an e-commerce enterprise that wants to evaluate the security level of its retail web server. To demonstrate the effectiveness of our results, we compared them with those obtained by the weighted average method. Copyright © 2016 John Wiley & Sons, Ltd.
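The abstract does not define 'bit alternation' in detail; one natural reading is Morton-style bit interleaving, sketched below under that assumption (the 8-bit metric width is also assumed). The key property the paper claims, reversibility, holds because no bit of either input metric is lost:

```python
def interleave(likelihood, impact, bits=8):
    # Alternate the bits of the two metrics into one aggregate value:
    # likelihood bits land at even positions, impact bits at odd positions.
    agg = 0
    for i in range(bits):
        agg |= ((likelihood >> i) & 1) << (2 * i)
        agg |= ((impact >> i) & 1) << (2 * i + 1)
    return agg

def deinterleave(agg, bits=8):
    # Reversibility: recover both original metrics from the aggregate
    likelihood = impact = 0
    for i in range(bits):
        likelihood |= ((agg >> (2 * i)) & 1) << i
        impact |= ((agg >> (2 * i + 1)) & 1) << i
    return likelihood, impact
```

Because the aggregate can always be decomposed again, one can diagnose whether a high risk score is driven by likelihood or by impact, as the abstract advertises.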
99.
We present a novel method for detecting circles in digital images. The transform, called the circlet transform, can be seen as an extension of classical 1D wavelets to 2D: each basic element is a circle convolved with a 1D oscillating function. In comparison with other circle-detection methods, mainly the Hough transform, the circlet transform takes into account the finite-frequency aspect of the data: a circular shape is not restricted to an ideal circle but has a certain width. The transform operates directly on the image gradient and does not need further binary segmentation. The implementation is efficient, as it consists of a few fast Fourier transforms. The circlet transform is coupled with a soft-thresholding process and applied to a series of real images from different fields: ophthalmology, astronomy, and oceanography. The results show the effectiveness of the method on real images with blurry edges.
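The FFT-based, finite-width flavour of the approach can be sketched by correlating an edge map with a soft ring template via the Fourier transform. This is a simplified matched filter, not the circlet transform itself; the Gaussian ring profile and the synthetic test image are assumptions for illustration:

```python
import numpy as np

def detect_circle_centers(edges, radius, width=1.0):
    # Correlate an edge/gradient map with a soft ring of the given radius
    # via FFT; peaks in the response mark candidate circle centres.
    h, w = edges.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h // 2, xx - w // 2)
    ring = np.exp(-((r - radius) ** 2) / (2 * width ** 2))  # finite width, not an ideal circle
    ring = np.fft.ifftshift(ring)  # move the template centre to the origin
    resp = np.fft.ifft2(np.fft.fft2(edges) * np.conj(np.fft.fft2(ring))).real
    return resp

# Synthetic test image: a single circle of radius 10 centred at (32, 32)
yy, xx = np.mgrid[:64, :64]
edges = (np.abs(np.hypot(yy - 32, xx - 32) - 10) < 1).astype(float)
resp = detect_circle_centers(edges, radius=10)
peak = np.unravel_index(resp.argmax(), resp.shape)
```

Only a handful of FFTs are needed per radius, which reflects the efficiency argument made in the abstract.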
100.
According to Amendment 5 of the IEEE 802.11 standard, 802.11n still uses the distributed coordination function (DCF) access method as a mandatory function in access points and wireless stations (essentially to ensure compatibility with previous 802.11 versions). This article provides an accurate two-dimensional Markov chain model to investigate the throughput performance of IEEE 802.11n networks when frame aggregation and block acknowledgement (Block-ACK) schemes are adopted. The proposed model considers packet loss from both collisions and channel errors, and takes anomalous slots and the freezing of the backoff counter into account. The contribution of this work is the analysis of DCF performance under error-prone channels considering both 802.11n MAC schemes and the anomalous slot in the backoff process. To validate the accuracy of the proposed model, we compared its mathematical results with those obtained using the 802.11n DCF in the network simulator (NS-2) and with other analytical models of 802.11n DCF performance. Simulation results confirm the accuracy of our model.
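The classic two-dimensional Markov model that such analyses extend is Bianchi's saturation model of DCF, whose core is a fixed point between the per-slot transmission probability tau and the conditional collision probability p. A minimal sketch follows; the contention-window parameters and the bisection solver are illustrative choices, and the paper's additions (frame aggregation, channel errors, anomalous slots, backoff freezing) are omitted:

```python
def bianchi_g(p, W=32, m=5):
    # Transmission probability implied by collision probability p
    # (W: minimum contention window, m: maximum backoff stage)
    return 2 * (1 - 2 * p) / ((1 - 2 * p) * (W + 1) + p * W * (1 - (2 * p) ** m))

def bianchi_tau(n, W=32, m=5):
    # Solve the fixed point tau = g(1 - (1 - tau)**(n - 1)) by bisection,
    # where n is the number of saturated contending stations.
    lo, hi = 1e-9, 0.999
    for _ in range(100):
        mid = (lo + hi) / 2
        p = 1 - (1 - mid) ** (n - 1)
        if mid < bianchi_g(p, W, m):
            lo = mid
        else:
            hi = mid
    tau = (lo + hi) / 2
    return tau, 1 - (1 - tau) ** (n - 1)

tau, p = bianchi_tau(n=10)
```

Given tau and p, the model's throughput expression then weights payload delivered per slot against the durations of idle, successful, and collision (or, in the extended model, error-corrupted) slots.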