1.
2.
The province of Guizhou in Southwestern China is currently one of the world's most important mercury production areas. Emissions of mercury from the province have been estimated at approximately 12% of total global anthropogenic emissions to the atmosphere. The main objective of this study was to assess the level of Hg contamination in two geographical areas of Guizhou province. Mercury pollution in the area of Wanshan originates from mercury mining and ore processing, while in the area of Quingzhen it originates from the chemical industry, which discharges Hg through wastewaters, and from emissions to the atmosphere due to coal burning for electricity production. The results of this study confirmed high Hg contamination of soil, sediments and rice in the Hg mining area of Wanshan. High levels of Hg in soil and rice were also found in the vicinity of the chemical plant in Quingzhen. The concentrations of Hg decreased considerably with distance from the main sources of pollution. The general conclusion is that Hg contamination in Wanshan is geographically more widespread, owing to scavenging of Hg from contaminated air and its deposition on land. In Quingzhen, Hg contamination of soil is very high close to the chemical plant, but levels reach background concentrations at a distance of several km. Even though the major source of Hg in both areas is inorganic, active transformation of inorganic Hg to organic species (MeHg) was observed in water, sediments and soils. The concentration of total Hg in rice grains can reach up to 569 µg/kg, of which 145 µg/kg was in MeHg form; the percentage of Hg present as MeHg varied from 5 to 83%. The concentrations of selenium reach up to 16 mg/kg in soil and up to 1 mg/g in rice. A correlation exists between the concentration of Se in soil and in rice, indicating that a portion of Se is bioavailable to plants. No correlation between Hg and Se in rice was found. Exposure of the local population to Hg may occur through inhalation of Hg present in air (particularly in the Hg mining area) and through consumption of Hg-contaminated food (particularly rice and fish) and water. Comparison of intake through these different routes showed that, close to the emission sources, air Hg levels considerably exceed the US EPA Reference Concentration (RfC) for chronic Hg exposure of 0.0004 mg/m³. Intake of Hg through food consumption, particularly rice and fish, is also an important route of exposure in the study area. In general, the population most at risk is located in the vicinity of smelting facilities, mining activities and waste disposal sites in the wider Wanshan area. To assess the real level of contamination in the local population, biomonitoring is recommended, including Hg and MeHg measurements in hair, blood and urine samples.
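The dietary-intake comparison described in the abstract can be sketched as a back-of-envelope dose calculation. The concentrations are the maxima reported above; the consumption rate, body weight, and the reference dose used for comparison are assumptions for illustration, not figures from the paper:

```python
# Rough intake comparison (illustrative only; consumption rate and body
# weight are assumed, concentrations are the maxima quoted in the abstract).
RICE_THG_UG_PER_KG = 569.0    # total Hg in rice grain (max reported)
RICE_MEHG_UG_PER_KG = 145.0   # MeHg portion of that maximum
RICE_INTAKE_KG_PER_DAY = 0.4  # assumed daily rice consumption for an adult
BODY_WEIGHT_KG = 60.0         # assumed body weight

def daily_dose_ug_per_kg_bw(conc_ug_per_kg, intake_kg, bw_kg):
    """Daily dose in micrograms of Hg per kg body weight per day."""
    return conc_ug_per_kg * intake_kg / bw_kg

mehg_dose = daily_dose_ug_per_kg_bw(RICE_MEHG_UG_PER_KG,
                                    RICE_INTAKE_KG_PER_DAY, BODY_WEIGHT_KG)
# For ingestion the relevant US EPA benchmark is the MeHg reference dose,
# 0.1 ug/kg bw/day (distinct from the inhalation RfC of 0.0004 mg/m^3).
RFD_MEHG_UG_PER_KG_DAY = 0.1
print(f"MeHg dose: {mehg_dose:.2f} ug/kg bw/day "
      f"(~{mehg_dose / RFD_MEHG_UG_PER_KG_DAY:.0f}x the RfD)")
```

At the maximum reported rice concentration, even this crude estimate lands an order of magnitude above the ingestion benchmark, which is consistent with the abstract's conclusion that rice consumption is an important exposure route.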
3.
This paper provides a new design of robust fault detection for turbofan engines with adaptive controllers. The critical issue is that adaptive controllers can suppress the effects of faults so that the actual system outputs remain at their pre-specified values, making faults/failures difficult to detect. To solve this problem, a Total Measurable Fault Information Residual (ToMFIR) technique, aided by a system transformation, is adopted to detect faults in turbofan engines under adaptive control. The design is thus a ToMFIR-redundancy-based robust fault detection. The ToMFIR is first introduced and existing results are summarized. The detailed design process of the ToMFIRs is then presented, and a turbofan engine model is simulated to verify the effectiveness of the proposed ToMFIR-based fault-detection strategy.
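The core difficulty the abstract describes, a controller masking a fault in the output, can be illustrated with a toy example. This is not the paper's ToMFIR construction; it is a minimal sketch of the underlying idea that fault information which disappears from the output is still measurable elsewhere (here, in the control signal), using an assumed first-order plant and integral controller:

```python
# Minimal sketch (not the paper's ToMFIR design): an integral controller
# hides an additive actuator fault from the output, but a residual built
# from the control signal and a nominal model still exposes it.
def simulate(fault=0.0, steps=1200, dt=0.01, a=1.0, ref=1.0, ki=5.0):
    y, u_int = 0.0, 0.0
    residuals = []
    for k in range(steps):
        u_int += ki * (ref - y) * dt        # integral control action
        u = u_int
        f = fault if k * dt > 2.0 else 0.0  # fault appears at t = 2 s
        y += (-a * y + u + f) * dt          # first-order plant y' = -a*y + u + f
        # Nominal model: holding y = ref should require exactly u = a * ref,
        # so any persistent deviation of u from a*ref signals a fault.
        residuals.append(u - a * ref)
    return y, residuals

y_ok, r_ok = simulate(fault=0.0)
y_bad, r_bad = simulate(fault=0.5)
# Both outputs settle at the setpoint, so output error alone cannot
# distinguish the runs -- the control-effort residual can.
print(f"final outputs:   {y_ok:.3f} vs {y_bad:.3f}")
print(f"final residuals: {r_ok[-1]:.3f} vs {r_bad[-1]:.3f}")
```

In the faulty run the controller absorbs the 0.5 actuator bias, so the output is unchanged while the residual settles near -0.5, which is the kind of hidden fault information a residual-based scheme is designed to recover.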
4.
Quality control of the complete energy system is necessary if energy-efficient solutions are to be achieved. To perform good building operation and quality control of a given system, information about building systems and assessment tools is necessary. The paper presents Norwegian lifetime commissioning (LTC) procedures that enable follow-up of building performance during the building lifetime by establishing a generic framework for building performance data. Further, three assessment tools are presented: an inspection algorithm for the ventilation system, a mass-balance inspection algorithm for the consumer substation, and an advanced method for improved measurement of heat pump performance based on a data fusion technique. The LTC procedures were tested on two case buildings. The results showed that 20% of all the defined building performance data can be monitored by the BEMS. Using the mass-balance inspection algorithm, it was found that a fault in the mass balance prevented implementation of the desired temperature control for the floor heating system. For heat pump performance, measurement of the differential water temperature can be very erroneous; hence, use of the compressor electrical signal can give more precise data on heat pump performance. Comparative analysis showed that a detailed monitoring system helps in tracking energy use and in fault detection during operation. Yearly and hourly profiles of energy consumption, separated by end use and energy carrier, are given in the paper. Copyright © 2011 John Wiley & Sons, Ltd.
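The abstract's point about erroneous differential-temperature measurement can be made concrete with a short calculation. All operating numbers below are assumptions for illustration; they show how a modest temperature-sensor error propagates into a heat-meter-based COP on a small temperature lift:

```python
# Why a heat-meter COP (Q from flow and temperature lift) is fragile:
# on a small lift, sensor error is a large relative error. Numbers assumed.
CP_WATER = 4186.0   # J/(kg*K), specific heat of water

def cop_from_heat_meter(m_dot, dT_measured, p_el):
    """COP = Q / P_el, with Q from mass flow and water temperature lift."""
    return m_dot * CP_WATER * dT_measured / p_el

m_dot = 0.5      # kg/s condenser water flow (assumed)
dT_true = 3.0    # K true temperature lift (assumed)
p_el = 2000.0    # W compressor electrical input (assumed)

cop_true = cop_from_heat_meter(m_dot, dT_true, p_el)
# Two sensors each accurate to +/-0.25 K: up to 0.5 K error on a 3 K lift.
cop_err = cop_from_heat_meter(m_dot, dT_true + 0.5, p_el)
print(f"true COP {cop_true:.2f}, with 0.5 K sensor error {cop_err:.2f}")
```

A 0.5 K error on a 3 K lift shifts the computed COP by roughly 17%, whereas the compressor electrical input is measured directly, which is one way to read the paper's preference for the electrical signal.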
5.
An empirical analysis is presented for researching the linkages between manufacturing strategy, benchmarking, performance measurement (PM) and business process reengineering (BPR). Although the importance of these linkages has been described in the conceptual literature, it has not been widely demonstrated empirically. The survey research was carried out in 73 medium- and large-sized Slovenian manufacturing companies within the mechanical, electro-mechanical and electronic industries. The resulting data were subjected to reliability and validity analyses. Canonical correlation analysis was used to test six hypotheses. The results confirmed the need for a strategically driven BPR approach and the positive impact of performance measurement on BPR performance.
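In the special case of one indicator per variable set, the first canonical correlation reduces to the ordinary Pearson correlation, which makes the idea easy to sketch. The data below are synthetic and the variable names hypothetical; the sample size 73 simply mirrors the survey size quoted above:

```python
import math
import random

def pearson(x, y):
    """Pearson correlation; equals the first canonical correlation when
    each variable set contains exactly one indicator."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(0)
# Hypothetical survey scores: PM maturity drives BPR performance plus noise.
pm_score = [random.gauss(3.0, 1.0) for _ in range(73)]
bpr_perf = [0.6 * s + random.gauss(0.0, 0.5) for s in pm_score]
r = pearson(pm_score, bpr_perf)
print(f"canonical (Pearson) correlation: {r:.2f}")
```

With several indicators per construct, as in the actual study, canonical correlation analysis instead finds the weighted combinations of each set that maximize this correlation.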
6.
A new approach to the design of a digital algorithm for direct estimation of voltage phasor, frequency and its rate of change is presented. The algorithm is based on Newton's iterative method, commonly used in the field of unconstrained optimization. A five-parameter voltage model was assumed, so the result of the estimation is a parameter vector consisting of the following unknown parameters of the processed voltage signal: its DC component, magnitude, phase angle, frequency and rate of change of frequency. To demonstrate the performance of the algorithm, off-line computer simulation results are presented. The algorithm showed high measurement accuracy over a wide range of frequency changes. Its order-two convergence provided a very good dynamic response as well as fast adaptability. The new algorithm seems to be a particularly useful tool in the field of frequency relaying as well as in various other power engineering applications.
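The paper estimates a five-parameter model; a stripped-down version of the same Newton iteration can be shown on a one-parameter least-squares fit of frequency alone. This is a sketch under assumed sampling settings, with derivatives taken by finite differences rather than the paper's analytic forms:

```python
import math

def cost(freq, samples, dt, amp=1.0):
    """Squared error between samples and a unit-amplitude cosine at freq (Hz)."""
    return sum((s - amp * math.cos(2 * math.pi * freq * k * dt)) ** 2
               for k, s in enumerate(samples))

def newton_freq(samples, dt, f0, iters=20, h=1e-4):
    """Newton's method on the scalar cost; gradient and curvature by
    central finite differences (the paper's method uses a full 5-vector)."""
    f = f0
    for _ in range(iters):
        g = (cost(f + h, samples, dt) - cost(f - h, samples, dt)) / (2 * h)
        H = (cost(f + h, samples, dt) - 2 * cost(f, samples, dt)
             + cost(f - h, samples, dt)) / h ** 2
        f -= g / H                      # Newton step on the cost surface
    return f

dt = 0.001                              # 1 kHz sampling rate (assumed)
true_f = 49.6                           # off-nominal mains frequency
samples = [math.cos(2 * math.pi * true_f * k * dt) for k in range(200)]
print(f"estimated frequency: {newton_freq(samples, dt, f0=50.0):.3f} Hz")
```

Starting from the nominal 50 Hz, the quadratic (order-two) convergence the abstract mentions locks onto the true frequency within a few iterations on this noiseless window.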
7.
We develop a procedure for the order selection of damped sinusoidal models based on the maximum a posteriori (MAP) criterion. The proposed method merges the concept of predictive densities with Bayesian inference to arrive at a complex multidimensional integral, whose solution is achieved by way of the efficient Monte Carlo importance sampling technique. The importance function, a multivariate Cauchy probability density, is employed to produce stratified samples over the hypersurface's support region. Location parameters for the Cauchy are resolved by exploiting the special structure of the compressed likelihood function (CLF) and applying the fast maximum likelihood (FML) procedure of Umesh and Tufts. Simulation results allow for a comparison between our method and the singular value decomposition (SVD) based information-theoretic criteria of Reddy and Biradar (see IEEE Trans. Signal Processing, vol. 41, no. 9, p. 2872-81, 1993).
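The key computational ingredient here, Monte Carlo importance sampling with a Cauchy importance function, can be shown on a one-dimensional stand-in integral (the paper's integral is multidimensional and tied to the CLF; the integrand below is chosen only because its value is known in closed form):

```python
import math
import random

def cauchy_sample(loc=0.0, scale=1.0):
    """Draw from a Cauchy via the inverse-CDF transform of a uniform."""
    return loc + scale * math.tan(math.pi * (random.random() - 0.5))

def cauchy_pdf(x, loc=0.0, scale=1.0):
    return 1.0 / (math.pi * scale * (1.0 + ((x - loc) / scale) ** 2))

def importance_estimate(n=200_000):
    """Estimate the integral of exp(-x^2/2) over R (true value sqrt(2*pi))
    with Cauchy-distributed samples; the heavy Cauchy tails dominate the
    Gaussian tails, keeping the importance weights bounded."""
    total = 0.0
    for _ in range(n):
        x = cauchy_sample()
        total += math.exp(-x * x / 2.0) / cauchy_pdf(x)  # weighted integrand
    return total / n

random.seed(1)
est = importance_estimate()
print(f"estimate {est:.3f}, true {math.sqrt(2 * math.pi):.3f}")
```

The bounded-weight property is the reason a Cauchy is a sensible importance function for integrands with unknown tail behavior, the same consideration that motivates its use over the CLF's support region.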
8.
This paper studies supervised clustering in the context of label ranking data. The goal is to partition the feature space into K clusters such that they are compact in both the feature and the label ranking space. This type of clustering has many potential applications. For example, in target marketing we might want to come up with K different offers or marketing strategies for our target audience. Thus, we aim to cluster the customers' feature space into K clusters by leveraging the revealed or stated, potentially incomplete, customer preferences over products, such that the preferences of customers within one cluster are more similar to each other than to those of customers in other clusters. We establish several baseline algorithms and propose two principled algorithms for supervised clustering. In the first baseline, the clusters are created in an unsupervised manner, followed by assigning a representative label ranking to each cluster. In the second baseline, the label ranking space is clustered first, followed by partitioning the feature space based on the central rankings. In the third baseline, clustering is applied on a new feature space consisting of both features and label rankings, followed by mapping back to the original feature and ranking space. The RankTree principled approach is based on a Ranking Tree algorithm previously proposed for label ranking prediction. Our modification starts with K random label rankings and iteratively splits the feature space to minimize the ranking loss, followed by re-calculation of the K rankings based on cluster assignments. The MM-PL approach is a multi-prototype supervised clustering algorithm based on the Plackett-Luce (PL) probabilistic ranking model. It represents each cluster with a union of Voronoi cells defined by a set of prototypes, and assigns to each cluster a set of PL label scores that determine the cluster's central ranking. Cluster membership and ranking prediction for a new instance are determined by the cluster membership of its nearest prototype. The unknown cluster PL parameters and prototype positions are learned by minimizing the ranking loss, using two variants of the expectation-maximization algorithm. Evaluation of the proposed algorithms was conducted on synthetic and real-life label ranking data by considering several measures of cluster goodness: (1) cluster compactness in feature space, (2) cluster compactness in label ranking space, and (3) label ranking prediction loss. Experimental results demonstrate that the proposed MM-PL and RankTree models are superior to the baseline models. Further, MM-PL was shown to be much better than the other algorithms at handling situations with a significant fraction of missing label preferences.
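The Plackett-Luce model underlying MM-PL assigns each item a positive worth score and builds a ranking's probability as a product of sequential choices. A minimal sketch (the scores and three-product scenario below are hypothetical, not from the paper):

```python
import math

def pl_log_prob(ranking, scores):
    """Log-probability of a complete ranking under the Plackett-Luce model.
    `ranking` lists item indices from most to least preferred; `scores`
    holds one positive worth parameter per item. Each factor is the chance
    of picking the next item from those still remaining."""
    logp = 0.0
    remaining = list(ranking)
    for item in ranking:
        denom = sum(scores[j] for j in remaining)
        logp += math.log(scores[item] / denom)
        remaining.remove(item)
    return logp

# Hypothetical cluster over three products; the scores induce the cluster's
# central ranking 0 > 1 > 2.
scores = [5.0, 2.0, 1.0]
print(math.exp(pl_log_prob([0, 1, 2], scores)))  # ranking agreeing with scores
print(math.exp(pl_log_prob([2, 1, 0], scores)))  # fully reversed ranking
```

Because each factor conditions only on the items not yet placed, the model extends naturally to the incomplete preferences the abstract mentions: a partial ranking simply truncates the product, which is what makes PL-based clustering robust to missing label preferences.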
9.
Two numerical algorithms for fault location and distance protection that use data from one end of a transmission line are presented. Both algorithms require only current signals as input data; voltage signals are unnecessary for determining the unknown distance to the fault. The solution for the most frequent fault type, phase-to-ground, is presented. The algorithms are relatively simple and easy to implement in on-line applications, and they allow accurate calculation of the fault location irrespective of the fault resistance and load. To illustrate the features of the new algorithms, steady-state and dynamic test results are presented.
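The paper's algorithms are not reproduced in the abstract, but the basic idea that fault distance is recoverable from current magnitude alone can be illustrated on a heavily simplified model: a single-source radial feeder with a bolted fault, where |I| = |E| / |Zs + d·z| decreases monotonically with distance d. The source EMF, impedances, and measured value below are all assumptions:

```python
# Simplified single-source radial feeder (NOT the paper's algorithm):
# a bolted fault at distance d draws |I| = |E| / |Zs + d * z_line|, so the
# current magnitude alone pins down d when E and Zs are known.
E = 63500.0                  # V, source EMF (assumed)
ZS = complex(0.5, 5.0)       # ohm, source impedance (assumed)
Z_LINE = complex(0.03, 0.3)  # ohm/km, line impedance per km (assumed)

def fault_current(d_km):
    """Fault current magnitude for a bolted fault d_km down the line."""
    return abs(E / (ZS + d_km * Z_LINE))

def locate(i_meas, d_lo=0.0, d_hi=200.0, iters=60):
    """Bisection on distance: |I| decreases monotonically with d because
    both ZS and Z_LINE lie in the first quadrant of the complex plane."""
    for _ in range(iters):
        mid = 0.5 * (d_lo + d_hi)
        if fault_current(mid) > i_meas:
            d_lo = mid      # measured current is smaller => fault is farther
        else:
            d_hi = mid
    return 0.5 * (d_lo + d_hi)

i_meas = fault_current(80.0)  # pretend the relay measured this magnitude
print(f"estimated fault distance: {locate(i_meas):.1f} km")
```

The paper's contribution is precisely that its algorithms remain accurate when the convenient assumptions above (zero fault resistance, known load) are dropped.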
10.
Perfect sampling: a review and applications to signal processing   (cited 7 times: 0 self-citations, 7 by others)
Markov chain Monte Carlo (MCMC) sampling methods have gained much popularity among researchers in signal processing. The Gibbs and the Metropolis-Hastings (1954, 1970) algorithms, the two most popular MCMC methods, have already been employed in resolving a wide variety of signal processing problems. A drawback of these algorithms is that, in general, they cannot guarantee that the samples are drawn exactly from the target distribution. New Markov chain-based methods, referred to as perfect samplers, have been proposed that produce samples guaranteed to come from the desired distribution. We review some of them, with emphasis on the coupling-from-the-past (CFTP) algorithm. We also provide two signal processing examples where we apply perfect sampling: in the first, we use perfect sampling for restoration of binary images; in the second, for multiuser detection of CDMA signals.
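CFTP is easiest to see on a tiny example. The sketch below runs coupled chains from every state of an assumed two-state Markov chain backwards in time, doubling the horizon and reusing the same random numbers until all chains coalesce; the state at time 0 is then an exact draw from the stationary distribution, which is the "perfect" guarantee the review emphasizes:

```python
import random

# Coupling from the past (CFTP) on a two-state Markov chain; the transition
# matrix is hypothetical, chosen only for illustration.
P = [[0.7, 0.3],   # P[i][j] = probability of moving from state i to state j
     [0.4, 0.6]]

def step(state, u):
    """Deterministic update driven by a shared uniform random number u."""
    return 0 if u < P[state][0] else 1

def cftp(rng):
    """Run coupled chains from all states over [-T, 0], doubling T until
    they coalesce; crucially, us[t] (the randomness for the step ending at
    time -t) is reused unchanged every time the horizon is extended."""
    us, T = [], 1
    while True:
        while len(us) < T:
            us.append(rng.random())      # fresh randomness for earlier times
        states = {0, 1}
        for t in range(T - 1, -1, -1):   # from time -T forward to time 0
            states = {step(s, us[t]) for s in states}
        if len(states) == 1:
            return states.pop()          # exact stationary sample
        T *= 2

rng = random.Random(42)
draws = [cftp(rng) for _ in range(20_000)]
# Stationary distribution of P is (4/7, 3/7) ~ (0.571, 0.429).
print(f"fraction in state 0: {draws.count(0) / len(draws):.3f}")
```

Unlike ordinary MCMC, no burn-in heuristic is needed: the algorithm itself certifies, via coalescence, that the returned state has the stationary law exactly.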