52 search results found (search time: 31 ms)
11.
The opportunities of open data have recently been recognized by companies across domains. Digital service providers are increasingly interested in innovating new ideas and services around open data. Digital service ecosystems offer several advantages for service developers, enabling service co-innovation and co-creation among ecosystem members that utilize and share common assets and knowledge. Utilizing open data in digital services requires new innovation practices, service development models, and a collaboration environment, all of which the ecosystem can provide. However, since open data can be almost anything and originate from many kinds of data sources, data quality becomes the key issue: the new challenge for service providers is how to guarantee the quality of open data, and uncertain data quality poses major challenges within ecosystems. The main contribution of this paper is the concept of the Evolvable Open Data based digital service Ecosystem (EODE), which defines the kinds of knowledge and services required for validating open data in digital service ecosystems. The EODE thus offers business potential for open data and digital service providers, as well as other actors around open data. The ecosystem capability model, knowledge management models, and the taxonomy of services supporting open data quality certification are described. Data quality certification confirms that open data is trustworthy and of sufficient quality to be accepted for use by the ecosystem's services. A five-phase open data quality certification process, through which open data is brought into the ecosystem and certified for use by ecosystem members with the help of the ecosystem's knowledge models and support services, is also described. Initial experiences from the still-ongoing validation steps are summarized, and the concept's limitations and future development targets are identified.
12.
The harmony search (HS) method is a popular meta-heuristic optimization algorithm that has been extensively employed on various engineering problems. However, it sometimes fails to deliver satisfactory convergence under certain circumstances. In this paper, we propose and study a hybrid HS approach, HS–PBIL, which merges HS with population-based incremental learning (PBIL). Numerical simulations demonstrate that HS–PBIL outperforms the regular HS method on nonlinear function optimization and on a practical wind generator optimization problem.
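The plain HS core that the hybrid builds on can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the PBIL hybridization (biasing improvisation with a learned probability model) is omitted, and all parameter values and the sphere test function are illustrative assumptions.

```python
import random

def harmony_search(f, dim, lo, hi, hms=10, hmcr=0.9, par=0.3, bw=0.05, iters=5000):
    """Plain harmony search: improvise one new harmony per iteration and
    replace the worst member of the harmony memory if the new one is better."""
    hm = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    fit = [f(x) for x in hm]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if random.random() < hmcr:          # memory consideration
                v = hm[random.randrange(hms)][d]
                if random.random() < par:       # pitch adjustment
                    v += random.uniform(-bw, bw)
            else:                               # random re-initialization
                v = random.uniform(lo, hi)
            new.append(min(max(v, lo), hi))     # clamp to the search bounds
        fv = f(new)
        worst = max(range(hms), key=fit.__getitem__)
        if fv < fit[worst]:
            hm[worst], fit[worst] = new, fv
    best = min(range(hms), key=fit.__getitem__)
    return hm[best], fit[best]

random.seed(0)
sphere = lambda x: sum(v * v for v in x)   # simple nonlinear test function
x_best, f_best = harmony_search(sphere, dim=5, lo=-5.0, hi=5.0)
```

A PBIL-style hybrid would replace the uniform random-selection branch with sampling from a probability model updated toward the best harmonies.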
13.
The performance of an active power filter (APF) depends on the inverter characteristics, the applied control method, and the accuracy of the reference signal generator; of these, the accuracy of the reference generator is the most critical. This paper introduces an efficient reference signal generator built around an improved adaptive predictive filter. The performance of the proposed generator was first verified through MATLAB simulations. Its practical feasibility was then evaluated by experiments on a single-phase APF prototype based on the proposed reference generator, implemented on the TMS320C31 floating-point signal processor. Both simulation and experimental results confirm that the reference signal generator can be used successfully in practical APFs.
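One common way to build such a reference generator, sketched here as a minimal illustration and not the paper's improved filter, is a two-weight LMS canceller: adaptive sin/cos weights track the fundamental, and the residual becomes the harmonic compensation reference. The signal, sampling rate, and step size below are illustrative assumptions.

```python
import math

def harmonic_reference(i_load, f0=50.0, fs=5000.0, mu=0.01):
    """Two-weight LMS canceller: adapt sin/cos weights at the line frequency
    so y tracks the fundamental; the residual e is the harmonic reference."""
    w_s, w_c = 0.0, 0.0
    ref = []
    for n, x in enumerate(i_load):
        s = math.sin(2.0 * math.pi * f0 * n / fs)
        c = math.cos(2.0 * math.pi * f0 * n / fs)
        y = w_s * s + w_c * c        # estimated fundamental component
        e = x - y                    # residual: harmonics to be compensated
        w_s += 2.0 * mu * e * s      # LMS weight updates
        w_c += 2.0 * mu * e * c
        ref.append(e)
    return ref

# distorted load current: 1.0 pu fundamental plus a 0.2 pu third harmonic
w0 = 2.0 * math.pi * 50.0 / 5000.0
i_load = [math.sin(w0 * n) + 0.2 * math.sin(3.0 * w0 * n) for n in range(5000)]
ref = harmonic_reference(i_load)
```

After convergence the reference contains the third harmonic while the fundamental is largely cancelled.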
14.
A neural networks-based negative selection algorithm in fault diagnosis
Inspired by the self/nonself discrimination theory of the natural immune system, the negative selection algorithm (NSA) is an emerging computational intelligence method. In the original NSA, detectors are first generated randomly; those that match self samples are then eliminated, and the remaining detectors are used to detect anomalies. Unfortunately, conventional NSA detectors are not adaptive to time-varying circumstances. In this paper, a novel neural networks-based NSA is proposed. Its principle and structure are discussed, and its training algorithm is derived. By taking advantage of efficient neural network training, it gains the distinguishing capability of adaptation, which is well suited to dynamical problems. A fault diagnosis scheme using the new NSA is also introduced. Two illustrative simulation examples, anomaly detection in chaotic time series and inner-raceway fault diagnosis of motor bearings, demonstrate the efficiency of the proposed neural networks-based NSA.
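The original (non-neural) NSA that the abstract contrasts against can be sketched as a censoring phase followed by a monitoring phase. This is a minimal real-valued illustration with assumed geometry (unit square, Euclidean matching, fixed detector radius), not the paper's adaptive neural variant.

```python
import random

def dist(a, b):
    """Euclidean distance between two real-valued samples."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def train_detectors(self_samples, n_detectors, radius, dim):
    """Censoring phase: generate random detectors and keep only those that
    do NOT match (lie within `radius` of) any self sample."""
    detectors = []
    while len(detectors) < n_detectors:
        d = [random.random() for _ in range(dim)]
        if all(dist(d, s) > radius for s in self_samples):
            detectors.append(d)
    return detectors

def is_anomalous(x, detectors, radius):
    """Monitoring phase: activation of any detector flags an anomaly."""
    return any(dist(x, d) <= radius for d in detectors)

random.seed(1)
# self samples clustered around (0.2, 0.2) in the unit square
self_samples = [[0.2 + random.uniform(-0.05, 0.05),
                 0.2 + random.uniform(-0.05, 0.05)] for _ in range(20)]
detectors = train_detectors(self_samples, n_detectors=300, radius=0.15, dim=2)
```

The fixed detectors are exactly what makes this scheme non-adaptive; the paper's contribution is to replace them with trainable neural detectors.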
15.
16.
Finite impulse response (FIR) predictors for polynomial signals and sinusoids are easy to design thanks to available closed-form design formulae. However, these FIR predictors have two major drawbacks: the passband gain peak is usually greater than +3 dB, and a long FIR structure is needed to attain high stopband attenuation. Both characteristics cause severe problems, particularly in control instrumentation where the predictor operates inside a closed control loop. In this paper, we present a novel feedback extension scheme for FIR forward predictors. This extension makes it easy to design infinite impulse response (IIR) predictors with low passband ripple and high stopband attenuation. The new approach is illustrated with design examples.
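A polynomial-signal FIR predictor of the kind extended here can be derived directly: least-squares-fit a polynomial to the most recent samples and evaluate it one step ahead. Because the fit is linear in the samples, the whole operation collapses into fixed FIR taps. This derivation is a generic illustration, not the paper's closed-form formulae.

```python
import numpy as np

def poly_predictor_taps(order, length):
    """One-step-ahead FIR predictor for degree-`order` polynomial signals:
    fit a polynomial to the last `length` samples (times ..., -1, 0) and
    evaluate it at t = +1.  Returns the taps, oldest sample first."""
    t = np.arange(-(length - 1), 1)                            # past sample times
    V = np.vander(t, order + 1, increasing=True)               # basis at past samples
    v1 = np.vander(np.array([1]), order + 1, increasing=True)  # basis at t = +1
    return (v1 @ np.linalg.pinv(V)).ravel()

taps = poly_predictor_taps(order=1, length=4)
```

On a signal that is exactly a polynomial of the design order, the prediction is exact; the >+3 dB passband peaking mentioned above shows up as soon as noise or higher-frequency content is present.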
17.
The authors describe two adaptive multistage digital filters for 50/60-Hz line-frequency signal processing in zero-crossing detectors and synchronous power systems. These filters combine a median filter with adaptive predictors, either finite impulse response (FIR)- or infinite impulse response (IIR)-based, making it possible to extract sinusoidal signals from noise and strong disturbances without phase-shifting the primary frequency signal. The median filter serves as a prefilter because it can remove deep commutation notches from the waveform. Adaptation allows the filters to track the exact instantaneous line frequency and avoids the selectivity problem encountered with a fixed filter.
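The median prefilter stage can be sketched as a sliding-window median, which removes narrow impulses and notches outright rather than smearing them the way a linear filter would. This is a generic illustration of the prefilter idea; the window length and edge handling are assumptions, and the adaptive predictor stages are not reproduced.

```python
def median_filter(x, k=5):
    """Sliding-window median with edge replication; removes impulses and
    notches up to (k - 1) // 2 samples wide without phase-shifting the signal."""
    half = k // 2
    padded = [x[0]] * half + list(x) + [x[-1]] * half
    return [sorted(padded[i:i + k])[half] for i in range(len(x))]

# a deep commutation-notch-like outlier at index 4
raw = [0, 1, 2, 3, 100, 5, 6, 7, 8]
clean = median_filter(raw, k=3)
```

In the multistage scheme described above, the output of this stage would feed the adaptive FIR or IIR predictor.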
18.
A class of analog continuous-time filters is introduced with predictive properties for specified narrow-band signal models, such as low-order polynomials or sinusoids. The filters are designed from model transfer functions specified in the discrete-time domain; the z-to-s-domain mapping is done with the inverse bilinear transformation. The analog filters are implemented as active-RC structures, using the state-variable structure for biquads and a single-op-amp structure for real poles and zeros. Application examples include a filter for zero-crossing detectors, polynomial predictors for sensor signal smoothing, and an optimized sixth-order ramp-tracking filter for anti-aliasing and anti-imaging in digital signal processor (DSP) systems where high selectivity is required.
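The inverse bilinear transformation used for the z-to-s mapping is a one-line pole/zero conversion, s = (2/T)(z − 1)/(z + 1). The sketch below is a generic illustration of that mapping (the sampling interval is an assumed value), not the paper's design procedure.

```python
import cmath
import math

def z_to_s(z, T):
    """Inverse bilinear transform: map a z-plane pole or zero to the s-plane,
    s = (2/T) * (z - 1) / (z + 1), for sampling interval T."""
    return (2.0 / T) * (z - 1.0) / (z + 1.0)

# DC (z = 1) maps to s = 0, and the unit circle maps onto the imaginary axis
s_dc = z_to_s(1.0, 0.001)
s_ax = z_to_s(cmath.exp(0.3j), 0.001)
```

Mapping each discrete-time pole and zero this way yields the s-domain transfer function that the active-RC biquad and single-op-amp sections then realize.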
19.
Soft computing (SC) is an emerging collection of methodologies that aims to exploit tolerance for imprecision, uncertainty, and partial truth to achieve robustness, tractability, and low total cost. It differs from conventional hard computing (HC) in that it is strongly based on intuition or subjectivity; soft computing therefore offers an attractive way to represent the ambiguity of human thinking under real-life uncertainty. Fuzzy logic (FL), neural networks (NN), and genetic algorithms (GA) are the core methodologies of soft computing. FL, NN, and GA should not be viewed as competing with each other, but as synergistic and complementary. Judging by the number of journal and conference papers on various combinations of these three methods, the fusion of individual soft computing methodologies has already proven advantageous in numerous applications. On the other hand, hard computing solutions are usually more straightforward to analyze; their behavior and stability are more predictable; and the computational burden of their algorithms is typically low or moderate. These characteristics are particularly important in real-time applications. Thus, it is natural to see SC and HC as potentially complementary methodologies. Novel combinations of different methods are needed when developing high-performance, cost-effective, and safe products for the demanding global market. We present an overview of applications in which the fusion of soft computing and hard computing has provided innovative solutions to challenging real-world problems. A carefully selected list of references is discussed with evaluative remarks and conclusions.
20.