Full-text access type
Paid full text | 214 articles |
Free | 10 articles |
Free domestically | 9 articles |
Subject classification
Electrical engineering | 5 articles |
General | 2 articles |
Chemical industry | 13 articles |
Metalworking | 1 article |
Mechanical engineering and instrumentation | 12 articles |
Building science | 4 articles |
Mining engineering | 2 articles |
Energy and power | 25 articles |
Hydraulic engineering | 1 article |
Oil and natural gas | 1 article |
Radio | 8 articles |
General industrial technology | 14 articles |
Metallurgical industry | 1 article |
Atomic energy technology | 2 articles |
Automation technology | 142 articles |
Publication year
2023 | 21 articles |
2022 | 22 articles |
2021 | 25 articles |
2020 | 19 articles |
2019 | 18 articles |
2018 | 10 articles |
2017 | 12 articles |
2016 | 15 articles |
2015 | 11 articles |
2014 | 23 articles |
2013 | 11 articles |
2012 | 2 articles |
2011 | 10 articles |
2010 | 5 articles |
2009 | 8 articles |
2008 | 7 articles |
2007 | 4 articles |
2006 | 2 articles |
2004 | 1 article |
2003 | 1 article |
2002 | 1 article |
2000 | 2 articles |
1994 | 1 article |
1992 | 1 article |
1986 | 1 article |
Sort by: 233 results found; search took 15 ms
1.
《International Journal of Hydrogen Energy》2022,47(24):12281-12292
The hydrogen pressure inside tanks and their adjacent pipes can reach up to 70 MPa in fuel cell vehicles, making these components weak links for hydrogen leakage. The diagnosis time of the mainstream hydrogen leakage diagnosis method, based on hydrogen concentration sensors (HCSs), is easily affected by the number and location of the installed sensors. In this study, a data-driven diagnosis method is proposed for high-pressure hydrogen leakage. Fisher discriminant analysis and linear least-squares fitting are used for data preprocessing, and a relevance vector machine is used for pattern recognition. When the total tank volume is 82 L and the hydrogen leakage flow rate is larger than 2 g/s, the diagnosis accuracy of the proposed method is higher than 95% and the diagnosis time is constant. When the leakage location is far from the HCSs, the proposed method can diagnose hydrogen leakage in a shorter time than the mainstream method.
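As a rough illustration of the preprocessing step named in the abstract, the sketch below computes a two-class Fisher discriminant direction in NumPy and separates leak from no-leak samples with a midpoint threshold. The features, data and thresholding rule are hypothetical assumptions, not the paper's implementation:

```python
import numpy as np

def fisher_direction(X0, X1):
    """Two-class Fisher discriminant: w = Sw^{-1} (m1 - m0)."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter approximated by the sum of class covariances
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    w = np.linalg.solve(Sw, m1 - m0)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(0)
# Hypothetical two-feature sensor data: class 0 = no leak, class 1 = leak
X_normal = rng.normal([0.0, 0.0], 0.3, size=(200, 2))
X_leak = rng.normal([2.0, 1.0], 0.3, size=(200, 2))

w = fisher_direction(X_normal, X_leak)
# Decision threshold: midpoint between the projected class means
threshold = 0.5 * (X_normal @ w).mean() + 0.5 * (X_leak @ w).mean()
is_leak = lambda x: float(x @ w) > threshold
```

The actual paper combines this with least-squares fitting and a relevance vector machine; the sketch only shows the discriminant projection itself.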
2.
Due to the complex and harsh operating conditions of electrical traction drive systems, such as corrosion, aging cables and static electricity, a ground fault will generate a large short-circuit current that harms key components. Effective fault diagnosis is therefore important, but also challenging. The conventional method for ground fault detection takes advantage only of the DC-link voltage measurements. Other onboard measurements are also available and are correlated with the voltage measurements; taking this correlation into account improves detection performance. To this end, this paper presents a data-driven solution that makes full use of the correlation between the voltage measurements and the other onboard measurements. The proposed method consists of two components: (1) a canonical correlation analysis-based fault detection method, which takes the correlation within the measurements into account; and (2) a fault isolation method based on the fault direction, which can be obtained from the faulty data stored during long-term operation. The developed method is applied to a traction drive system. It is shown that the proposed approach significantly improves fault detection and isolation performance with respect to three indicators, namely fault detection rate, detection delay and correct isolation rate, in comparison with the conventional method that uses only the DC-link voltage measurements.
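A minimal sketch of canonical-correlation-based residual generation, under invented synthetic data (the variable roles, fault injection and sizes are illustrative assumptions, not the paper's setup): the residual r = aᵀx − ρ·bᵀy stays small while the learned correlation between the two measurement groups holds, and grows when a fault breaks it.

```python
import numpy as np

def cca_directions(X, Y, eps=1e-8):
    """First canonical pair via SVD of the whitened cross-covariance."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Sxx = Xc.T @ Xc / len(X) + eps * np.eye(X.shape[1])
    Syy = Yc.T @ Yc / len(Y) + eps * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / len(X)

    def inv_sqrt(S):  # symmetric inverse square root
        d, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(d)) @ V.T

    K = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    U, s, Vt = np.linalg.svd(K)
    a = inv_sqrt(Sxx) @ U[:, 0]   # direction for group X (e.g. voltages)
    b = inv_sqrt(Syy) @ Vt[0]     # direction for group Y (other sensors)
    return a, b, s[0]             # s[0] = first canonical correlation

rng = np.random.default_rng(1)
z = rng.normal(size=(500, 1))                      # shared latent driver
X = z @ np.array([[1.0, 0.5]]) + 0.05 * rng.normal(size=(500, 2))
Y = z @ np.array([[0.8, -0.3, 0.2]]) + 0.05 * rng.normal(size=(500, 3))

a, b, rho = cca_directions(X, Y)
res = lambda x, y: abs(a @ (x - X.mean(0)) - rho * b @ (y - Y.mean(0)))
r_normal = res(X[0], Y[0])                         # fault-free residual
r_fault = res(X[0] + np.array([1.0, 1.0]), Y[0])   # injected sensor fault
```

The isolation step of the paper (fault directions learned from stored faulty data) is not reproduced here.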
3.
Fault detection, isolation and optimal control have long been applied in industry, and these techniques have yielded numerous successful theoretical results and industrial applications. Fault diagnosis is considered the combination of fault detection (which indicates whether there is a fault) and fault isolation (which determines where the fault is), and it has important effects on the operation of complex dynamical systems in modern industry applications such as industrial electronics, business management systems, energy, and public sectors. Since resources are always limited in real-world industrial applications, solutions that use them optimally under various constraints are of great practical importance. In this context, the optimal tuning of linear and nonlinear controllers is a systematic way to meet performance specifications expressed as optimization problems that target the minimization of integral- or sum-type objective functions, where the tuning parameters of the controllers are the vector variables of the objective functions. Nature-inspired optimization algorithms give efficient solutions to such optimization problems. This paper presents an overview of recent developments in machine learning, data mining and evolving soft computing techniques for fault diagnosis and in nature-inspired optimal control. The generic theory is discussed along with illustrative industrial process applications that include a real liquid level control application, wind turbines and a nonlinear servo system. New research challenges with strong industrial impact are highlighted.
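To make "optimal tuning expressed as an optimization problem" concrete, here is a toy sketch: a PI controller on a hypothetical discrete first-order plant, with gains tuned by plain random search standing in for a nature-inspired optimizer. The plant model, gain ranges and sum-of-squared-errors cost are all illustrative assumptions:

```python
import numpy as np

def closed_loop_cost(kp, ki, steps=100):
    """Sum-of-squared-errors of a PI loop on the toy plant
    x[k+1] = 0.9 x[k] + 0.1 u[k], tracking a unit step reference."""
    x, integ, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        e = 1.0 - x           # tracking error
        integ += e            # integral term
        u = kp * e + ki * integ
        x = 0.9 * x + 0.1 * u
        cost += e * e
    return cost

# Plain random search over the gain space; a swarm- or evolution-based
# optimizer would explore the same objective more cleverly.
rng = np.random.default_rng(0)
best_gains, best_cost = None, np.inf
for _ in range(300):
    kp, ki = rng.uniform(0.0, 5.0, size=2)
    c = closed_loop_cost(kp, ki)
    if c < best_cost:
        best_gains, best_cost = (kp, ki), c
```

The controller gains play exactly the role the abstract describes: they are the vector variables of the sum-type objective function.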
4.
Simon Alexanderson Gustav Eje Henter Taras Kucherenko Jonas Beskow 《Computer Graphics Forum》2020,39(2):487-496
Automatic synthesis of realistic gestures promises to transform the fields of animation, avatars and communicative agents. In off-line applications, novel tools can alter the role of an animator to that of a director, who provides only high-level input for the desired animation; a learned network then translates these instructions into an appropriate sequence of body poses. In interactive scenarios, systems for generating natural animations on the fly are key to achieving believable and relatable characters. In this paper we address some of the core issues towards these ends. By adapting a deep learning-based motion synthesis method called MoGlow, we propose a new generative model for state-of-the-art realistic speech-driven gesticulation. Owing to the probabilistic nature of the approach, our model can produce a battery of different, yet plausible, gestures given the same input speech signal, yielding the rich natural variation of motion seen in humans. We additionally demonstrate the ability to exert directorial control over the output style, such as gesture level, speed, symmetry and spatial extent. Such control can be leveraged to convey a desired character personality or mood. We achieve all this without any manual annotation of the data. User studies evaluating upper-body gesticulation confirm that the generated motions are natural and match the input speech well. Our method scores above all prior systems and baselines on these measures, and comes close to the ratings of the original recorded motions. We furthermore find that we can accurately control gesticulation styles without unnecessarily compromising perceived naturalness. Finally, we also demonstrate an application of the same method to full-body gesticulation, including the synthesis of stepping motion and stance.
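The "same speech, different plausible gestures" property can be illustrated with a toy conditional sampler. This is not MoGlow (which is a normalizing flow conditioned on speech and motion history); the mean and scale functions below are invented purely to show how fresh latent noise yields distinct trajectories from identical conditioning:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gestures(speech_feat, n_samples=5, horizon=20):
    """Toy conditional sampler: the same speech feature plus different
    latent noise draws gives different but statistically similar
    motion trajectories (hypothetical mean/scale, not MoGlow)."""
    mu = np.tanh(speech_feat)              # invented conditional mean
    sigma = 0.1 + 0.05 * abs(speech_feat)  # invented conditional scale
    z = rng.normal(size=(n_samples, horizon))  # latent noise per sample
    # Normalized random walk around the conditional mean
    return mu + sigma * np.cumsum(z, axis=1) / np.sqrt(np.arange(1, horizon + 1))

traj = sample_gestures(0.8)   # five distinct trajectories, one speech input
```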
5.
Information and communication technologies combined with in-situ sensors are increasingly being used in the management of urban drainage systems. The large amount of data collected in these systems can be used to train a data-driven soft sensor, which can supplement the physical sensor. Artificial neural networks have long been used for time series forecasting given their ability to recognize patterns in the data. Long Short-Term Memory (LSTM) neural networks are equipped with memory gates that help them learn time dependencies in a data series and have been proven to outperform other types of networks in predicting water levels in urban drainage systems. When used for soft sensing, neural networks typically receive antecedent observations as input, as these are good predictors of the current value. However, the antecedent observations may be missing due to transmission errors or deemed anomalous due to errors that are not easily explained. This study quantifies and compares the predictive accuracy of LSTM networks in scenarios of limited or missing antecedent observations. We applied these scenarios to an 11-month observation series from a combined sewer overflow chamber in Copenhagen, Denmark. We observed that i) LSTM predictions generally displayed large variability across training runs, which may be reduced by improving the selection of hyperparameters (non-trainable parameters); ii) when the most recent observations were known, adding information on the past did not improve the prediction accuracy; iii) when gaps were introduced in the antecedent water depth observations, LSTM networks were capable of compensating for the missing information with the other available input features (time of the day and rainfall intensity); iv) LSTM networks trained without antecedent water depth observations yielded larger prediction errors, but still comparable with the other scenarios, and captured both dry and wet weather behaviors. We therefore conclude that LSTM neural networks may be trained to act as soft sensors in urban drainage systems even when observations from the physical sensors are missing.
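The memory gates the abstract refers to can be made explicit by writing out a single LSTM cell step in NumPy. The weights here are random and the three input features merely mirror the ones mentioned above (water depth, time of day, rainfall intensity); this is an illustration of the cell mechanics, not the study's model:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step: forget, input and output gates plus a
    candidate cell state control what the memory c retains."""
    n = h.shape[0]
    z = W @ x + U @ h + b                  # all four gate pre-activations
    f = 1 / (1 + np.exp(-z[:n]))           # forget gate
    i = 1 / (1 + np.exp(-z[n:2*n]))        # input gate
    o = 1 / (1 + np.exp(-z[2*n:3*n]))      # output gate
    g = np.tanh(z[3*n:])                   # candidate cell state
    c_new = f * c + i * g                  # gated memory update
    h_new = o * np.tanh(c_new)             # gated hidden output
    return h_new, c_new

rng = np.random.default_rng(0)
nx, nh = 3, 4          # 3 inputs: e.g. water depth, time of day, rain
W = rng.normal(0, 0.1, (4 * nh, nx))
U = rng.normal(0, 0.1, (4 * nh, nh))
b = np.zeros(4 * nh)
h = c = np.zeros(nh)
for x in rng.normal(size=(10, nx)):        # roll the cell over a short series
    h, c = lstm_step(x, h, c, W, U, b)
```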
6.
In the eyes of many control scientists, the theory of the scenario approach is a tool for determining the sample size in certain randomized control-design methods, in which an uncertain variable is replaced by a random sample of scenarios. This point of view is rooted in the history of the scenario approach and stands on a long track record of successful applications. However, in the last two decades the theory of the scenario approach has gone beyond its original motivations and applications, and has unveiled fundamental relationships between the complexity of a design and its generalization capabilities. The new knowledge brought by the theory provides solid ground for a framework in which data can be exploited flexibly and wisely throughout a large variety of engineering activities. With this article we aim to provide an access point to a set of state-of-the-art results in the theory of the scenario approach that can be valuable for targeting important challenges in modern control design and decision-making at large. In the first part of the article, we introduce a set-up for decision-making in which the role of prior knowledge and user preferences can, and should, be distinguished from the role of data. We then show that the theory of the scenario approach offers a platform for combining heuristic approaches, which in complex contexts are unavoidably based on incomplete and possibly imprecise information, with a solid theory for certifying the validity of the output of the decision process.
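A minimal scenario-program sketch, using what is (to the best of our knowledge) a commonly cited sufficient sample-size bound N ≥ (2/ε)(ln(1/β) + d); the toy problem, ε, β and d below are assumptions chosen purely for illustration:

```python
import numpy as np
from math import ceil, log

def scenario_sample_size(eps, beta, d):
    """Sufficient number of scenarios so that, with confidence 1 - beta,
    the scenario solution violates a new unseen scenario with
    probability at most eps (d = number of decision variables)."""
    return ceil((2.0 / eps) * (log(1.0 / beta) + d))

# Toy scenario program with d = 2 decisions (center c, radius r):
#   minimize r  subject to  |s_i - c| <= r  for every sampled scenario s_i
rng = np.random.default_rng(0)
N = scenario_sample_size(eps=0.1, beta=1e-6, d=2)
scenarios = rng.normal(size=N)
c = 0.5 * (scenarios.min() + scenarios.max())   # optimal center
r = 0.5 * (scenarios.max() - scenarios.min())   # optimal radius
```

The interval [c − r, c + r] is guaranteed, in the scenario-approach sense, to cover a new random scenario with probability at least 1 − ε (here 90%), with confidence 1 − β.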
7.
8.
9.
The rapid growth of data and the need for designers to track massive data sets to obtain design stimuli have posed challenges to conceptual design, thereby promoting the development of data-driven design. Concept networks precisely capture design information from a large volume of unstructured and heterogeneous textual data and markedly decrease the time and labor cost of reading texts, creating new opportunities for a smart product design system. To advance data-driven design, this study proposes a novel function-structure concept network (FSCN) construction method, which combines sentence parsing with word/phrase extraction to integrate functional and structural information. Furthermore, a network analysis method is proposed to uncover both explicit and implicit associations among design information and recommend them simultaneously to designers as inspirational stimuli supporting design ideation. This approach can enhance designers' ability to build associations between pieces of design information, conceive new design ideas during conceptual design, and increase creativity in solving design problems. The proposed FSCN construction and analysis method can be used as an auxiliary tool to visualize associations among design information and thereby inspire idea generation in the early stage of conceptual design. An illustrative example was used to validate the practicability of the proposed methodology. The code of the proposed method is available at https://github.com/KWflyer/FSCN.
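A toy version of a function-structure concept network can be built as a bipartite graph, with explicit associations as direct neighbors and implicit associations as two-hop neighbors reached through a shared node. The concept pairs below are invented; the actual FSCN method extracts them from text by sentence parsing:

```python
from collections import defaultdict

# Hypothetical (function, structure) concept pairs extracted from text
pairs = [
    ("store energy", "battery"),
    ("store energy", "flywheel"),
    ("convert motion", "flywheel"),
    ("convert motion", "gearbox"),
    ("transmit torque", "gearbox"),
]

# Undirected bipartite graph linking function and structure concepts
graph = defaultdict(set)
for func, struct in pairs:
    graph[func].add(struct)
    graph[struct].add(func)

def associations(concept):
    """Explicit = direct neighbors; implicit = two-hop neighbors
    reached through a shared function or structure node."""
    explicit = graph[concept]
    implicit = set()
    for neighbor in explicit:
        implicit |= graph[neighbor]
    implicit -= explicit | {concept}
    return explicit, implicit

explicit, implicit = associations("battery")
```

Here "battery" is explicitly linked to "store energy" and implicitly linked to "flywheel", which shares that function; in the FSCN both kinds of association are surfaced as design stimuli.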
10.
Typical traffic modeling approaches, such as network-based methods and simulation models, have been shown to be inadequate for urban-scale studies due to model-fidelity issues. As a workaround, data-driven models have received increasing attention recently. However, most data-driven methods have been restricted by their data source and cannot be scaled up to urban- and regional-scale studies. To address this issue, this research proposes a pipeline that collects traffic data from online map vendors to bypass data limitations in large-scale studies. The study consists of two experiments: 1) recognizing the dominant traffic patterns of cities and 2) site-specific predictions of typical traffic or the most probable locations of patterns of interest. The experiments were conducted on 32 Swiss cities using traffic data collected over a two-month period. The results show that dominant patterns can be extracted from the temporal traffic data, and that similar patterns exist not only in various parts of a city but also in different cities. Moreover, the results reveal that a country-level lockdown decreased traffic congestion on regional highways but increased it on connections near city centers and the country borders.
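Extracting a dominant temporal traffic pattern can be sketched with a principal-component decomposition of daily congestion profiles. The rush-hour shape and data below are fabricated for illustration (the study's data came from online map vendors, and its pattern-recognition pipeline is not specified here):

```python
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(24)
# Synthetic daily congestion profile: morning and evening rush-hour peaks
base = (np.exp(-0.5 * ((hours - 8) / 1.5) ** 2)
        + np.exp(-0.5 * ((hours - 17) / 1.5) ** 2))
# 60 observed days: varying overall intensity plus measurement noise
profiles = base * rng.uniform(0.5, 1.5, size=(60, 1)) \
    + 0.05 * rng.normal(size=(60, 24))

# Dominant temporal pattern = first principal component of the profiles
X = profiles - profiles.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
dominant = Vt[0] * np.sign(Vt[0, 8])   # fix sign so the 08:00 peak is positive
explained = s[0] ** 2 / np.sum(s ** 2)  # variance share of the first component
```

With profiles stacked per road segment and per city instead of per day, the same decomposition surfaces patterns shared across parts of a city or across cities.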