Similar Documents
1.
Cross-flow phenomena between connected sub-channels are studied by means of numerical simulations based on a lattice-Boltzmann discretization. The cross (i.e., lateral) transfer is largely due to macroscopic instabilities that develop at two shear layers. The characteristic size and advection velocity of the instabilities compare favorably with experimental results from the literature on a geometrically similar system. The strength of the cross flow depends strongly on the Reynolds number, with cross flow developing only for Reynolds numbers (based on macroscopic flow quantities) larger than 1360. Mass transfer between the sub-channels has been assessed by adding a passive scalar to the flow and solving its transport equation. Because cross flow and lateral mass transfer are intimately connected, the mass transfer coefficient is likewise a pronounced function of Re.
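For intuition, the sketch below time-steps a one-dimensional passive scalar transport equation with an explicit finite-difference scheme. It is not the paper's lattice-Boltzmann solver; the velocity, diffusivity, grid, and time step are all assumed values chosen only to illustrate the kind of transport equation solved for the scalar.

```python
import numpy as np

# Minimal passive scalar transport sketch: dc/dt + u dc/dx = D d2c/dx2
nx, L = 200, 1.0
dx = L / nx
u, D = 0.5, 1e-3                                  # assumed velocity / diffusivity
dt = 0.4 * min(dx / abs(u), dx**2 / (2 * D))      # stability-limited time step

x = np.linspace(0.0, L, nx, endpoint=False)
c = np.exp(-((x - 0.3) / 0.05) ** 2)              # initial scalar blob

for _ in range(2000):
    adv = -u * (c - np.roll(c, 1)) / dx           # first-order upwind advection
    dif = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2  # central diffusion
    c = c + dt * (adv + dif)

print("total scalar mass (conserved on a periodic domain):", c.sum() * dx)
```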

2.
Maintenance technologies have progressed from a time-based to a condition-based manner. The fundamental idea of condition-based maintenance (CBM) is to diagnose impending failures and/or prognosticate the residual lifetime of equipment in real time by monitoring health conditions with various sensors. The success of CBM therefore hinges on the capability to develop accurate diagnosis/prognosis models. Even though there may be an unlimited number of ways to implement models, the models can normally be classified into two categories in terms of their origins: those using physical principles and those using historical observations. We have focused on the latter (sometimes referred to as empirical models based on statistical learning) because of practical benefits such as context-free applicability, configuration flexibility, and customization adaptability. While several pilot-scale systems using empirical models have been applied to work sites in Korea, these do not yet appear generally competitive against conventional physical models. After investigating the bottlenecks of previous attempts, we recognized the need for a novel strategy for grouping correlated variables, so that an empirical model can incorporate not only statistical correlation but also some extent of physical knowledge of a system. Specific problems include: (1) important signals missing from a group owing to a lack of observations, (2) signals with time delays, and (3) choosing the optimal kernel bandwidth. This paper presents an improved statistical learning framework that includes the proposed strategy, along with case studies illustrating the performance of the method.
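As a hedged illustration of one ingredient, grouping correlated sensor signals before fitting an empirical model, the snippet below clusters signals by correlation distance. The use of hierarchical clustering and the cut threshold are assumptions for illustration, not the authors' grouping strategy.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
signals = np.column_stack([
    np.sin(t) + 0.1 * rng.standard_normal(t.size),        # sensor A
    np.sin(t + 0.2) + 0.1 * rng.standard_normal(t.size),  # sensor B, correlated with A
    np.cos(3 * t) + 0.1 * rng.standard_normal(t.size),    # sensor C, independent
])

corr = np.corrcoef(signals, rowvar=False)
dist = 1.0 - np.abs(corr)                  # correlated signals -> small distance
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist, checks=False), method="average")
groups = fcluster(Z, t=0.5, criterion="distance")
print("group label per signal:", groups)   # A and B should share a label
```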

3.

From small smart devices such as smartphones and smartwatches, to large-scale applications such as smart homes and intelligent connected vehicles, and on to smart living and smart agriculture, artificial intelligence has gradually entered people's lives and is changing traditional lifestyles. The wide variety of smart devices generates massive amounts of data, and the traditional cloud computing model can no longer cope with this new environment. Edge computing processes data at the edge, close to the data source, which effectively reduces transmission latency, relieves pressure on network bandwidth, and improves data privacy and security. Building artificial intelligence models on an edge computing architecture, training them and running inference there, and thereby making the edge intelligent is of great importance to today's society. The resulting new interdisciplinary field, edge intelligence (EI), has begun to attract wide attention. This paper comprehensively surveys research on edge intelligence. First, it introduces the fundamentals of edge computing and artificial intelligence and presents the background, motivation, and challenges behind edge intelligence. Second, it discusses edge intelligence research from three angles: the problems edge intelligence must solve, research on edge intelligence models, and the optimization of edge intelligence algorithms. It then introduces typical security issues in edge intelligence. Finally, it describes applications in smart industry, smart living, and smart agriculture, and looks ahead to future directions and prospects for edge intelligence.


4.
Research on earthquake precursor anomalies has so far focused mainly on "thermal" and "electrical" signals and has rarely involved GPS data from reference stations. However, researchers have shown that the GPS coordinate time series of reference stations near an epicenter also contain precursor information about large earthquakes. This study examines three representative earthquakes that occurred in the continental United States between 2001 and 2010. Martingale theory is applied to GPS data processing for the first time, and an anomaly extraction algorithm is proposed; GPS data from several reference stations near the epicenters are then analyzed before and after each earthquake. The experimental results show that the algorithm effectively captures the anomalous trends in GPS data around large earthquakes, opening up further possibilities for using GPS data to forecast large earthquakes.
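The abstract describes the pipeline (a strangeness-based anomaly score fed into a martingale test over the time series) without giving formulas, so the snippet below is a hedged stand-in using the standard power-martingale construction over p-values. The strangeness measure, epsilon, and alarm threshold are all assumptions, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
series = np.concatenate([rng.normal(0, 1, 300),   # stand-in "normal" coordinates
                         rng.normal(3, 1, 50)])   # simulated anomalous period

eps, M, history = 0.92, 1.0, []
for i, x in enumerate(series):
    history.append(x)
    s = np.abs(np.asarray(history) - np.mean(history))   # strangeness scores
    theta = rng.uniform()
    # randomized p-value of the newest point among the history
    p = (np.sum(s > s[-1]) + theta * np.sum(s == s[-1])) / len(s)
    M *= eps * p ** (eps - 1.0)                          # power martingale update
    if M > 20.0:                                         # assumed alarm threshold
        print(f"anomaly flagged at sample {i}, martingale = {M:.1f}")
        break
```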

5.
《Information & Management》2016,53(2):265-278
With the increasing popularity of mobile applications, more and more e-commerce websites provide mobile shopping services that let consumers access their products and services through an additional mobile channel. The question arises whether the new channel brings new sales or merely shifts consumers from the web to the mobile channel. Using two and a half years of transaction data from an e-commerce company that expanded its web service onto a mobile platform, we investigated the impact of the newly introduced mobile channel on the sales of the incumbent web channel, and whether it could stimulate new consumption. Our empirical results indicate that after the adoption of the mobile channel, purchases on the web channel were slightly cannibalized; however, consumers' purchases increased overall, suggesting that the positive synergy effect of the new channel overrode the negative cannibalization effect. Our investigation contributes to the multichannel e-commerce literature by empirically testing the cross-channel effects of a new mobile channel, and it provides insights for e-retailers interested in introducing one.
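As a back-of-the-envelope illustration only (simulated numbers, not the paper's data or econometric model), the toy regression below shows the qualitative pattern the abstract reports: a post-launch dummy picks up a small negative effect on web sales but a positive effect on total sales.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
weeks = np.arange(104, dtype=float)
post = (weeks >= 52).astype(float)                    # mobile launch mid-sample
web = 100.0 - 5.0 * post + rng.normal(0, 3, 104)      # slight web cannibalization
mobile = (20.0 + rng.normal(0, 3, 104)) * post        # new-channel sales
total = web + mobile

X = sm.add_constant(np.column_stack([post, weeks]))   # intercept, post, trend
print("web:  ", sm.OLS(web, X).fit().params.round(2))    # post coef ~ -5
print("total:", sm.OLS(total, X).fit().params.round(2))  # post coef ~ +15
```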

6.
One of the most difficult challenges for speaker recognition is dealing with channel variability. In this paper, several new cross-channel compensation techniques are introduced for a Gaussian mixture model-universal background model (GMM-UBM) speaker verification system. These techniques include wideband noise reduction, echo cancellation, a simplified feature-domain latent factor analysis (LFA), and data-driven score normalization. A novel dynamic Gaussian selection algorithm is developed that reduces the feature compensation time by more than 60% without any performance loss. The performance of the different techniques across varying channel train/test conditions is presented and discussed. We find that speech enhancement, long neglected for telephone speech, is essential for cross-channel tasks, and that the channel compensation techniques developed for telephone speech also perform effectively. A per-microphone performance analysis further shows that speech enhancement can greatly boost the effects of the other techniques, especially on channels with larger signal-to-noise ratio (SNR) variance. All results are reported on NIST SRE 2006 and 2008 data and show a promising performance gain over the baseline. The developed system is also compared with other state-of-the-art speaker verification systems; it obtains comparable or even better performance while consuming much less CPU time, making it more suitable for practical use.
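For context, the sketch below shows the standard top-C Gaussian selection trick that GMM-UBM systems use to cut per-frame scoring cost. The paper's dynamic selection algorithm is more elaborate and is not reproduced here; the features are random stand-ins, and C and the model sizes are assumed values.

```python
import numpy as np
from scipy.special import logsumexp
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
ubm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
ubm.fit(rng.standard_normal((2000, 12)))        # stand-in background features

frames = rng.standard_normal((100, 12))         # stand-in test utterance

# per-frame, per-component weighted log-likelihood under diagonal covariances
d2 = (frames[:, None, :] - ubm.means_[None]) ** 2 / ubm.covariances_[None]
log_g = -0.5 * (d2.sum(-1) + np.log(2 * np.pi * ubm.covariances_).sum(-1))
wlog = log_g + np.log(ubm.weights_)

C = 3                                           # score only the top C of 8
top = np.argpartition(wlog, -C, axis=1)[:, -C:]
frame_ll = logsumexp(np.take_along_axis(wlog, top, axis=1), axis=1)
print(f"approximate per-frame log-likelihood: {frame_ll.mean():.3f}")
```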

7.
陈力,丁世飞,于文家 《计算机应用》2020,40(12):3451-3457
To address the excessive parameter counts and high computational complexity of traditional convolutional neural networks, a lightweight CNN architecture named C-Net is proposed, based on cross-channel fusion and cross-module connections. First, a cross-channel fusion method is introduced; it largely overcomes the lack of information flow between the groups of a group convolution and enables simple, efficient communication between groups. Second, a cross-module connection method is proposed; it overcomes the isolation of the basic building blocks in traditional lightweight architectures and fuses information between modules that share the same feature-map resolution within a stage, thereby strengthening feature extraction. Finally, a novel lightweight CNN architecture, C-Net, is designed on the basis of these two methods. C-Net achieves 69.41% accuracy on the Food_101 dataset and 63.93% on the Caltech_256 dataset. Experimental results show that, compared with state-of-the-art lightweight CNN models, C-Net reduces both storage overhead and computational complexity. Ablation experiments on the Cifar_10 dataset verify the effectiveness of the two proposed methods.
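The abstract does not spell out the cross-channel fusion operator, so the sketch below shows the closely related channel-shuffle idea (interleaving channels across the groups of a group convolution) as one plausible reading. Treat it as an assumption, not C-Net's actual module.

```python
import torch

def cross_channel_fuse(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Interleave channels across groups so every group sees every other."""
    n, c, h, w = x.shape
    assert c % groups == 0
    # (n, groups, c//groups, h, w) -> swap group/channel axes -> flatten back
    return x.view(n, groups, c // groups, h, w).transpose(1, 2).reshape(n, c, h, w)

x = torch.arange(8.0).view(1, 8, 1, 1)   # 8 channels, to be read as 2 groups of 4
print(cross_channel_fuse(x, groups=2).flatten().tolist())
# channels interleave across the two groups: 0, 4, 1, 5, 2, 6, 3, 7
```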

8.
Keyword-based ads are becoming the dominant form of online advertising because they enable customization and tailoring of messages to potential consumers. Two prominent channels in this sphere are the search channel and the content channel. We empirically examine the interaction between these two channels. Our results indicate significant cannibalization across the two channels, as well as significantly diminishing returns to impressions within each channel. This suggests that under certain conditions both channels may need to be used to optimize returns to advertising, both for advertisers and for service providers such as Google. Our game-theoretic analysis, which builds on our empirical findings, reveals that for intermediate budgets it is optimal to use both channels, whereas for very low (very high) budgets it is optimal to use only the content (search) channel. Further, as the budget increases, the advertiser should offer more for ads displayed on the search channel to optimally incentivize the service provider.
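To make the budget intuition concrete, here is a toy numerical optimization under assumed concave per-channel returns with a cannibalization penalty. The functional forms are ours, not the paper's game-theoretic model, so this will not reproduce its corner solutions; it only shows how such a split can be computed.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def total_return(s, B, a=1.0, b=0.8, k=0.15):
    """Assumed concave returns on search spend s and content spend B - s."""
    c = B - s
    return a * np.sqrt(s) + b * np.sqrt(c) - k * np.sqrt(s * c)  # cannibalization

for B in (0.5, 4.0, 40.0):
    res = minimize_scalar(lambda s: -total_return(s, B),
                          bounds=(0.0, B), method="bounded")
    print(f"budget {B:5.1f}: search {res.x:6.2f}, content {B - res.x:6.2f}")
```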

9.
Journal of Computer Virology and Hacking Techniques - An intrusion detection system inspired by the human immune system is described: a custom artificial immune system that monitors a local area...
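The abstract above is truncated, so as generic background only (not the paper's algorithm), the sketch below shows the classic negative-selection idea behind many immune-inspired intrusion detectors: generate detectors that fail to match "self" traffic and flag anything a detector does match. Feature space, radius, and ensemble size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
self_set = rng.uniform(0.0, 0.5, size=(200, 2))   # features of "normal" traffic
r = 0.1                                           # matching radius (assumed)

detectors = []
while len(detectors) < 50:                        # censor candidates against self
    d = rng.uniform(0.0, 1.0, size=2)
    if np.min(np.linalg.norm(self_set - d, axis=1)) > r:
        detectors.append(d)
detectors = np.array(detectors)

def looks_anomalous(x):
    """A sample is flagged when any surviving detector matches it."""
    return bool(np.any(np.linalg.norm(detectors - x, axis=1) <= r))

print(looks_anomalous(np.array([0.25, 0.25])))    # inside self region: expect False
print(looks_anomalous(np.array([0.90, 0.90])))    # far from self: expect True
```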

10.
Understanding the behavior of large-scale distributed systems is generally extremely difficult, as it requires observing a very large number of components over long periods. Most analysis tools for distributed systems gather basic information such as individual processor or network utilization. Although scalable, thanks to the data reduction applied before analysis, these tools are often insufficient to detect or fully understand anomalies in the dynamic behavior of resource utilization and their influence on application performance. In this paper, we propose a methodology for detecting resource usage anomalies in large-scale distributed systems. The methodology relies on four functionalities: characterized trace collection, multi-scale data aggregation, specifically tailored user interaction techniques, and visualization techniques. We show the efficiency of this approach through the analysis of simulations of the volunteer computing Berkeley Open Infrastructure for Network Computing (BOINC) architecture. Three scenarios are analyzed: the resource sharing mechanism, resource usage considering response time instead of throughput, and the effect of input file size on the BOINC architecture. The results show that our methodology makes it easy to identify resource usage anomalies such as unfair resource sharing, contention, moving network bottlenecks, and harmful short-term resource sharing.
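A minimal illustration of the multi-scale aggregation idea on synthetic data (not the paper's BOINC traces): the same utilization trace viewed at a coarse scale hides a short contention burst that a finer scale exposes.

```python
import numpy as np
import pandas as pd

idx = pd.date_range("2024-01-01", periods=3600, freq="s")
rng = np.random.default_rng(5)
usage = pd.Series(0.5 + 0.02 * rng.standard_normal(3600), index=idx)
usage.iloc[1800:1830] = 1.0                      # 30-second saturation burst

coarse = usage.resample("10min").mean()          # coarse aggregation
fine = usage.resample("30s").mean()              # fine aggregation
print("coarse-scale max:", round(float(coarse.max()), 3))  # burst nearly averaged away
print("fine-scale max:  ", round(float(fine.max()), 3))    # burst stands out (~1.0)
```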

11.
《Computer Networks》2008,52(14):2663-2676
In this paper, we focus on passive measurements of TCP traffic. We propose a heuristic technique to classify TCP anomalies, i.e., segments that have a sequence number different from the expected one, such as out-of-sequence and duplicate segments. Since TCP is a closed-loop protocol that infers network conditions from packet losses and reacts accordingly, the ability to carefully distinguish the causes of anomalies in TCP traffic is very appealing and may be instrumental in understanding TCP behavior in real environments. We apply the proposed heuristic to traffic traces collected at both network edges and backbone links. Comparing results obtained from traces collected over several years, we observe phenomena such as the impact of the introduction of TCP SACK, which reduces unnecessary retransmissions, and the large percentage of network reordering. By further studying the statistical properties of TCP anomalies, we find that, while their aggregate exhibits long-range dependence, the anomalies suffered by individual long-lived flows are, on the contrary, uncorrelated. Interestingly, no dependence on the actual link load is observed.
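A hedged sketch of the kind of sequence-number rules such a heuristic builds on (the paper's classifier also uses timing and other header fields; the simplified rules and thresholds below are assumptions for illustration):

```python
def classify_segment(seq, length, expected_seq, seen_payloads, rtt, gap):
    """Classify one observed TCP data segment against per-flow state."""
    if seq == expected_seq:
        return "in-sequence"
    if seq > expected_seq:
        return "out-of-sequence (hole ahead: loss or reordering upstream)"
    # seq < expected_seq: these bytes were already covered
    if (seq, length) in seen_payloads:
        # same payload again: a retransmission if at least an RTT has
        # passed, otherwise more likely a network-duplicated segment
        return "retransmission" if gap >= rtt else "network duplicate"
    return "overlapping/partial retransmission"

state = {"expected_seq": 2000, "seen": {(1000, 500), (1500, 500)}}
print(classify_segment(3000, 500, state["expected_seq"], state["seen"], 0.2, 0.01))
print(classify_segment(1500, 500, state["expected_seq"], state["seen"], 0.2, 0.50))
```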

12.
This paper addresses the problem of anomaly detection and discrimination in complex behaviours, where anomalies are subtle and difficult to detect owing to the complex temporal dynamics and correlations among multiple objects' behaviours. Specifically, we decompose a complex behaviour pattern according to its temporal characteristics or spatial-temporal visual contexts. The decomposed behaviour is then modelled using a cascade of Dynamic Bayesian Networks (CasDBNs). In contrast to existing standalone models, the proposed behaviour decomposition and cascade modelling offer a distinct advantage in simplicity for complex behaviour modelling. Importantly, the decomposition and cascade structure map naturally onto the structure of complex behaviour, allowing for more effective detection of subtle anomalies in surveillance videos. Comparative experiments using both indoor and outdoor data demonstrate that, in addition to the novel capability of discriminating different types of anomalies, the proposed framework outperforms existing methods in detecting durational anomalies in complex behaviours and subtle anomalies that are difficult to detect when objects are viewed in isolation.

13.
Analysis of anomalies that occur during operations is an important means of improving the quality of current and future software. Although the benefits of anomaly analysis of operational software are widely recognized, there has been relatively little research on anomaly analysis of safety-critical systems. In particular, the patterns in software anomaly data for operational, safety-critical systems are not well understood. We present the results of a pilot study using orthogonal defect classification (ODC) to analyze nearly two hundred such anomalies on seven spacecraft systems. These data show several unexpected classification patterns, such as the causal role of difficulties in accessing or delivering data, of hardware degradation, and of rare events. The anomalies often revealed latent software requirements that were essential for robust, correct operation of the system. The anomalies also caused changes to documentation and to operational procedures to prevent the same anomalous situations from recurring. Feedback from operational anomaly reports helped measure the accuracy of assumptions about operational profiles, identified unexpected dependencies among the embedded software and its systems and environment, and indicated needed improvements to the software, the development process, and the operational procedures. The results indicate that, for long-lived, critical systems, analysis of the most severe anomalies can be a useful mechanism both for maintaining safer deployed systems and for building safer, similar systems in the future.

14.
Loda: Lightweight on-line detector of anomalies
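The entry above carries only the title. For background, Loda (Pevný, 2016) is commonly described as an ensemble of sparse random projections, each fitted with a one-dimensional histogram density model, scoring a point by its average negative log-density across projections; the sketch below renders that idea with assumed bin counts and ensemble size, and out-of-range values are simply clipped into the nearest edge bin.

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.standard_normal((1000, 10))             # training data, mostly "normal"
k, d = 100, X.shape[1]
nonzero = max(1, int(np.sqrt(d)))               # sparse projections: ~sqrt(d) nonzeros

projections, histograms = [], []
for _ in range(k):
    w = np.zeros(d)
    idx = rng.choice(d, size=nonzero, replace=False)
    w[idx] = rng.standard_normal(nonzero)
    hist, edges = np.histogram(X @ w, bins=20, density=True)
    projections.append(w)
    histograms.append((hist, edges))

def loda_score(x):
    """Higher score = more anomalous (negative mean log-density)."""
    logs = []
    for w, (hist, edges) in zip(projections, histograms):
        i = np.clip(np.searchsorted(edges, float(x @ w)) - 1, 0, len(hist) - 1)
        logs.append(np.log(hist[i] + 1e-12))
    return -np.mean(logs)

print(loda_score(np.zeros(10)), loda_score(10 * np.ones(10)))  # normal vs outlier
```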

15.
The increasingly widespread use of large-scale 3D virtual environments has translated into increasing effort required from designers, developers, and testers. While considerable research has been conducted into assisting the design of virtual world content and mechanics, to date only limited contributions have been made regarding automatic testing of the underpinning graphics software and hardware. In this paper, two novel neural network-based approaches are presented to predict the correct visualization of 3D content. Multilayer perceptrons and self-organizing maps are trained to learn the normal geometric and color appearance of objects from validated frames and are then used to detect novel or anomalous renderings in new images. Our approach is general, since the appearance of the object is learned rather than explicitly represented. Experiments were conducted on a game engine to determine the applicability and effectiveness of our algorithms. The results show that neural network technology can be used effectively to address the problem of automatic and reliable visual testing of 3D virtual environments.
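As a rough sketch of the "learn normal appearance, flag deviations" idea only (the feature choice, network size, and data are assumptions, and the paper additionally uses self-organizing maps), an MLP can be trained to reconstruct color histograms of validated frames and score new frames by reconstruction error:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
normal = rng.dirichlet(np.full(16, 8.0), size=500)   # 16-bin color histograms
ae = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
ae.fit(normal, normal)                               # identity target = autoencoder

def anomaly_score(hist):
    """Mean squared reconstruction error of one histogram."""
    return float(np.mean((ae.predict(hist[None]) - hist) ** 2))

good = rng.dirichlet(np.full(16, 8.0))               # typical rendering
bad = rng.dirichlet(np.full(16, 0.3))                # degenerate color distribution
print(anomaly_score(good), anomaly_score(bad))       # bad should score higher
```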

16.
A Δ-shaped data structure is proposed that, under certain conditions, is more compact and efficient than existing ones. The method has excellent storage efficiency (6E, with E being the total number of edges) and provides improved access time in a virtual memory environment. The discussion covers previous work, virtual memory and databases, determining record access costs, constant-time data schemes, multiple entities, and implementation methods.
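The Δ-shaped structure itself is not described in the abstract, so as a generic point of comparison the sketch below shows a flat, offset-indexed adjacency layout whose storage grows linearly in E, the property the abstract emphasizes. It is not the paper's structure.

```python
import numpy as np

edges = [(0, 1), (0, 2), (1, 2), (2, 3)]          # undirected edge list
n = 4
deg = np.zeros(n, dtype=np.int64)
for u, v in edges:
    deg[u] += 1; deg[v] += 1
offsets = np.concatenate([[0], np.cumsum(deg)])   # n+1 offsets into nbrs
nbrs = np.empty(offsets[-1], dtype=np.int64)      # exactly 2E neighbor slots
cursor = offsets[:-1].copy()
for u, v in edges:                                # fill both directions
    nbrs[cursor[u]] = v; cursor[u] += 1
    nbrs[cursor[v]] = u; cursor[v] += 1

v = 2                                             # O(deg) contiguous access
print("neighbors of", v, "->", nbrs[offsets[v]:offsets[v + 1]].tolist())  # [0, 1, 3]
```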

17.
Businesses are naturally interested in detecting anomalies in their internal processes, because these can be indicators of fraud and inefficiencies. Within the domain of business intelligence, classic anomaly detection is not frequently researched. In this paper, we propose a method, using autoencoders, for detecting and analyzing anomalies occurring in the execution of a business process. Our method does not rely on any prior knowledge about the process and can be trained on a noisy dataset that already contains the anomalies. We demonstrate its effectiveness by evaluating it on 700 different datasets and testing its performance against three state-of-the-art anomaly detection methods. This paper is an extension of our previous work from 2016 (Nolle et al., Unsupervised anomaly detection in noisy business process event logs using denoising autoencoders. In: International Conference on Discovery Science, Springer, pp. 442-456, 2016). Compared to the original publication, we have further refined the approach in terms of performance and conducted an elaborate evaluation on more sophisticated datasets, including real-life event logs from the Business Process Intelligence Challenges of 2012 and 2017. In our experiments, our approach reached an F1 score of 0.87, whereas the best unaltered state-of-the-art approach reached an F1 score of 0.72. Furthermore, our approach can be used to analyze the detected anomalies in terms of which event within one execution of the process causes the anomaly.
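A minimal sketch of the two steps the abstract describes, on toy data: one-hot encode fixed-length activity traces, train an autoencoder on a noisy log, and attribute reconstruction error to individual event positions. The network size, noise rate, and trace length are assumptions; the paper uses denoising autoencoders on real event logs.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

acts, T = 5, 4                                   # 5 activity types, 4 events/trace
rng = np.random.default_rng(8)
normal_trace = np.array([0, 1, 2, 3])            # the "correct" process path
X = np.zeros((300, T * acts))
for i in range(300):
    tr = normal_trace.copy()
    if rng.uniform() < 0.05:                     # the training log itself is noisy
        tr[rng.integers(T)] = rng.integers(acts)
    X[i] = np.eye(acts)[tr].ravel()

ae = MLPRegressor(hidden_layer_sizes=(6,), max_iter=3000, random_state=0)
ae.fit(X, X)

def event_errors(trace):
    """Reconstruction error summed per event position."""
    x = np.eye(acts)[trace].ravel()
    err = (ae.predict(x[None])[0] - x) ** 2
    return err.reshape(T, acts).sum(axis=1)

print(event_errors(np.array([0, 1, 2, 3])).round(3))  # low everywhere
print(event_errors(np.array([0, 4, 2, 3])).round(3))  # spike at position 1
```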

18.
《Information & Management》1995,28(3):177-184
Expert systems are emerging as a powerful technology for solving many problems that previously required human experts. However, maintenance has been identified as a major difficulty in expert system implementations. Surprisingly, the problem of maintenance has only recently begun to receive attention in expert systems research, though it has long been an issue in databases. Databases are in a constant state of change, and the prevention of maintenance anomalies is essential. As similar maintenance operations are performed on rule bases, this paper investigates techniques to avoid maintenance anomalies in expert system rule bases. The result is an expert system rule base structure that is appropriate for volatile production use. In addition to lowering maintenance demands, this approach favorably impacts verification, computational efficiency, and storage requirements.
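For illustration, two classic rule-base maintenance anomalies can be checked mechanically. The rule representation below (premise set, action) and the checks are simplifications for the sketch, not the article's proposed rule base structure.

```python
rules = [
    ({"fever", "cough"}, "flu"),
    ({"fever", "cough", "fatigue"}, "flu"),   # subsumed by the rule above
    ({"fever", "cough"}, "not_flu"),          # conflicts with the first rule
]

for i, (prem_i, act_i) in enumerate(rules):
    for j, (prem_j, act_j) in enumerate(rules):
        if i >= j:
            continue
        if prem_i <= prem_j and act_i == act_j:       # weaker premises, same action
            print(f"redundant: rule {j} is subsumed by rule {i}")
        if prem_i == prem_j and act_i != act_j:       # same premises, different action
            print(f"conflict: rules {i} and {j} fire together with different actions")
```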

19.
《Knowledge》1999,12(7):341-353
Despite a surge of publications on the verification and validation (V&V) of knowledge-based systems and expert systems in the past decade, there are still gaps in the study of V&V of expert systems, not the least of which is the lack of appropriate semantics for expert system programming languages. Without a semantics, it is hard to formally define and analyze knowledge base anomalies such as inconsistency and redundancy, and it is hard to assess the effectiveness of the V&V tools, methods, and techniques that have been developed or proposed. In this paper, we develop an approximate declarative semantics for rule-based knowledge bases and provide a formal definition and analysis of knowledge base inconsistency, redundancy, circularity, and incompleteness in terms of theories in first-order predicate logic. We offer classifications of commonly found cases of inconsistency, redundancy, circularity, and incompleteness. Finally, general guidelines on how to remedy knowledge base anomalies are given.
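One anomaly from the paper's taxonomy, circularity, lends itself to a compact illustration: build a "conclusion feeds premise" graph over the rules and look for a cycle. The representation below is a simplification of the paper's logic-based definitions, not its semantics.

```python
rules = {                 # rule name -> (premise atoms, conclusion atom)
    "r1": ({"a"}, "b"),
    "r2": ({"b"}, "c"),
    "r3": ({"c"}, "a"),   # closes the cycle a -> b -> c -> a
}

# edge r -> s whenever r's conclusion appears among s's premises
edges = {r: [s for s, (prem, _) in rules.items() if concl in prem]
         for r, (_, concl) in rules.items()}

def has_cycle(graph):
    """Depth-first search with three-color marking."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in graph}
    def dfs(v):
        color[v] = GRAY
        for w in graph[v]:
            if color[w] == GRAY or (color[w] == WHITE and dfs(w)):
                return True
        color[v] = BLACK
        return False
    return any(color[v] == WHITE and dfs(v) for v in graph)

print("circularity detected:", has_cycle(edges))  # True
```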
