Similar Literature
 20 similar documents retrieved (search time: 15 ms)
1.
Machine Translation - Existing work on the animation of signing avatars often relies on pure procedural techniques or on the playback of Motion Capture (MoCap) data. While the first solution...

2.
Existing deep-learning-based neural network models are usually trained on a single corpus. To address this, a Chinese word segmentation method based on large-scale joint learning over multiple corpora is proposed. The corpora are simplified-Chinese datasets (PKU, MSRA, CTB6) and traditional-Chinese datasets (CITYU, AS); for each dataset, a pair of marker symbols is added at the beginning and the end of every input sentence. Experiments in which BLSTM (bidirectional long short-term memory) and CRF (conditional random field) models are trained on each dataset separately and on all corpora jointly show that large-scale joint training over multiple corpora achieves good segmentation performance.
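To illustrate the corpus-marker idea described above, here is a minimal sketch (my own illustration, not the authors' code) of a shared BiLSTM tagger trained on several corpora, where each sentence is wrapped with corpus-specific begin/end markers such as <pku> ... </pku>; the CRF output layer used in the paper is omitted for brevity.

```python
# Minimal sketch: one shared BiLSTM tagger for multiple segmentation corpora.
# Each sentence is wrapped with corpus-specific markers so the model can
# learn corpus-dependent conventions while sharing all other parameters.
import torch
import torch.nn as nn

class SharedBiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, num_tags=4, emb_dim=128, hidden=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, num_tags)  # B/M/E/S segmentation tags

    def forward(self, char_ids):
        h, _ = self.bilstm(self.emb(char_ids))
        return self.out(h)                          # per-character tag scores

def wrap_with_corpus_markers(sentence, corpus, char2id):
    """Wrap a sentence with corpus-specific markers, e.g. <pku> ... </pku>."""
    tokens = [f"<{corpus}>"] + list(sentence) + [f"</{corpus}>"]
    return torch.tensor([[char2id.get(t, 1) for t in tokens]])  # 1 = <unk>
```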

3.
A conceptual schema can be viewed as a language for describing the phenomena in a system to be modelled, i.e. a set of derivation rules and integrity constraints as well as a set of event rules describing the behaviour of an object system. In this paper, we investigate the relationship between the Lyee software requirements concepts and various constructs in conceptual modelling. Within our work we choose the Unified Modelling Language (UML) as the modelling notation for explaining conceptual models. The result is a mapping of a fully expressive set of UML and First-Order Logic constructs into Lyee concepts.

4.
5.
For context-dependent label generation in Mandarin statistical parametric speech synthesis, a six-layer context-dependent label format is designed, covering the initial/final layer, syllable layer, word layer, prosodic-word layer, prosodic-phrase layer and sentence layer. The input Chinese sentence is first text-normalized, and syntactic analysis is used to obtain its structure and word segmentation; grapheme-to-phoneme conversion yields the initial, final and tone of each Chinese character; and a TBL (Transformation-Based error-driven Learning) algorithm predicts the prosodic-word and prosodic-phrase boundaries of the input text. On this basis, the initial/final information and the contextual structure information of each character are obtained, producing the context-dependent labels required for statistical parametric speech synthesis. A Mandarin statistical parametric speech synthesis system based on hidden Markov models (HMMs), with initials and finals as synthesis units, is then built, and subjective and objective experiments evaluate how different label information affects the quality of the synthesized speech. The results show that the richer the context-dependent label information, the better the quality of the synthesized speech.
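As an illustration of how such six-layer labels can be assembled, the sketch below (hypothetical field names and delimiters, not the paper's exact label format) builds a context-dependent label string for one initial/final unit from phone-, syllable-, word-, prosodic-word-, prosodic-phrase- and sentence-level features.

```python
# Minimal sketch: assemble one context-dependent label from six layers of
# context features. Field names and separators are illustrative only.
def build_context_label(unit):
    return (
        f"{unit['prev_phone']}-{unit['phone']}+{unit['next_phone']}"
        f"/A:{unit['tone_prev']}_{unit['tone']}_{unit['tone_next']}"
        f"/B:{unit['pos_in_syllable']}_{unit['syllable_len']}"
        f"/C:{unit['pos_in_word']}_{unit['word_len']}"
        f"/D:{unit['pos_in_prosodic_word']}_{unit['prosodic_word_len']}"
        f"/E:{unit['pos_in_prosodic_phrase']}_{unit['prosodic_phrase_len']}"
        f"/F:{unit['pos_in_sentence']}_{unit['sentence_len']}"
    )

example = {
    "prev_phone": "sil", "phone": "zh", "next_phone": "ong1",
    "tone_prev": 0, "tone": 1, "tone_next": 1,
    "pos_in_syllable": 1, "syllable_len": 2,
    "pos_in_word": 1, "word_len": 2,
    "pos_in_prosodic_word": 1, "prosodic_word_len": 2,
    "pos_in_prosodic_phrase": 1, "prosodic_phrase_len": 4,
    "pos_in_sentence": 1, "sentence_len": 8,
}
print(build_context_label(example))
```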

6.
Mainstream sequence labeling is currently implemented with recurrent neural networks (RNNs). Studying RNNs and the sequence labeling problem, this paper proposes an improved multi-layer bidirectional long short-term memory (BLSTM) network in which the BLSTM at every layer performs an information-fusion step, so that the output carries richer contextual information. A joint-task method based on sequence labeling is also presented that performs Chinese word segmentation and punctuation prediction in parallel. Experimental results on public datasets show that the proposed improved multi-layer BLSTM model performs well and raises the classification accuracy of both Chinese word segmentation and punctuation prediction; when both tasks must be performed, the joint-task method greatly reduces system complexity; and both the new model and the joint-task method built on it can be applied to other sequence labeling tasks.
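The sketch below is one possible reading (my own interpretation, not the paper's exact architecture) of a multi-layer BLSTM in which each layer fuses its input with its BLSTM output, with two output heads so that segmentation tags and punctuation tags are predicted jointly.

```python
# Minimal sketch: stacked BiLSTM layers with per-layer input/output fusion
# and two heads for the joint segmentation + punctuation task.
import torch
import torch.nn as nn

class FusedBiLSTMJointTagger(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hidden=128,
                 layers=3, seg_tags=4, punct_tags=5):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.blstms = nn.ModuleList(
            [nn.LSTM(emb_dim if i == 0 else 2 * hidden, hidden,
                     batch_first=True, bidirectional=True) for i in range(layers)]
        )
        self.fuse = nn.ModuleList(
            [nn.Linear((emb_dim if i == 0 else 2 * hidden) + 2 * hidden, 2 * hidden)
             for i in range(layers)]
        )
        self.seg_head = nn.Linear(2 * hidden, seg_tags)      # B/M/E/S
        self.punct_head = nn.Linear(2 * hidden, punct_tags)  # none/comma/period/...

    def forward(self, char_ids):
        x = self.emb(char_ids)
        for blstm, fuse in zip(self.blstms, self.fuse):
            h, _ = blstm(x)
            x = torch.tanh(fuse(torch.cat([x, h], dim=-1)))  # per-layer fusion
        return self.seg_head(x), self.punct_head(x)
```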

7.
To make full use of the hardware resources and computing power of multi-core processors, an optimization scheme that applies multi-core parallel programming to a Chinese word segmentation program is proposed. Based on the characteristics of the maximum forward matching algorithm for Chinese word segmentation, the traditional serial program is converted into a parallel one. Drawing on multi-core parallel programming models, a hybrid parallel programming model is designed; Intel performance analysis tools are used to locate the hotspots and bottlenecks of the algorithm, which are then optimized. Experimental results show that the execution time after optimization is 50%-60% shorter than that of the original serial program, and the speedup of the program is improved, achieving good results.
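A minimal sketch of the parallelization idea (not the authors' implementation, which uses a hybrid parallel model and Intel profiling tools): the text is split into independent chunks and maximum forward matching is run on separate worker processes.

```python
# Minimal sketch: parallel maximum forward matching. Each chunk of text
# (here, one line) is segmented independently on a worker process.
from multiprocessing import Pool

DICT = {"中文", "分词", "程序", "并行", "优化"}   # toy dictionary
MAX_LEN = 4                                       # longest dictionary entry

def forward_max_match(sentence):
    words, i = [], 0
    while i < len(sentence):
        for l in range(min(MAX_LEN, len(sentence) - i), 0, -1):
            if l == 1 or sentence[i:i + l] in DICT:
                words.append(sentence[i:i + l])
                i += l
                break
    return words

if __name__ == "__main__":
    lines = ["中文分词程序", "并行优化中文分词"]
    with Pool(processes=4) as pool:               # one task per chunk of text
        print(pool.map(forward_max_match, lines))
```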

8.
Word alignment is a crucial component in applications that use bilingual resources. Statistical methods are widely used because they are portable and allow simple system building. However, pure statistical methods often incorrectly align functional words in the English–Korean language pair due to differences in the typology of the languages and a lack of knowledge. Knowledge is inevitably required to correct errors and to improve word alignment quality. In this paper, we introduce an effective method that uses an iterative process to incorporate knowledge into the word alignment system. The method achieved significant improvements in word alignment and its application: statistical machine translation.

9.
Fused deposition modeling (FDM) is a rapid prototyping process that builds each cross-sectional layer by filling it; because the direction of the deposition path directly affects the appearance of the part and the build efficiency, choosing a good path generation algorithm is particularly important. Among existing deposition paths, the composite scanning method has become a research focus owing to its many advantages in part accuracy and machining efficiency. Building on the respective strengths of fractal filling and offset (contour-parallel) filling, a composite optimized path method is proposed, and a divide-and-conquer algorithm is used to optimize the generated paths as far as possible, reducing the number of nozzle on/off switches and the stringing phenomenon while ensuring the forming accuracy and strength of the part; the method has been successfully applied in actual machining.

10.
As embedded system applications evolve, efficiency and small footprint are their main characteristics, which places ever higher demands on the quality of the target code. Targeting the specific architectural features of MD 32, a self-designed 32-bit media processor with a RISC DSP structure, a code optimization method supported by the C compiler is proposed that works at the assembly-code level through instruction scheduling and by converting instruction operands and their types, so that efficient parallel instructions are emitted. Statistics show that code execution efficiency improves by 15% on average and code density by 12% on average.

11.
Identifying word evolution is important for understanding cultural and societal changes. The key to accurate identification of word evolution is to distinguish word semantics. Recently, methods based on low-dimensional embedding representations of words have been proposed, but they require the alignment of word embeddings across different time periods. This process is computationally expensive, prohibitively time consuming, and suffers from contextual variability. In this paper, we propose a method to learn low-dimensional time-aware embeddings using both statistical and POS (part-of-speech) tagging information of words. Moreover, the proposed method bypasses the computationally expensive step of aligning the word embeddings by tagging each word with a time prefix and encoding all periods into a common vector space. The learnt temporal embeddings better reveal semantic changes over time. A comprehensive experiment is conducted on Google Books N-gram (spanning 100 years). Compared with three other top-performing temporal embedding methods (PPMI, SVD, SGNS), our method achieves state-of-the-art results in terms of time complexity, precision, recall, F1-score, and the number of words identified to have changed in meaning. Additionally, we provide an intuitive illustration of the semantic evolution of interesting words identified with our method.
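The time-prefix idea can be illustrated with a short sketch (my own toy example, not the paper's pipeline): every token is tagged with its time period before training a single embedding model, so all periods share one vector space and no post-hoc alignment step is needed; gensim's Word2Vec is used here purely for illustration.

```python
# Minimal sketch: time-prefixed tokens trained in one common embedding space.
from gensim.models import Word2Vec

corpus_by_decade = {
    "1900": [["the", "gay", "party", "was", "merry"]],
    "2000": [["the", "gay", "rights", "movement", "grew"]],
}

sentences = [
    [f"{decade}_{w}" for w in sent]
    for decade, sents in corpus_by_decade.items()
    for sent in sents
]

model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, sg=1, epochs=50)

# Vectors for the same word in different periods live in one space, so their
# cosine similarity directly measures semantic change over time.
print(model.wv.similarity("1900_gay", "2000_gay"))
```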

12.
This paper aims to reduce the amount of prebuffering required to ensure maximum video continuity in streaming. Current approaches do this by slowing the playout frame rate of the decoder, a technique known as adaptive media playout (AMP). However, doing so introduces playout distortion for viewers, as the video is played more slowly than its natural playout rate. We instead propose a frame rate control scheme that jointly adjusts the frame generation rate of the encoder and the playout frame rate of the decoder. In addition to using AMP to improve video continuity, we also allow the encoder to increase its frame generation rate. The encoder then sends more frames to the decoder, quickly increasing the number of frames available in the playback buffer and lowering the chance of the buffer underflow that causes discontinuity in video playback. At the same time, the larger number of frames in the playback buffer may mean that the decoder does not need to use AMP to delay playback, thus lowering playout distortion. However, the increase in encoder frame generation rate comes at a price, because frame quality must decrease in order to meet the constraint on available network bandwidth. The scheme therefore needs to find the optimal trade-off between frame quality, playout distortion and video continuity. To do that, we characterize the frame rate control problem using Lyapunov optimization and systematically derive the optimization policies. We also show that these policies can be decoupled into separate encoder and decoder policies, allowing a distributed implementation. Simulation results show significant reductions in prebuffering requirements over a scheme that performs no frame rate control, and lower playout distortion compared to the AMP schemes, while exhibiting only a modest drop in frame quality.
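For reference, the drift-plus-penalty structure that Lyapunov optimization typically produces has the generic form below (a sketch only; the paper's exact virtual-queue definitions and penalty terms are not reproduced here):

```latex
% Generic Lyapunov drift-plus-penalty bound (not the paper's exact formulation).
% Q(t): playback-buffer-related virtual queue, V: trade-off weight,
% p(t): per-slot penalty (e.g. playout distortion plus quality loss).
\begin{aligned}
L\bigl(Q(t)\bigr) &= \tfrac{1}{2}\,Q(t)^{2}, \\
\Delta\bigl(Q(t)\bigr) &= \mathbb{E}\!\left[\,L\bigl(Q(t+1)\bigr) - L\bigl(Q(t)\bigr) \mid Q(t)\,\right], \\
\text{each slot: } & \min_{\text{control actions}}\; \Delta\bigl(Q(t)\bigr) + V\,\mathbb{E}\!\left[\,p(t) \mid Q(t)\,\right].
\end{aligned}
```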

13.
Boubekki, Ahcène; Kampffmeyer, Michael; Brefeld, Ulf; Jenssen, Robert. Machine Learning, 2021, 110(7): 1901-1937
Machine Learning - Deep embedded clustering has become a dominating approach to unsupervised categorization of objects with deep neural networks. The optimization of the most popular methods...

14.
15.
A joint optimization problem covering area traffic control and network flow is investigated. Bilevel programming is used to formulate this joint optimization problem, where the network flow following Wardrop's principles can be obtained by solving traffic assignment problems. In this paper, we present a solution approach for jointly optimizing area traffic control and network flow on the basis of a newly presented algorithm for concurrent flow (Comput. Oper. Res. (2004) in press). We propose three formulations of this joint optimization problem and present a gradient-based method that solves it effectively via a mixture of locally optimal search and a global search heuristic, through which a near-global optimum may be found. Numerical comparisons are made between the values of the performance index achieved by the joint optimization problem with system-optimal flow and those achieved with equilibrium flow, over various sets of initial signal settings. The consistently good results demonstrate the robustness of the proposed algorithm in solving the joint optimization problem for both system-optimal and user-equilibrium flow on large-scale networks.
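The generic bilevel structure of such a joint problem can be written as follows (a sketch in standard notation, not the paper's exact formulation), with the lower level being the Beckmann formulation of Wardrop user equilibrium:

```latex
% Generic bilevel sketch: the upper level chooses signal settings psi to
% minimize a performance index PI, while the lower level returns the
% equilibrium link flows q(psi) under those settings.
\begin{aligned}
\min_{\psi \in \Psi} \quad & PI\bigl(\psi,\, q(\psi)\bigr) \\
\text{s.t.} \quad & q(\psi) \;=\; \arg\min_{q \in \Omega} \; \sum_{a} \int_{0}^{q_a} t_a(x,\psi)\,\mathrm{d}x ,
\end{aligned}
```

where \(\Omega\) is the set of feasible flows satisfying demand conservation and \(t_a(\cdot,\psi)\) is the delay on link \(a\) given signal settings \(\psi\).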

16.
Parallel optimization of ultra-large-scale sequence alignment computation   (cited by: 1; self-citations: 0; citations by others: 1)
This work addresses the problem of ultra-large-scale sequence alignment computation in bioinformatics research. It resolves the memory, I/O and computing-time bottlenecks encountered by the existing e-PCR software package when processing wheat gene primer amplification alignment tasks. Using basic data and task partitioning, the most critical computation, the alignment of primers against templates, is made massively parallel; it is then implemented with an MPI communication framework based on a master-slave communication model and further optimized in terms of task reduction, load balancing, fault tolerance and concurrent multi-job execution. Large-scale parallel computation at the thousand-core level was finally achieved on a 100-teraflops supercomputer, completing within tens of days a wheat sequence amplification alignment computation that had originally been expected to take several years.
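The master-slave MPI pattern mentioned above can be sketched as follows (a minimal mpi4py illustration, not the authors' code; align_chunk is a placeholder for the real primer-vs-template comparison, and the script assumes at least two MPI processes, e.g. mpiexec -n 4):

```python
# Minimal master-worker sketch with mpi4py: rank 0 hands out chunks of
# primer/template alignment tasks and collects the results.
from mpi4py import MPI

def align_chunk(chunk):
    # Placeholder: count primers that occur verbatim in the template.
    template, primers = chunk
    return sum(1 for p in primers if p in template)

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    template = "ACGTACGTTGCA" * 1000
    primers = ["ACGT", "TTGC", "GGGG", "CGTA"] * 100
    # Split the primer list into one chunk per worker (simple static load balance).
    chunks = [(template, primers[i::size - 1]) for i in range(size - 1)]
    for worker, chunk in enumerate(chunks, start=1):
        comm.send(chunk, dest=worker, tag=0)
    hits = sum(comm.recv(source=w, tag=1) for w in range(1, size))
    print("total primer hits:", hits)
else:
    chunk = comm.recv(source=0, tag=0)
    comm.send(align_chunk(chunk), dest=0, tag=1)
```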

17.
Based on an analysis of the mechanical features of a rotary kiln with multiple supports, a general mechanical model for a statically indeterminate kiln with variable bending rigidities, arbitrary supports and complex loads is established. From this model, the equations of the rotational angle and the deformation are deduced, and the general matrix and solution procedure are developed. The correlation between the roller forces and the axis deflections of the no. 2 rotary kiln is derived. To improve kiln performance through kiln axis alignment, a fuzzy optimization model of kiln axis alignment is set up that takes the equilibrium of the roller forces and the minimization of the relative axis deflection as the optimization goals, while considering the fuzzy constraints of axis alignment. The optimization model is solved with the max–min approach. The results show that fuzzy-optimized alignment of the rotary kiln can make the kiln axis as straight as possible and distribute the kiln loads evenly.
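For context, the max–min approach to a fuzzy optimization problem has the generic form below (a standard formulation, not the paper's specific kiln model), where each fuzzy goal or constraint i has a membership function μ_i and the overall satisfaction level λ is maximized:

```latex
% Generic max-min form of a fuzzy optimization problem.
\begin{aligned}
\max_{x,\;\lambda} \quad & \lambda \\
\text{s.t.} \quad & \mu_i(x) \;\ge\; \lambda, \qquad i = 1,\dots,m, \\
& 0 \le \lambda \le 1, \qquad x \in X .
\end{aligned}
```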

18.
This paper presents automatic tools aimed at the generation and adaptation of unstructured tetrahedral meshes in the context of composite or heterogeneous geometry. These tools are primarily intended for applications in the domain of topology optimization methods, but the approach introduced has great potential in a wider context. Indeed, various fields of application requiring the meshing of heterogeneous geometry can be foreseen, such as finite element simulations (e.g. of heterogeneous materials and assemblies) and animation and visualization (e.g. medical imaging). Using B-Rep concepts as well as specific adaptations of advancing-front mesh generation algorithms, the mesh generation approach presented guarantees, in a simple and natural way, mesh continuity and conformity across interior boundaries when meshing a composite domain. When applied in the context of topology optimization methods, this approach ensures that design and non-design sub-domains are meshed so that finite elements are tagged as design or non-design elements and so that continuity and conformity are preserved at the interface between design and non-design sub-domains. The paper also shows how mesh transformation and mesh smoothing tools can be used to derive a functional shape from raw topology optimization results.

19.
Multiple combinations of hardware and network components can be selected to design an information technology (IT) infrastructure that satisfies requirements. The usual professional criterion for dealing with these degrees of freedom is cost minimization. However, a scientific approach has rarely been applied to cost minimization, especially for the joint optimization of hardware and network systems. This paper provides an algorithm that combines hardware and network design issues in a single cost-minimization problem. The algorithm is empirically verified with a database of costs that has also been built as part of this research. The results show how a joint cost-minimization approach provides significant cost reductions with respect to the separate optimization of the hardware and network subsystems. This indicates that the generally adopted divide et impera professional principle can be cost-inefficient compared with a rigorous scientific approach. Sensitivity and scalability analyses show that the algorithm's results are also robust and do not involve cost disadvantages over time.

20.
Over the past decade, the usage of machine learning (ML) techniques has increased substantially in different applications. Federated Learning (FL) refers to collaborative techniques that avoid the exchange of raw data between the nodes in a distributed training task, which addresses important issues such as data privacy, energy consumption, and the limited availability of clean spectral slots. In this work, we investigate the performance of FL updates with edge devices connected to a leading device (LD) over practical wireless links, where uplink updates from the edge devices to the LD are shared without orthogonalizing the resources. In particular, we adopt a non-orthogonal multiple access (NOMA) uplink scheme and analytically investigate its effect on the convergence round (CR) and the accuracy of the FL model. Moreover, we formulate an optimization problem that aims at minimizing the CR and further guarantees communication fairness between the users, while considering the per-device energy consumption figures and the accuracy of the realized global FL model. Monte-Carlo simulations confirm the reliability of our derived analytical expressions and reveal the importance of adopting a joint optimization approach, which achieves a significant reduction in communication latency while taking into account user fairness in the NOMA network, improving the energy consumption figures, and yielding acceptable accuracy levels compared with several baselines.
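As background for the "FL updates" aggregated at the leading device, the sketch below shows one generic federated-averaging round (a toy NumPy illustration; the paper's NOMA uplink, energy and fairness modelling are not represented here):

```python
# Minimal sketch of federated averaging: each edge device runs a few local
# gradient steps, then the leading device averages the models by data size.
import numpy as np

def local_update(global_w, X, y, lr=0.1, epochs=5):
    """One edge device: a few steps of least-squares gradient descent."""
    w = global_w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w, len(y)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(4):                      # four edge devices with local data
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    devices.append((X, y))

global_w = np.zeros(2)
for rnd in range(20):                   # communication rounds at the LD
    updates = [local_update(global_w, X, y) for X, y in devices]
    total = sum(n for _, n in updates)
    global_w = sum(n * w for w, n in updates) / total   # weighted average
print("learned:", global_w, "target:", true_w)
```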
