Similar Documents
20 similar documents found (search time: 0 ms)
1.
To address the small positioning range, low accuracy, and low bit rate of current mineral-resource distribution systems, a GIS-based system for locating mineral-resource distribution regions is proposed and designed. The positioning system is preprocessed after analyzing its architecture; the system hardware mainly comprises an SJA1000 chip, a CAN bus, node-positioning circuitry, and a crystal-oscillator circuit; the software part consists of a search model, an information-weighted model, and a mathematical positioning model, which together locate mineral-resource distribution regions. Experimental results show that the system effectively enlarges the positioning range and improves both positioning accuracy and bit rate.

2.
In this contribution, we present a survey on the radio resource allocation techniques in orthogonal frequency division multiplexing (OFDM) and orthogonal frequency division multiple access (OFDMA) systems. This problem dates back to the 1960s and concerns how to allocate the radio resources, namely subcarriers and power, properly and efficiently. We start by overviewing the main open issues in OFDM. Then, we describe the problem formulation in OFDMA, and we review the existing solutions to allocate the radio resources. The goal is to discuss the fundamental concepts and relevant features of different radio resource management criteria, including water-filling, max–min fairness, proportional fairness, cross-layer optimization, utility maximization, and game theory, also including a toy example with two terminals to compare the performance of the different schemes. We conclude the survey with a review of the state of the art in resource allocation for next-generation wireless networks, including multicellular systems, cognitive radio, and relay-assisted communications, and we summarize the advantages and common problems of the existing solutions available in the literature. The distinguishing feature of this contribution is a tutorial-style introduction to the fundamental problems in this area of research, intended for beginners on this topic.
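As a concrete illustration of the water-filling criterion surveyed above, the following sketch allocates a power budget across subcarriers by the classic iterative water-filling rule. It is a minimal Python illustration under assumed inputs, not code from the survey; the function name and the gain values are hypothetical.

```python
import numpy as np

def water_filling(gains, total_power):
    """Allocate total_power across subcarriers with gain-to-noise ratios
    `gains` via iterative water-filling (maximizes the sum of log(1 + g*p))."""
    gains = np.asarray(gains, dtype=float)
    active = np.ones(len(gains), dtype=bool)
    while True:
        # Water level: active powers p_i = mu - 1/g_i must sum to the budget.
        mu = (total_power + np.sum(1.0 / gains[active])) / active.sum()
        power = np.where(active, mu - 1.0 / gains, 0.0)
        if np.all(power >= -1e-12):
            return np.clip(power, 0.0, None)
        active &= power > 0   # drop subcarriers that would get negative power
```

Better subcarriers receive more power; when the budget is small, weak subcarriers are switched off entirely, which is the behaviour the survey's two-terminal toy example illustrates.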

3.
Taking the minimum overall energy consumption of container cloud platform resources as the objective, a low-energy deployment method for container cloud resources based on a greedy algorithm is designed. Under constraints such as the mapping between physical hosts and virtual machines and between virtual machines and containers, an energy-consumption model of container cloud resources is built from a static and a dynamic component. Through resource virtualization and redundancy removal, a consolidated view of the container cloud resources is obtained. The load state of the physical machines is monitored to determine the source and target physical machines for virtual-machine migration; the greedy algorithm then balances the scheduling of container cloud resource loads, and finally low-energy deployment is achieved through orchestration and recombination of the container cloud resources. Comparison with traditional deployment methods shows that under the optimized deployment method, resource utilization and load balance improve significantly and energy loss drops markedly.
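A minimal sketch of the greedy balancing idea used in the scheduling step: each workload goes to the currently least-loaded host, largest workloads first. This is an illustrative toy (the `greedy_balance` helper and its inputs are hypothetical), not the paper's full energy-aware deployment method.

```python
def greedy_balance(loads, hosts):
    """Greedily place each workload on the currently least-loaded host,
    largest workloads first (LPT order). Returns placement and host totals."""
    placement = {h: [] for h in range(hosts)}
    totals = [0.0] * hosts
    for i, load in sorted(enumerate(loads), key=lambda t: -t[1]):
        h = min(range(hosts), key=totals.__getitem__)   # least-loaded host
        placement[h].append(i)
        totals[h] += load
    return placement, totals
```

Sorting in longest-processing-time order before the greedy pass is a standard refinement that tightens the balance compared with arbitrary-order assignment.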

4.
Xia Qi, Wang Zhongqun. 《计算机应用》 (Journal of Computer Applications), 2012, 32(11): 3067-3070
Resources on the Internet are uncertain and random, so it must be ensured that an Internetware system satisfies its resource requirements at run time. Stochastic resource interface automata are used to formally model the behavior of software components, and a network of stochastic resource interface automata describes the composite behavior of a component-assembled system. Under resource uncertainty, whether the composite system satisfies its resource constraints is verified, and a corresponding algorithm based on the reachability graph is proposed. An online bookstore system is given as an example, and the correctness of the model is verified with the model checker Spin.

5.
Software effort estimation accuracy is a key factor in effectively planning, controlling, and delivering a successful software project within budget and schedule. Both overestimation and underestimation are key challenges for future software development, hence the continuous need for more accurate effort estimation. Researchers and practitioners are striving to identify which machine learning estimation technique gives more accurate results based on evaluation measures, datasets, and other relevant attributes, yet authors of related research are often unaware of previously published results of machine learning effort estimation techniques. The main aim of this study is to help researchers know which machine learning technique yields the most promising effort estimation accuracy in software development. In this article, the performance of machine learning ensemble and solo techniques is investigated on public and non-public domain datasets based on the two most commonly used accuracy evaluation metrics. We used the systematic literature review methodology proposed by Kitchenham and Charters, which includes searching for the most relevant papers, applying quality assessment (QA) criteria, extracting data, and drawing results. We evaluated the state-of-the-art accuracy performance of 35 selected studies (17 ensemble, 18 solo) using mean magnitude of relative error (MMRE) and PRED (25) as a set of reliable accuracy metrics, to answer the research questions stated in this study. We found that machine learning techniques are the most frequently implemented in the construction of ensemble effort estimation (EEE) techniques. The results of this study revealed that EEE techniques usually yield more promising estimation accuracy than solo techniques.
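The two accuracy metrics the review relies on, MMRE and PRED(25), are computed directly from paired actual and estimated efforts. The sketch below is an illustrative implementation of the standard definitions, not code from the study.

```python
def mmre(actual, predicted):
    """Mean Magnitude of Relative Error over paired actual/estimated efforts."""
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

def pred(actual, predicted, level=0.25):
    """PRED(25): fraction of estimates whose relative error is within `level`."""
    hits = sum(abs(a - p) / a <= level for a, p in zip(actual, predicted))
    return hits / len(actual)
```

A lower MMRE and a higher PRED(25) both indicate better estimation accuracy, which is how the 35 selected studies are compared.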

6.
Realistic estimation is a process by which the cost and time of software projects can be predicted. This enables management to set up attainable project objectives, that is, software development organizations deliver what was promised, on time, and within budget. The main benefit is an enhancement of the professional credibility of these organizations. I have observed that some organizational aspects support the deployment of software estimation while others block it. In this paper, I have defined these aspects as Driving Forces and Restraining Forces, following Kurt Lewin's Force-Field Analysis. The purpose of this paper is to review these elements with a view to provoking new thinking.

7.
This paper presents the results of a study comparing the numerical techniques of structural and aerodynamic force models developed using the spline finite strip method with the conventional finite element approach in three-dimensional flutter analysis of cable-stayed bridges. In the new formulation, the bridge girder is modelled by spline finite strips. The mass and stiffness properties of the torsional behaviour of a complex bridge girder, which have a significant influence on the wind stability of long-span bridges, are modelled accurately in the formulation. The effects of the spatial variation of the aerodynamic forces can be taken into account in the proposed numerical model by distributing the loads to the finite strips modelling the bridge deck. A numerical example of a 423 m long-span cable-stayed bridge is presented in the comparison study. The accuracy and effectiveness of the proposed finite strip model are assessed against results obtained from equivalent beam finite element models, and the advantages and disadvantages of these different modelling schemes are discussed.

8.
In an LTE cell, Discontinuous Reception (DRX) allows the central base station to configure User Equipments for periodic wake/sleep cycles, so as to save energy. DRX operations depend on several parameters, which can be tuned to achieve optimal performance with different traffic profiles (i.e., CBR vs. bursty, periodic vs. sporadic, etc.). This work investigates how to configure these parameters and explores the trade-off between power saving, on one side, and per-user QoS, on the other. Unlike previous work, chiefly based on analytical models neglecting key aspects of LTE, our evaluation is carried out via simulation. We use a fully-fledged packet simulator, which includes models of the entire protocol stack, the applications, and the relevant QoS metrics, and employ factorial analysis to assess the impact of the many simulation factors in a statistically rigorous way. This allows us to analyze a wider spectrum of scenarios, assessing the interplay of the LTE mechanisms and DRX, and to derive configuration guidelines.

9.
The timing predictability of Multi-Processor System on Chip (MPSoC) platforms with hard real-time applications is much more challenging than that of traditional platforms due to their large number of shared processing, communication, and memory resources. Yet this is an indispensable challenge for guaranteeing their safe usage in safety-critical domains (avionics, automotive). In this article, a real-time analysis based on model-checking is proposed. The model-checking based method guarantees timing bounds of multiple Synchronous Data Flow Application (SDFA) implementations. The approach utilizes timed automata (TA) as a common semantic model to represent the WCET of software components (SDF actors) and the shared-communication-resource access protocols for buses, DMA, and the private local and shared memories of the MPSoC. The resulting network of TA is analyzed using the UPPAAL model-checker to provide safe timing bounds of the implementation. Furthermore, we show the extension of our previous system model enabling a single-beat inter-processor communication style beside the burst-transfer style, and provide the implementation of the complete set of TA templates capturing the considered system model. We demonstrate our approach using a multi-phase electric motor control algorithm (modeled as SDFA) mapped to Infineon's TriCore-based Aurix multicore hardware platform with both the burst and single-beat inter-processor communication styles. Our approach shows a significant precision improvement (up to 300%) compared with worst-case bound calculations based on pessimistic analytical upper-bound delays for every shared resource access. In addition, scalability is examined to demonstrate analysis feasibility for small parallel systems, with up to 40 actors mapped to a 4-tile platform and up to 96 actors on a 2-tile platform.

10.
Taking the flood resources utilization in Baicheng, Jilin during 2002–2007 as the research background, and based on the entropy weight and multi-level & multi-objective fuzzy optimization theory, this research established a multi-level & semi-constructive index system and dynamic successive evaluation model for comprehensive benefit evaluation of regional flood resources utilization. With the year 2002 as the base year, the analyzing results showed that there existed a close positive correlation between flo...

11.
Fundamental frequency, frequency jitter, and amplitude shimmer voice algorithms were employed to measure the effects of stress in crewmember communications data in simulated AWACS mission scenarios. Two independent workload measures were used to identify levels of stress: 1) a predictor model developed by the simulation author based upon scenario-generated stimulus events, and 2) the duration of communication for each weapons director, representative of the individual's response to the induced stress. Results identified fundamental frequency and frequency jitter as statistically significant vocal indicators of stress, while amplitude shimmer showed no significant relationship with workload or stress. Consistent with previous research, the frequency algorithm was identified as the most reliable measure. However, the results did not reveal a measure that discriminates sensitively between levels of stress; rather, they distinguished between the presence and absence of stress.

12.
Melody Search (MS), an innovative improved version of the Harmony Search (HS) optimization method with a novel Alternative Improvisation Procedure (AIP), is presented in this paper. The MS algorithm mimics the processes of group improvisation for finding the best succession of pitches within a melody. Utilizing different player memories and their interactive process enhances the algorithm's efficiency compared to basic HS, while the possible range of variables can vary over the algorithm's iterations. Moreover, the new improvisation scheme (AIP) makes the algorithm more capable than basic MS of optimizing shifted and rotated unimodal and multimodal problems. To demonstrate the performance of the proposed algorithm, it is successfully applied to various benchmark optimization problems. Numerical results reveal that the proposed algorithm finds better solutions than the well-known HS, IHS, GHS, SGHS, NGHS, and basic MS algorithms. The strength of the new meta-heuristic is that its superiority over the other compared methods increases as the dimensionality of the problem or the feasible range of the solution space increases.
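For readers unfamiliar with the baseline that MS extends, the following is a schematic sketch of basic Harmony Search minimizing a test function. All parameter values (memory size, HMCR, PAR, bandwidth, iteration count) are illustrative assumptions, and the MS-specific player memories and AIP are not modeled.

```python
import random

def harmony_search(obj, dim, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.5, iters=3000, seed=1):
    """Basic Harmony Search (the baseline MS improves on), minimizing `obj`
    over a box. Schematic sketch; parameter values are illustrative."""
    rng = random.Random(seed)
    lo, hi = bounds
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    memory.sort(key=obj)
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:            # memory consideration
                x = rng.choice(memory)[d]
                if rng.random() < par:         # pitch adjustment
                    x += rng.uniform(-bw, bw)
            else:                              # random selection
                x = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, x)))
        if obj(new) < obj(memory[-1]):         # replace the worst harmony
            memory[-1] = new
            memory.sort(key=obj)
    return memory[0]

# Usage: minimize a 5-D sphere function over [-10, 10]^5.
best = harmony_search(lambda v: sum(x * x for x in v), dim=5,
                      bounds=(-10.0, 10.0))
```

MS departs from this baseline by maintaining separate player memories and varying the feasible variable ranges across iterations, which the abstract credits for its advantage on shifted and rotated problems.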

13.

The majority of the world's population now resides in urban environments and information on the internal composition and dynamics of these environments is essential to enable preservation of certain standards of living. Remotely sensed data, especially the global coverage of moderate spatial resolution satellites such as Landsat, Indian Resource Satellite and Système Pour l'Observation de la Terre (SPOT), offer a highly useful data source for mapping the composition of these cities and examining their changes over time. The utility and range of applications for remotely sensed data in urban environments could be improved with a more appropriate conceptual model relating urban environments to the sampling resolutions of imaging sensors and processing routines. Hence, the aim of this work was to take the Vegetation-Impervious surface-Soil (VIS) model of urban composition and match it with the most appropriate image processing methodology to deliver information on VIS composition for urban environments. Several approaches were evaluated for mapping the urban composition of Brisbane city (south-east Queensland, Australia) using Landsat 5 Thematic Mapper data and 1:5000 aerial photographs. The methods evaluated were: image classification; interpretation of aerial photographs; and constrained linear mixture analysis. Over 900 reference sample points on four transects were extracted from the aerial photographs and used as a basis to check output of the classification and mixture analysis. Distinctive zonations of VIS related to urban composition were found in the per-pixel classification and aggregated air-photo interpretation; however, significant spectral confusion also resulted between classes. In contrast, the VIS fraction images produced from the mixture analysis enabled distinctive densities of commercial, industrial and residential zones within the city to be clearly defined, based on their relative amount of vegetation cover. 
The soil fraction image served as an index for areas being (re)developed. The logical match of a low (L)-resolution spectral mixture analysis approach with the moderate spatial resolution image data ensured that the processing model matched the spectrally heterogeneous nature of urban environments at the scale of Landsat Thematic Mapper data.
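The constrained linear mixture analysis step can be sketched as a sum-to-one least-squares unmixing of each pixel against VIS endmember spectra. The endmember values below are made-up illustrations, not the Brisbane study's actual Landsat signatures.

```python
import numpy as np

def unmix(pixel, endmembers, w=1e3):
    """Sum-to-one constrained linear unmixing: solve pixel ~= E @ f with
    sum(f) = 1, enforced here as a heavily weighted extra equation."""
    E = np.asarray(endmembers, dtype=float)          # bands x endmembers
    A = np.vstack([E, w * np.ones(E.shape[1])])      # append constraint row
    b = np.append(np.asarray(pixel, dtype=float), w)
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f

# Illustrative 3-band endmember spectra (columns: vegetation, impervious, soil).
E = np.array([[0.5, 0.2, 0.3],
              [0.4, 0.3, 0.2],
              [0.1, 0.6, 0.4]])
pixel = E @ np.array([0.5, 0.3, 0.2])   # synthetic mixed pixel
fractions = unmix(pixel, E)
```

Each fraction image in the study corresponds to one column of `f` evaluated per pixel; the soil fraction, for instance, flags (re)development as described above.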

14.
Uncertainty and variability modelling tools greatly enhance the value of virtual prototypes at the different design stages of a CAE process. The fuzzy analysis technique is suited to deal with models containing subjective non-deterministic parameters. This technique is finding its way to different disciplines of mechanical engineering. The objective of this paper is to increase the value of this technique in early stages of mechanical design procedures. For this purpose, new numerical procedures are proposed. First, the degree of influence is introduced. This new concept measures the relative effect of highly uncertain design properties on the performance of a design. Next, this paper proposes a new reduced optimisation scheme in order to improve the computational efficiency of the interval analysis, which is at the core of the implementation of the fuzzy technique. The practical applicability of the newly developed procedures is demonstrated on two numerical applications from the automotive industry. The analysed models represent the design at the conceptual stage, and contain parameters with a high and subjective level of uncertainty. The parametrised models are used to demonstrate the value and efficiency of the developed numerical procedures: significant parameters are identified using the degree of influence analysis, the optimal configuration is identified through an interval analysis based on the reduced optimisation scheme, and finally the fuzzy technique is applied as design space exploration tool.

15.
This paper describes the use of an evolutionary design system known as GANNET to synthesize the structure of neural networks. Initial results are presented for two benchmark problems: the exclusive-or and the two-spirals. A variety of performance criteria and design components are used and comparisons are drawn between the performance of genetic algorithms and other related techniques on these problems.

16.
Uncertainty analysis of structural systems by perturbation techniques
The formulation of an efficient method to evaluate the uncertainty of the structural response by applying perturbation techniques is described. Structural random variables are defined by their mean values, standard deviations, and correlations. The uncertainty of structural behaviour is evaluated by the covariance matrix of the response according to the developed perturbation methodology. The procedure used to implement this method in a structural finite element framework is also presented. The implemented computational program allows the mean value and the standard deviation of the structural response, defined in terms of displacements or forces, to be evaluated in a single structural analysis. The proposed method is exact for problems with linear design functions and normally distributed random variables. Results remain accurate for non-linear design functions if they can be approximated by a linear combination of the basic random variables.
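The first-order perturbation step can be illustrated numerically: for a linearized response u = S x, the response mean and covariance follow from the statistics of the basic random variables as mu_u = S mu_x and C_u = S C_x S^T. The sensitivity matrix and input statistics below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative sensitivity (Jacobian) matrix of two response quantities with
# respect to three basic random variables -- assumed values for this sketch.
S = np.array([[1.0, 0.5, 0.0],
              [0.0, 2.0, 1.0]])
mu_x = np.array([10.0, 4.0, 1.0])          # mean values
sigma = np.array([1.0, 0.5, 0.2])          # standard deviations
corr = np.eye(3)                           # correlation matrix (uncorrelated)
C_x = np.outer(sigma, sigma) * corr        # covariance of the basic variables

mu_u = S @ mu_x                            # mean response
C_u = S @ C_x @ S.T                        # covariance of the response
std_u = np.sqrt(np.diag(C_u))              # response standard deviations
```

This mirrors the property stated in the abstract: for linear design functions and normal variables the propagated mean and covariance are exact, and both come out of a single analysis once the sensitivities are available.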

17.
Compared with ilmenite ore, the titanium resources in vanadium titanomagnetite (VTM) have not yet been effectively recovered due to the relatively low grade of titanium oxides. Therefore, after the extraction of Fe and V from VTM, a large amount of titania slag is currently discarded and landfilled, leading to environmental problems. To address this problem, the titania slag formed after pre-reduction and melt separation of VTM is used as the titanium-containing raw material in this work. Moreover, aluminothermic reduction is adopted and ferrotitanium alloy is chosen as the enrichment end-product of the titanium resources. The effects of the additive amounts of Al, Fe2O3, and CaO on the reaction process and alloy quality are explored by thermodynamic analysis. Furthermore, a ferrotitanium alloy that satisfies the standard requirements is successfully prepared and its grade is improved.

18.
In this paper, we present a novel representation of the human face for estimating the orientation of the human head in a two-dimensional intensity image. The method combines the familiar eigenvalue-based dissimilarity measure with image-based rendering. There are two main components of the algorithm described here: the offline hierarchical image database generation and organization, and the online pose estimation stage. The synthetic images of the subject's face are automatically generated offline, for a large set of pose parameter values, using an affine-coordinate-based image reprojection technique. The resulting database is formally called the IBR (image based rendered) database. This is followed by the hierarchical organization of the database, driven by the eigenvalue-based dissimilarity measure between any two synthetic images. This hierarchically organized database is a detailed, yet structured, representation of the subject's face. During the pose estimation of a subject in an image, the eigenvalue-based measure is invoked again to search for the synthetic (IBR) image closest to the real image. This approach provides a relatively easy first step to narrow down the search space for complex feature detection and tracking algorithms in potential applications such as virtual reality and video-teleconferencing.

19.
It is well known that every Del Pezzo surface of degree 5 defined over a field k is parametrizable over k. In this paper, we give an algorithm for parametrizing, as well as algorithms for constructing examples in every isomorphism class and for deciding equivalence.

20.
Nabesna Glacier is one of the longest land-terminating mountain glaciers in North America; however, its flow has never been studied. We derived detailed motion patterns of Nabesna Glacier in winter and spring 1994–1996 from synthetic aperture radar (SAR) images acquired by the European Remote Sensing satellites (ERS-1 and ERS-2) using interferometric SAR (InSAR) techniques. Special effort was made to assess the accuracy of the motion estimates and to remove data points with high uncertainty from the motion profiles, enabling us to obtain reliable glacier flow patterns along the highly curved main course of Nabesna Glacier. The results, covering 80 km of the 87 km main course of the glacier, were used to delineate four distinctive sections in terms of spatial and temporal variability of the glacier speed: (1) the upper section, where glacier flow was apparently random both temporally and spatially, presumably due to the development of crevasses; (2) the upper-middle section, with relatively steady flow around 0.27 to 0.4 m/day; (3) the middle section, with a stable pattern of acceleration from 0.27–0.3 m/day to a maximum of about 0.67–0.73 m/day, followed by a general deceleration to 0.17–0.33 m/day before reaching (4) the lower section, where the glacier motion was generally slow yet highly variable, although uncertainty in the estimation is high. The occurrence of the flow maximum, as well as many local maxima and minima at consistent locations over different periods, suggests that the valley geometry affects the overall flow pattern. On top of this general trend, many small-scale temporal/spatial variations in the glacier flow patterns were observed along the glacier, especially in the lower sections. On average, the flow speeds were in the range of 0.3 to 0.7 m/day; however, these results lack summer flow-speed measurements, which are expected to be significantly higher.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号