Similar Documents
20 similar documents found (search time: 421 ms)
1.
In Very Long Baseline Interferometry, signals from distant radio sources are recorded simultaneously at different antennas, with the purpose of investigating their physical properties. The recorded signals are generally modeled as realizations of Gaussian processes whose power is dominated by the system noise at the receiving antennas. The actual signal coming from the radio source can be detected only after cross-correlation of the various data streams. The signals received at each antenna are digitized after low-noise amplification and frequency down-conversion, in order to allow subsequent digital post-processing. The applied quantization is coarse, with generally 1 or 2 bits associated with the signal amplitude. In modern applications the sampling is typically performed at a high rate, and subchannels are then generated by filtering, followed by decimation and requantization of the signal streams. The redigitized streams are then cross-correlated to extract the physical observables. While the classical effect of quantization has been widely studied in the past, the decorrelation induced by the filtering and requantization process is still characterized only experimentally, mainly because of its inherent mathematical complexity. In the present work we analyze the above problem and provide algorithms and analytical formulas aimed at predicting the induced decorrelation for a wide class of quantization schemes, with the sole assumption of weakly correlated signals, which is typically fulfilled in VLBI and radio astronomy applications.

2.
With the advent of Internet services, big data and cloud computing, high-throughput computing has generated much research interest, especially on high-throughput cloud servers. However, three basic questions are still not satisfactorily answered: (1) What are the basic metrics (throughput of what, and what counts as high throughput)? (2) What are the main factors most beneficial to increasing throughput? (3) Are there any fundamental constraints, and how high can the throughput go? This article addresses these issues by utilizing the fifty years of progress on Little's law to reveal three fundamental relations among the seven basic quantities: throughput (λ), number of active threads (L), waiting time (W), system power (P), thread energy (E), watts per thread (ψ), and threads per joule (θ). In addition to Little's law L = λW, we obtain P = λE and λ = Lθψ, under reasonable assumptions. These equations help give a first-order estimation of performance and power consumption targets for billion-thread cloud servers.
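A minimal sketch of how these three relations support first-order sizing; the function and all numbers below are illustrative assumptions, not taken from the article.

```python
import math

def server_estimates(active_threads, waiting_time_s, thread_energy_j):
    """First-order server sizing from the three relations quoted above."""
    throughput = active_threads / waiting_time_s    # Little's law: L = throughput * W
    power_w = throughput * thread_energy_j          # P = throughput * E
    psi = power_w / active_threads                  # watts per thread
    theta = 1.0 / thread_energy_j                   # threads per joule (theta = 1/E)
    # consistency check: throughput = L * theta * psi
    assert math.isclose(throughput, active_threads * theta * psi)
    return throughput, power_w

# Example: 10^9 active threads, 1 ms waiting time, 1 microjoule per thread.
lam, p = server_estimates(1e9, 1e-3, 1e-6)
print(f"throughput ~ {lam:.3g} threads/s, power ~ {p:.3g} W")
```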

3.
The snow water equivalent (SWE) for the Red River basin of North Dakota and Minnesota was retrieved from passive microwave data acquired by SSM/I (Special Sensor Microwave Imager) sensors mounted on US Defense Meteorological Satellite Program (DMSP) satellites, together with physiographic and atmospheric data, using an artificial neural network called the Modified Counter Propagation Network (MCPN), Projection Pursuit Regression (PPR), and a nonlinear regression. Airborne gamma-ray measurements of SWE for 1989 and 1997 were used as observed SWE, and SSM/I data at the 19 and 37 GHz frequencies, in both horizontal and vertical polarization, were used for the calibration (1989 data from DMSP-F8) and validation (1997 data from DMSP-F10 and F13, combining both ascending and descending overpass times) of the models. The SSM/I data were screened for the presence of wet snow, large water bodies such as lakes and rivers, and depth hoar. The MCPN model produced encouraging results in both the calibration (C) and validation (V) stages (R² was about 0.9 for both), better than PPR (R² of 0.86 for C and 0.62 for V), which in turn was better than the multivariate nonlinear regression at the calibration stage (R² of 0.78 for C and 0.71 for V). MCPN probably outperforms its linear and nonlinear regression counterparts because of the parallel computing structure resulting from its interconnected neurons and its ability to learn and generalize information from complex relationships, such as the SWE-SSM/I relationship or others encountered in the geosciences.

4.
5.
Basel II imposes regulatory capital on banks related to the default risk of their credit portfolio. Banks using an internal rating approach compute the regulatory capital from pooled probabilities of default (PDs). These pooled probabilities can be calculated by clustering credit borrowers into different buckets and computing the mean PD for each bucket. The clustering problem can become very complex when Basel II regulations and real-world constraints are taken into account. Search heuristics have already demonstrated remarkable performance in tackling this problem. A Threshold Accepting algorithm is proposed which exploits the inherent discrete nature of the clustering problem. This algorithm is found to outperform alternative methodologies already proposed in the literature, such as standard k-means and Differential Evolution. Besides considering several clustering objectives for a given number of buckets, we extend the analysis further by introducing new methods to determine the optimal number of buckets in which to cluster banks' clients.
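A minimal Threshold Accepting sketch for the simplest version of the bucketing task, partitioning sorted PDs into k contiguous buckets to minimize within-bucket squared deviation from the bucket mean; the objective, neighbourhood move and threshold schedule are illustrative assumptions, not the article's exact setup.

```python
# Threshold Accepting: like simulated annealing, but a candidate is accepted
# whenever its cost increase stays below a (decreasing) deterministic threshold.
import random

def bucket_cost(pds, bounds):
    """Sum of squared deviations of PDs from their bucket means."""
    cost, start = 0.0, 0
    for end in list(bounds) + [len(pds)]:
        bucket = pds[start:end]
        if bucket:
            m = sum(bucket) / len(bucket)
            cost += sum((p - m) ** 2 for p in bucket)
        start = end
    return cost

def threshold_accepting(pds, k, steps=20000, t0=1e-4):
    pds = sorted(pds)
    bounds = sorted(random.sample(range(1, len(pds)), k - 1))
    cost = bucket_cost(pds, bounds)
    for step in range(steps):
        thresh = t0 * (1 - step / steps)     # linearly decreasing threshold
        cand = bounds[:]
        i = random.randrange(k - 1)
        cand[i] += random.choice((-1, 1))    # move one boundary by one borrower
        if 0 < cand[i] < len(pds) and cand == sorted(set(cand)):
            c = bucket_cost(pds, cand)
            if c - cost < thresh:            # accept also slightly worse moves
                bounds, cost = cand, c
    return bounds, cost

pds = [random.betavariate(1, 60) for _ in range(500)]   # synthetic borrower PDs
print(threshold_accepting(pds, k=5))
```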

6.
Vector fields are a common concept for the representation of many different kinds of flow phenomena in science and engineering. Methods based on vector field topology are valued for their convenience in visualizing and analysing steady flows, but a counterpart for unsteady flows is still missing, although a lot of good and relevant work aiming at such a solution is available. We give an overview of previous research leading towards topology-based and topology-inspired visualization of unsteady flow, pointing out the different approaches and methodologies involved as well as their relation to each other, taking classical (i.e. steady) vector field topology as our starting point. In particular, we focus on Lagrangian methods, space–time domain approaches, local methods, and stochastic and multifield approaches. Furthermore, we illustrate our review with practical examples for the different approaches.

7.
This paper addresses the problem of constructing reliable interval predictors directly from observed data. Unlike standard predictor models, interval predictors return a prediction interval as opposed to a single prediction value. We show that, in a stationary and independent observations framework, the reliability of the model (that is, the probability that the future system output falls in the predicted interval) is guaranteed a priori by an explicit and non-asymptotic formula, with no further assumptions on the structure of the unknown mechanism that generates the data. This fact stems from a key result derived in this paper, which relates, at a fundamental level, the reliability of the model to its complexity and to the amount of available information (the number of observed data points).
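A toy sketch of the flavour of such distribution-free, non-asymptotic guarantees, using the simplest conceivable interval predictor (the observed sample range) rather than the paper's actual construction: for N i.i.d. draws of a continuous random variable, exchangeability alone implies the next draw falls outside [min, max] with probability exactly 2/(N+1).

```python
# Toy interval predictor: predict that the next output lies in the range of
# the N observed samples. The a priori reliability 1 - 2/(N+1) holds for any
# continuous distribution; this simple rule is ours, not the paper's model.
import random

def interval_predictor(samples):
    return min(samples), max(samples)

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(99)]
lo, hi = interval_predictor(data)
print(f"predicted interval: [{lo:.2f}, {hi:.2f}]")
print(f"a priori reliability: {1 - 2 / (len(data) + 1):.2f}")   # 0.98, distribution-free
```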

8.
With the continuous increase of data generated by Internet-based systems, scaling up to unprecedented amounts, Big Data has emerged as a new research field, coined "Big Data Science". The core of Big Data Science is the extraction of knowledge from data as a basis for intelligent services and decision-making systems; it encompasses many research topics and investigates a variety of techniques and theories from different fields, including data mining and machine learning, information retrieval, analytics and indexing services, and massive processing and high-performance computing. Altogether, the aim is the development of advanced data-aware knowledge-based systems. This special issue presents advances in Semantics, Intelligent Processing and Services for Big Data and their applications to a variety of domains, including mobile computing, smart cities, forensics and medicine.

9.
Despite a decade and a half of research and an impressive body of knowledge on how to represent and process musical audio signals, the discipline of Music Information Retrieval (MIR) still does not enjoy broad recognition outside of computer science. In music cognition and neuroscience in particular, where MIR's contribution could be most needed, MIR technologies are scarcely ever utilized, when they are not simply brushed aside as irrelevant. This, we contend here, is the result of a series of misunderstandings between the two fields, rooted in deeply different methodologies and assumptions that are not often made explicit. A collaboration between a MIR researcher and a music psychologist, this article attempts to clarify some of these assumptions, and offers some suggestions on how to adapt some of MIR's most emblematic signal processing paradigms, evaluation procedures and application scenarios to the new challenges brought forth by the natural sciences of music.

10.
This article presents new feedback actuators that achieve accurate position control of a flexible gantry robot arm. Translational motion in the plane is generated by two DC motors and controlled by applying electric fields to electro-rheological (ER) clutch actuators. During the control of translational motion, however, the flexible arm attached to the moving part produces undesirable oscillations due to its inherent flexibility. These oscillations are actively suppressed by applying a feedback voltage to a piezoceramic actuator attached to the surface of the flexible arm. Consequently, accurate position control at the end-point of the flexible arm can be achieved. To accomplish this control goal, the governing equations of the proposed system are derived and written as transfer functions. The transfer functions are used in the design of a set of robust H∞ controllers. The electric fields applied to the ER clutches and the control voltage for the piezoceramic actuator are determined via the H∞ methodology, incorporated with the classical loop-shaping design technique. To evaluate the effectiveness of the proposed control system, experiments for both regulating and tracking control are undertaken. ©1999 John Wiley & Sons, Inc.

11.
Modern GPUs (Graphics Processing Units) offer very high computing power at relatively low cost. To take advantage of their computing resources and develop efficient implementations, it is essential to have some knowledge of the architecture and the memory hierarchy. In this paper, we use the FFT (Fast Fourier Transform) as a benchmark tool to analyze different aspects of GPU architectures, such as the influence of the memory access pattern or the impact of register pressure. The FFT is a good tool for performance analysis because it is used in many digital signal processing applications and has a good balance between computational cost and memory bandwidth requirements.
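A minimal sketch of such an FFT probe; NumPy stands in here for a GPU FFT library, and the transform sizes and the 5·n·log₂(n) operation-count model are the usual benchmarking conventions, not the paper's exact setup.

```python
# FFT as a benchmark kernel: time transforms of growing size and report an
# effective throughput using the conventional 5*n*log2(n) flop model.
import time
import numpy as np

for n in (2**14, 2**18, 2**22):
    x = (np.random.rand(n) + 1j * np.random.rand(n)).astype(np.complex64)
    t0 = time.perf_counter()
    np.fft.fft(x)
    dt = time.perf_counter() - t0
    gflops = 5 * n * np.log2(n) / dt / 1e9
    print(f"n = 2**{int(np.log2(n))}: {dt * 1e3:8.3f} ms, ~{gflops:6.2f} GFLOP/s")
```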

12.
In the classical synthesis problem, we are given a specification ψ over sets of input and output signals, and we synthesize a finite-state transducer that realizes ψ: with every sequence of input signals, the transducer associates a sequence of output signals so that the generated computation satisfies ψ. In recent years, researchers have considered extensions of the classical Boolean setting to a multi-valued one. We study a multi-valued setting in which the truth values of the input and output signals are taken from a finite lattice, and so is the satisfaction value of specifications. We consider specifications in latticed linear temporal logic (LLTL). In LLTL, conjunctions and disjunctions correspond to the meet and join operators of the lattice, respectively, and the satisfaction values of formulas are taken from the lattice too. The lattice setting arises in practice, for example in specifications involving priorities or in systems with inconsistent viewpoints. We solve the LLTL synthesis problem, where the goal is to synthesize a transducer that realizes the given specification with a desired satisfaction value. For the classical synthesis problem, researchers have studied a setting with incomplete information, where the truth values of some of the input signals are hidden and the transducer should nevertheless realize ψ. For the multi-valued setting, we introduce and study a new type of incomplete information, where the truth values of some of the input signals may be noisy, and the transducer should still realize ψ with the desired satisfaction value. We study the problem of noisy LLTL synthesis, as well as theoretical aspects of the setting, such as the amount of noise a transducer may tolerate, or the effect of perturbing input signals on the satisfaction value of a specification. We prove that the noisy-synthesis problem for LLTL is 2EXPTIME-complete, as is traditional LTL synthesis.

13.
Recent developments in Graphics Processing Units (GPUs) have enabled inexpensive high-performance computing for general-purpose applications. The Compute Unified Device Architecture (CUDA) programming model provides programmers with adequate C-like APIs to better exploit the parallel power of the GPU. Data mining is widely used and has significant applications in various domains. However, current data mining toolkits cannot meet the speed requirements of applications with large-scale databases. In this paper, we propose three techniques to speed up fundamental problems in data mining algorithms on the CUDA platform: a scalable thread scheduling scheme for irregular patterns, a parallel distributed top-k scheme, and a parallel high-dimension reduction scheme. They play a key role in our CUDA-based implementation of three representative data mining algorithms: CU-Apriori, CU-KNN, and CU-K-means. These parallel implementations significantly outperform other state-of-the-art implementations on an HP xw8600 workstation with a Tesla C1060 GPU and a quad-core Intel Xeon CPU. Our results show that the GPU + CUDA parallel architecture is feasible and promising for data mining applications.
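A sketch of the usual two-phase pattern behind a "parallel distributed top-k" scheme: each partition (a thread block on the GPU) selects its local top-k, and the candidates are then merged into a global top-k. NumPy emulates the per-partition work on the CPU here; this illustrates the general pattern, not necessarily the paper's exact scheme.

```python
# Two-phase distributed top-k: local selection per partition, then a merge.
# np.partition performs a partial selection, avoiding a full sort per chunk.
import numpy as np

def distributed_top_k(values, k, parts=8):
    chunks = np.array_split(values, parts)
    # phase 1: local top-k per partition
    local = np.concatenate([np.partition(c, c.size - k)[c.size - k:] for c in chunks])
    # phase 2: merge the parts*k candidates and select the global top-k
    top = np.partition(local, local.size - k)[local.size - k:]
    return np.sort(top)[::-1]

vals = np.random.rand(1_000_000)
print(distributed_top_k(vals, k=5))
```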

14.
Research and Implementation of an Image Processing System Based on Web Services
The development of grid technology, and in particular the introduction of the Web Services Resource Framework (WSRF), provides strong support for implementing large distributed applications in the form of services. In the context of WIP, an image processing system built on Web services, this paper discusses the technologies involved in implementing grid application systems. WIP decomposes system functionality using a multi-tier application model; task scheduling invokes services in a distributed fashion through a UDDI registry, and idle computing resources in the grid environment are used to run image processing services, increasing the speed of image computation. WIP was built around medical images, but the platform is easy to extend to other application domains.

15.
In a first course on classical mechanics, elementary physical processes like elastic two-body collisions, the mass–spring model, or the gravitational two-body problem are discussed in detail. The continuation to many-body systems, however, is deferred to graduate courses, although the underlying equations of motion are essentially the same, and although there is strong motivation for high-school students in particular because of the use of particle systems in computer games. The missing link between the simple and the more complex problems is a basic introduction to solving the equations of motion numerically, which can be illustrated by means of the Euler method. The many-particle physics simulation package MPPhys offers a platform to experiment with simple particle simulations. The aim is to give a basic idea of how to implement many-particle simulations and how simulation and visualization can be combined for interactive visual exploration.
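A minimal sketch of the kind of introduction the abstract points to: one explicit Euler step for N gravitating particles in normalised units (G = 1). The step size and initial conditions are illustrative, and this is not code from the MPPhys package.

```python
# Explicit Euler integration of the gravitational N-body equations of motion.
import numpy as np

def euler_step(pos, vel, mass, dt):
    """Advance positions and velocities by one explicit Euler step."""
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        r = pos - pos[i]                        # vectors from particle i to all others
        d3 = np.sum(r**2, axis=1) ** 1.5        # |r|^3 for the inverse-square law
        d3[i] = np.inf                          # suppress self-interaction
        acc[i] = np.sum(mass[:, None] * r / d3[:, None], axis=0)
    return pos + dt * vel, vel + dt * acc

# Light body on a roughly circular orbit around a heavy one; the energy drift
# of the explicit Euler method becomes visible after a few orbits.
pos = np.array([[0.0, 0.0], [1.0, 0.0]])
vel = np.array([[0.0, 0.0], [0.0, 1.0]])
mass = np.array([1.0, 1e-3])
for _ in range(10000):
    pos, vel = euler_step(pos, vel, mass, dt=0.001)
print(pos[1])
```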

16.
New parallel methods for determining the objective function of the job shop scheduling problem are proposed in this paper, considering the makespan and the sum of job execution times as criteria; the proposed methods can, however, also be applied to other popular objective functions, such as job tardiness or flow time. The Parallel Random Access Machine (PRAM) model is applied for the theoretical analysis of algorithm efficiency. The methods need fine-grained parallelization, so the proposed approach is especially suited to parallel computing systems with fast shared memory (e.g. GPGPU, General-Purpose computing on Graphics Processing Units).
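For reference, a serial sketch of the makespan objective such methods parallelize, assuming a common job-shop encoding: each job is a list of (machine, duration) operations, and a dispatch order lists job indices, one entry per operation. The encoding is a standard one from the job-shop literature, not necessarily the paper's.

```python
# Serial evaluation of the makespan of a job-shop schedule.
def makespan(jobs, order):
    job_ready = [0.0] * len(jobs)       # completion time of each job's last operation
    mach_ready = {}                     # completion time of each machine's last operation
    next_op = [0] * len(jobs)
    for j in order:                     # one entry per operation, e.g. [0, 1, 0, 1]
        machine, dur = jobs[j][next_op[j]]
        start = max(job_ready[j], mach_ready.get(machine, 0.0))
        job_ready[j] = mach_ready[machine] = start + dur
        next_op[j] += 1
    return max(job_ready)

jobs = [[(0, 3), (1, 2)], [(1, 2), (0, 4)]]     # two jobs, two machines
print(makespan(jobs, [0, 1, 0, 1]))             # -> 7.0
```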

17.
We study a new variant of the classical twenty questions game with lies (a.k.a. the Ulam-Rényi game). The Ulam-Rényi game models the problem of identifying an initially unknown m-bit number by asking subset questions, where up to e of the answers may be mendacious. In the variant considered in this paper, we place an additional constraint on the type of questions: the subsets they ask about must be the union of at most k intervals, for some k > 0 fixed beforehand. We show that for any e and m, there exists a k depending only on e such that strategies using k-interval questions are as powerful (in terms of the minimum number of queries needed) as the best strategies using arbitrary membership questions.
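As background, a quick helper for the classical (unconstrained) game: the sphere-packing (Berlekamp) bound says q questions can suffice for an m-bit secret with up to e lies only if 2^q ≥ 2^m · Σ_{j≤e} C(q, j). The sketch below finds the smallest such q; this is the classical bound, not the paper's variant-specific analysis.

```python
# Sphere-packing (Berlekamp) lower bound for the classical Ulam-Renyi game.
from math import comb

def berlekamp_min_questions(m, e):
    q = m
    while 2**q < 2**m * sum(comb(q, j) for j in range(e + 1)):
        q += 1
    return q

print(berlekamp_min_questions(20, 2))   # smallest q passing the bound for m=20, e=2
```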

18.
Based on a recent work by Abraham, Bartal and Neiman (2007), we construct a strictly fundamental cycle basis of length O(n²) for any unweighted graph, thereby proving the conjecture of Deo et al. (1982). For weighted graphs, we construct cycle bases of length O(W · log n log log n), where W denotes the sum of the weights of the edges. This improves the upper bound that follows from the result of Elkin et al. (2005) by a logarithmic factor; for comparison from below, some natural classes of large-girth graphs are known to exhibit minimum cycle bases of length Ω(W · log n). We achieve this bound for weighted graphs by not restricting ourselves to strictly fundamental cycle bases, as is inherent to the approach of Elkin et al., but rather also considering weakly fundamental cycle bases in our construction. This way we profit from some nice properties of Hierarchically Well-Separated Trees, which were introduced by Bartal (1998).

19.
Unlike the connected sum in classical topology, its digital version is shown to have some intrinsic features of its own. In this paper, we study both the digital fundamental group and the Euler characteristic of a connected sum of digital closed k_i-surfaces, i ∈ {0, 1}.

20.
This paper presents a differential optical flow method that accounts for two typical motion-estimation problems: (1) regularizing the flow within regions of uniform motion while (2) preserving sharp edges near motion discontinuities, i.e., where motion is multimodal by nature. The proposed method is a modified version of the well-known Lucas–Kanade (LK) algorithm. While many edge-preserving strategies try to minimize the effect of outliers by using a line process or a robust function, our method takes a novel approach to the problem. Based on documented assumptions, it computes motion with a classical least-squares fit on a local neighborhood shifted away from where motion is likely to be multimodal. In this way, the inherent bias due to multiple motions around moving edges is avoided instead of being compensated for. This edge-avoidance procedure is based on the non-parametric mean-shift algorithm, which shifts the LK integration window away from local sharp edges. Our method also locally regularizes motion by performing a fusion of local motion estimates. The regularization is carried out with a covariance filter that minimizes the effect of uncertainties due in part to noise and/or lack of texture. Our method is compared with other edge-preserving methods on image sequences representing different challenges.
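A minimal sketch of the classical least-squares LK step that the paper modifies; the mean-shift displacement of the integration window is not reproduced here, and the image names and window radius are illustrative.

```python
# Classical Lucas-Kanade: solve Ix*vx + Iy*vy + It = 0 in the least-squares
# sense over a local window, for two grayscale float images I0 and I1.
import numpy as np

def lk_flow_at(I0, I1, y, x, r=7):
    """Estimate the flow (vx, vy) at pixel (y, x) from a (2r+1)^2 window."""
    Iy, Ix = np.gradient(I0)                       # spatial derivatives
    It = I1 - I0                                   # temporal derivative
    win = np.s_[y - r:y + r + 1, x - r:x + r + 1]
    A = np.stack([Ix[win].ravel(), Iy[win].ravel()], axis=1)
    b = -It[win].ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)      # least-squares fit of A v = b
    return v

# Synthetic example: a smooth pattern translated by one pixel along x.
yy, xx = np.mgrid[0:64, 0:64].astype(float)
I0 = np.sin(0.3 * xx) + np.cos(0.2 * yy)
I1 = np.sin(0.3 * (xx - 1.0)) + np.cos(0.2 * yy)   # shifted right by ~1 px
print(lk_flow_at(I0, I1, 32, 32))                  # approx [1.0, 0.0]
```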
