Similar Documents
20 similar documents found.
1.
Research on Internet and Telecommunication Network Technologies (cited 2 times: 1 self-citation, 1 by others)
The rapid growth of the Internet, the dramatic change in its user base, and the major shifts in its external environment have made the original Internet technologies and design philosophy increasingly unable to meet the needs of further development. Where the Internet should go from here has attracted close attention worldwide; the Internet stands at a crossroads. Meanwhile, the strong impact of Internet technology on telecommunication networks has put the telecom industry in a rather passive position. The telecom community has been searching for a development direction and a technological breakthrough that would restore its initiative, but has not yet found one, so the telecommunication network is also at a crossroads. The evolution of both the telecommunication network and the Internet is driving the development of next-generation network technology. What the key technologies of the next-generation network are, whether these key technical problems can be solved, and whether they can be solved in the near term and applied in commercial systems are questions that must be taken seriously and studied.

2.
An automatic silicon wafer direct-bonding apparatus, a vacuum stack-bonding apparatus, a parallel polishing machine and a method for measuring a high level of parallelism of silicon wafers are described. These items are essential to wafer preparation and geometrical quality control for direct-bonding applications.Attention is then turned to silicon direct-bonded to a second compound, which presents a wealth of feasible or potential applications, such as silicon-on-silicon for a permeable-base transistor; silicon-on-insulator, prepared for special silicon integrated circuit applications. In addition, silicon bonded to metal, to interdiffusing solids, to superconductors, diamond, a ferroelectric and to polymers for various (potential) applications is discussed.  相似文献   

3.
Design of signal-adapted multidimensional lifting scheme for lossy coding (cited 5 times: 0 self-citations, 5 by others)
This paper proposes a new method for the design of lifting filters to compute a multidimensional nonseparable wavelet transform. Our approach is stated in the general case and is illustrated for the 2-D separable case and for quincunx images. Results are shown for the JPEG2000 database and for satellite images acquired on a quincunx sampling grid. The design of efficient quincunx filters is a difficult challenge which has already been addressed for specific cases. Our approach enables the design of less expensive filters adapted to the signal statistics to enhance compression efficiency in a more general case. It is based on a two-step lifting scheme and joins lifting theory with Wiener optimization: the prediction step is designed to minimize the variance of the signal, and the update step is designed to minimize a reconstruction error. Application to lossy compression demonstrates the performance of the method.
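As a rough illustration of the signal-adapted lifting idea (a minimal 1-D sketch, not the authors' nonseparable quincunx design; the function name and test signal are invented), the prediction filter below is fitted by least squares, in the spirit of Wiener optimization, so that the detail variance is minimized:

```python
import numpy as np

def adapted_lifting_1d(x):
    """One level of a two-step lifting scheme on a 1-D signal.

    The prediction filter is fitted by least squares (Wiener-style) so that
    the detail (prediction-error) variance is minimized; the update step
    here is the classic 1/4 averaging update, not an optimized one.
    """
    even, odd = x[::2].astype(float), x[1::2].astype(float)
    n = min(len(even) - 1, len(odd))
    # Predict each odd sample from its two even neighbours.
    A = np.column_stack([even[:n], even[1:n + 1]])   # regressors
    a, *_ = np.linalg.lstsq(A, odd[:n], rcond=None)  # LS prediction weights
    detail = odd[:n] - A @ a                         # high-pass (prediction error)
    approx = even[:n] + 0.25 * (np.r_[detail[0], detail[:-1]] + detail)  # update
    return approx, detail, a

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=256))     # a correlated test signal
approx, detail, weights = adapted_lifting_1d(x)
print("prediction weights:", weights, "detail variance:", detail.var())
```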

4.
This paper describes the design of a relational database and associated command structure for an accelerated-life-test facility. The primary goals of the design were to: (1) improve data integrity, and (2) provide a complete and consistent historical record of reliability test results. The intended reader is the engineer who wants to understand the issues involved in automating the reliability-data collection and reporting process. The paper: (1) shows (via the story of a particular reliability database design) in a tutorial fashion how to use system requirements and the formalisms of the relational model to create a database system for engineering tests; and (2) serves as a guide for developing relational database applications for other engineering and production activities. The paper makes three primary contributions by showing: (a) a practical method for constructing a relational database, based on an IDEF0 representation of the testing process, which is easier for the database novice to understand and implement than the usual approaches of data flow or entity-relationship diagrams; (b) the value of a particular database structure for maintaining engineering test database integrity: a table for test measurements, a table for kinds of test measurements, and a table for test measurement procedures; and (c) how to write data entry/reporting application programs that maintain data integrity and ease the user's input burden. While specific issues in reliability testing are discussed, the methodology for constructing the database and user interface applies broadly.
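A minimal sketch of the three-table structure the abstract highlights, using Python's built-in sqlite3; all table and column names are invented for illustration and are not taken from the paper:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE measurement_kind (          -- what can be measured (e.g. leakage current)
    kind_id   INTEGER PRIMARY KEY,
    name      TEXT NOT NULL UNIQUE,
    units     TEXT NOT NULL
);
CREATE TABLE measurement_procedure (     -- how a kind of measurement is taken
    proc_id   INTEGER PRIMARY KEY,
    kind_id   INTEGER NOT NULL REFERENCES measurement_kind(kind_id),
    revision  INTEGER NOT NULL,
    description TEXT
);
CREATE TABLE test_measurement (          -- the actual readings from the life test
    meas_id   INTEGER PRIMARY KEY,
    proc_id   INTEGER NOT NULL REFERENCES measurement_procedure(proc_id),
    unit_serial TEXT NOT NULL,
    taken_at  TEXT NOT NULL,             -- ISO timestamp
    value     REAL NOT NULL
);
""")
conn.execute("INSERT INTO measurement_kind VALUES (1, 'leakage current', 'uA')")
conn.execute("INSERT INTO measurement_procedure VALUES (1, 1, 2, 'probe at 85C/85%RH')")
conn.execute("INSERT INTO test_measurement VALUES (NULL, 1, 'SN-0042', '2024-01-01T12:00', 3.7)")
print(conn.execute("SELECT name, units, value FROM test_measurement "
                   "JOIN measurement_procedure USING (proc_id) "
                   "JOIN measurement_kind USING (kind_id)").fetchall())
```

Keeping the measurement kinds and procedures in their own tables means a reading can only ever reference a defined quantity and a defined procedure revision, which is the integrity property the abstract emphasizes.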

5.
This paper uses an expression for system reliability at a repair depot to construct a nonlinear, nonpolynomial function which is amenable to numerical analysis and has a zero equal to the supportability turnaround time (STAT) for a failed unit. System reliability is in terms of the constant failure rate for all units, number of spares on-hand at the time a unit fails, and projected repair completion dates for up to four unrepaired units. In this context, STAT represents the longest repair time (for a failed unit) which assures a given reliability level; system reliability is the probability that spares are ready to replace failed units during the STAT period. The ability to calculate STAT-values is important for two reasons: (1) subtraction of the repair time for a failed unit from its STAT-value yields the latest repair start-time (for this unit) which assures a desired reliability, and (2) the earlier the latest repair start-time, the higher the priority for starting the repair of this unit. Theorems show the location of STAT with respect to the list of repair completion dates, and form the foundation of the root-finding-based algorithm for computing STAT-values. Numerical examples illustrate the algorithm  相似文献   
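A simplified sketch of the root-finding idea: assuming Poisson spare demand with a constant failure rate and a given number of spares on hand (ignoring the projected completion dates of units already in repair, which the paper's expression includes), bisection finds the longest repair time that keeps reliability above a target:

```python
import math

def spares_reliability(t, lam, spares):
    """P(failures during repair time t <= spares on hand), Poisson demand.

    Simplified stand-in for the paper's reliability expression: it ignores
    the projected completion dates of other units already in repair.
    """
    mean = lam * t
    return sum(math.exp(-mean) * mean**k / math.factorial(k) for k in range(spares + 1))

def stat_value(lam, spares, target, hi=1e6, tol=1e-6):
    """Longest repair time (STAT-like value) keeping reliability >= target (bisection)."""
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if spares_reliability(mid, lam, spares) >= target:
            lo = mid          # still reliable enough: repair may take longer
        else:
            hi = mid
    return lo

# Example: failure rate 0.01/h, 2 spares on hand, 95% required reliability.
print("STAT =", round(stat_value(lam=0.01, spares=2, target=0.95), 1), "hours")
```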

6.
The synchronization of variable-length codes (cited 2 times: 0 self-citations, 2 by others)
Many variable-length codes exhibit a tendency for resynchronization to occur automatically following any error. However, attempts to identify an underlying synchronization mechanism, and to accurately predict the expected synchronization delay, for even quite specific variable-length codes, appear to have been largely unsuccessful. The present paper explores a novel method for estimating the synchronization performance for a wide variety of variable-length codes, based on the T-Codes. T-Codes are a class of self-synchronizing codes, which typically synchronize within 2-3 codewords by a mechanism that derives from a recursive T-augmentation construction. It is observed that the T-Code mechanism for synchronization is followed, more or less, by other variable-length codes wherever substantial numbers of codewords are shared with a T-Code set. T-augmentation itself provides a means for assessing the contribution individual codewords make to the overall synchronization process for a T-Code set. Thus codeword differences between sets may be specifically evaluated to estimate the synchronization performance of a variable-length code set from a closely related T-Code set  相似文献   
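A small simulation of the phenomenon being estimated, using an invented prefix code rather than an actual T-Code set: one bit of an encoded stream is flipped and the script reports where the erroneous parse realigns with the error-free one:

```python
import random

CODE = {"a": "0", "b": "10", "c": "110", "d": "111"}   # a hypothetical prefix code
DECODE = {v: k for k, v in CODE.items()}

def parse_positions(bits):
    """Return the set of bit positions at which codeword boundaries fall."""
    pos, boundaries, buf = 0, set(), ""
    for b in bits:
        buf += b
        pos += 1
        if buf in DECODE:
            boundaries.add(pos)
            buf = ""
    return boundaries

random.seed(1)
msg = "".join(random.choices("abcd", k=2000))
bits = "".join(CODE[s] for s in msg)
corrupt = bits[:50] + ("1" if bits[50] == "0" else "0") + bits[51:]  # flip bit 50
good, bad = parse_positions(bits), parse_positions(corrupt)
resync = min(p for p in bad if p > 50 and p in good)   # first shared boundary after the error
print("decoder realigned at bit", resync, "(error at bit 50)")
```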

7.
It is shown how a simple matrix algebra procedure can be used to induce Schur-type algorithms for the solution of certain Toeplitz and Hankel linear systems of equations when given Levinson-Durbin algorithms for such problems. The algorithm of P. Delsarte et al. (1985) for Hermitian Toeplitz matrices in the singular case is used to induce a Schur algorithm for such matrices. An algorithm due to G. Heinig and K. Rost (1984) for Hankel matrices in the singular case is used to induce a Schur algorithm for such matrices. The Berlekamp-Massey algorithm is viewed as a kind of Levinson-Durbin algorithm and so is used to induce a Schur algorithm for the minimal partial realization problem. The Schur algorithm for Hermitian Toeplitz matrices in the singular case is shown to be amenable to implementation on a linearly connected parallel processor array of the sort considered by Kung and Hu (1983), and in fact generalizes their result to the singular case  相似文献   
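For reference, the classical Levinson-Durbin recursion that the abstract builds on (the nonsingular, symmetric positive-definite Toeplitz case; the paper's Schur-type and singular-case variants are not shown) can be sketched as:

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the symmetric Toeplitz system R a = r[1:order+1], R[i, j] = r[|i - j|],
    by the Levinson-Durbin recursion. Returns the coefficients and the final
    prediction-error power.
    """
    a = np.zeros(order)
    err = r[0]
    for m in range(order):
        acc = r[m + 1] - np.dot(a[:m], r[m:0:-1])   # reflection numerator
        k = acc / err                               # reflection coefficient
        a[:m], a[m] = a[:m] - k * a[:m][::-1], k    # order-update of the coefficients
        err *= (1.0 - k * k)
    return a, err

# Verify against a dense solver on an autocorrelation-generated Toeplitz system.
rng = np.random.default_rng(0)
x = rng.normal(size=4000)
r = np.array([np.dot(x[:4000 - k], x[k:]) / 4000 for k in range(6)])
a, err = levinson_durbin(r, 5)
R = np.array([[r[abs(i - j)] for j in range(5)] for i in range(5)])
print(np.allclose(a, np.linalg.solve(R, r[1:6])))   # expected: True
```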

8.
The goal of this paper is two-fold. First, to establish a tractable model for the underwater acoustic channel useful for network optimization in terms of convexity. Second, to propose a network coding based lower bound for transmission power in underwater acoustic networks, and compare this bound to the performance of several network layer schemes. The underwater acoustic channel is characterized by a path loss that depends strongly on transmission distance and signal frequency. The exact relationship among power, transmission band, distance and capacity for the Gaussian noise scenario is a complicated one. We provide a closed-form approximate model for 1) transmission power and 2) optimal frequency band to use, as functions of distance and capacity. The model is obtained through numerical evaluation of analytical results that take into account physical models of acoustic propagation loss and ambient noise. Network coding is applied to determine a lower bound to transmission power for a multicast scenario, for a variety of multicast data rates and transmission distances of interest for practical systems, exploiting physical properties of the underwater acoustic channel. The results quantify the performance gap in transmission power between a variety of routing and network coding schemes and the network coding based lower bound. We illustrate results numerically for different network scenarios.  相似文献   
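A hedged sketch of the kind of physical model involved: spreading loss plus Thorp's empirical absorption formula, which is commonly used for underwater acoustic path loss but is not necessarily the exact model evaluated in the paper (the spreading exponent and units below are illustrative assumptions):

```python
import numpy as np

def thorp_absorption_db_per_km(f_khz):
    """Thorp's empirical absorption coefficient (dB/km), frequency in kHz."""
    f2 = f_khz ** 2
    return 0.11 * f2 / (1 + f2) + 44 * f2 / (4100 + f2) + 2.75e-4 * f2 + 0.003

def transmission_loss_db(d_km, f_khz, k=1.5):
    """10*log10 A(d, f) = spreading (exponent k) plus distance-scaled absorption."""
    return k * 10 * np.log10(d_km * 1000) + d_km * thorp_absorption_db_per_km(f_khz)

for d in (1, 5, 10, 50):          # km
    losses = [transmission_loss_db(d, f) for f in (1, 10, 30, 100)]   # kHz
    print(f"d = {d:3d} km  loss(dB) @ 1/10/30/100 kHz:",
          " ".join(f"{x:6.1f}" for x in losses))
```

The strong growth of loss with both distance and frequency is what makes the optimal transmission band shrink and shift as the link gets longer, the dependence the paper approximates in closed form.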

9.
Mesh generation in finite-element (FE) method-based electroencephalography (EEG) source analysis generally has a great influence on the accuracy of the results. It is thus important to determine a meshing strategy well adapted to achieving both acceptable accuracy for potential distributions and reasonable computation times and memory usage. In this paper, we propose to achieve this goal by smoothing regular hexahedral finite elements at material interfaces using a node-shift approach. We first present the underlying theory for two different techniques for modeling a current dipole in FE volume conductors: a subtraction and a direct potential method. We then evaluate regular and smoothed elements in a four-layer sphere model for both potential approaches and compare their accuracy. We finally compute and visualize potential distributions for a tangentially and a radially oriented source in the somatosensory cortex in regular and geometry-adapted three-compartment hexahedral FE volume conductor models of the human head, using both the subtraction and the direct potential method. On average, node-shifting reduces both topography and magnitude errors by more than a factor of 2 for tangential and 1.5 for radial sources for both potential approaches. Nevertheless, node-shifting has to be carried out with caution for sources located within or close to irregular hexahedra, because, especially for the subtraction method, extreme deformations might lead to larger overall errors. With regard to realistic volume conductor modeling, node-shifted hexahedra should thus be used for the skin and skull compartments, while we would not recommend deforming elements at the grey and white matter surfaces.

10.
Scheduling algorithms play an important role in TDMA-based wireless sensor networks. Existing TDMA scheduling algorithms address a multitude of objectives, but their adaptation to the dynamics of a realistic wireless sensor network has not been investigated satisfactorily. This is a key issue given the challenges of industrial applications for wireless sensor networks, with their time constraints and harsh environments. In response to these challenges, we present SAS-TDMA, a source-aware scheduling algorithm. It is a cross-layer solution that adapts itself to network dynamics, realizing a trade-off between schedule length and the configuration overhead incurred by rapid responses to route changes. We implemented a TDMA stack in place of the default CSMA stack and introduced a cross-layer scheduling component in TOSSIM, the TinyOS simulator. Numerical results show that SAS-TDMA improves the quality of service for the entire network and achieves significant improvements for realistic dynamic wireless sensor networks when compared to existing scheduling algorithms that aim to minimize latency for real-time communication.
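SAS-TDMA itself is not specified in the abstract; as a baseline illustration of TDMA scheduling, the sketch below greedily assigns slots so that one- and two-hop neighbours never share a slot (a common interference model, not the paper's algorithm; the topology is invented):

```python
from collections import defaultdict

def tdma_schedule(links, nodes):
    """Greedy TDMA slot assignment: two nodes share a slot only if they are
    neither neighbours nor two-hop neighbours. A baseline only; the
    source-aware, cross-layer refinements of SAS-TDMA are not modelled.
    """
    nbr = defaultdict(set)
    for u, v in links:
        nbr[u].add(v)
        nbr[v].add(u)
    def conflicts(u, v):
        return v in nbr[u] or nbr[u] & nbr[v]      # 1-hop or 2-hop neighbours
    slot = {}
    for u in nodes:                                # assign the smallest conflict-free slot
        used = {slot[v] for v in slot if conflicts(u, v)}
        slot[u] = next(s for s in range(len(nodes)) if s not in used)
    return slot

links = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (1, 4)]
print(tdma_schedule(links, nodes=range(6)))
```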

11.
The atomic force microscope (AFM) system has evolved into a useful tool for direct measurements of intermolecular forces with atomic-resolution characterization that can be employed in a broad spectrum of applications. The distance between cantilever tip and sample surface in non-contact AFM is a time-varying parameter even for a fixed sample height, and typically difficult to identify. A remedy to this problem is to directly identify the sample height in order to generate high-precision atomic-resolution images. For this, the microcantilever (which forms the basis for the operation of AFM) is modeled as a single mode approximation and the interaction between the sample and cantilever is derived from a van der Waals potential. Since in most practical applications only the microcantilever deflection is accessible, we will use merely this measurement to identify the sample height. In most non-contact AFMs, cantilevers with high-quality factors are employed essentially for acquiring high-resolution images. However, due to high-quality factor, the settling time is relatively large and the required time to achieve a periodic motion is long. As a result, identification methods based on amplitude and phase measurements cannot be efficiently utilized. The proposed method overcomes this shortfall by using a small fraction of the transient motion for parameter identification, so the scanning speed can be increased significantly. Furthermore, for acquiring atomic-scale images of atomically flat samples, the need for feedback loop to achieve setpoint amplitude is basically eliminated. On the other hand, for acquiring atomic-scale images of highly uneven samples, a simple PI controller is designed to track the desired constant sample height. Simulation results are provided to demonstrate the feasibility of the approach for both sample height identification and tracking the desired sample height.  相似文献   

12.
Trapped energy resonators and transducers have attained a considerable importance in quartz crystal technology both as single-frequency resonators for the control of crystal oscillators and as drivers for the monolithic crystal filter which appears likely to have a wide use as a channel filter for the separation of voice frequency channels for long-distance carrier systems, microwave radio, and submarine cable systems. It is the purpose of this paper to derive the equations for trapped energy resonators of the thickness-shear and thickness-twist types and to calculate the ratios of capacitances for straight crested waves. It turns out that the ratios are lower (coupling higher) than are observed in practice. It appears that this difference is connected with the finite width of the plate which causes the motion at the edge of the plate to be somewhat smaller than the motion in the center of the plate. While no exact solution has been obtained for the finite plate, an approximation is made which is in good agreement with the experiment. The resonator on a plate is a symmetrical device, whereas a transducer for driving a monolithic filter is a dissymmetrical device since it is driving different impedances on its two boundaries. To represent this dissymmetry requires a distributed network representation which is somewhat similar to that found for a plane longitudinal or shear wave except that the propagation constant for a trapped wave replaces that of the plane wave. The representation also requires a transformer whose transformation ratio is a function of the frequency and two negative element terms. By transformations the negative elements can be made to disappear. These together produce an equivalent circuit whose values depend on the ratio of the electrode length to the crystal thickness.  相似文献   

13.
Obtaining a good maintenance strategy for a standby system is discussed. The problem is analyzed via decision theory to determine the waiting time to call the repair facility (for a two-unit standby system) when the first piece of equipment fails. Previous research into this kind of system is briefly described, and a need for constructing a decision model is explained. The uncertainty of the parameters is accounted for in a Bayes approach in order to consider expert prior knowledge. The failure and repair rates are assumed to be constant and are elicited from an expert's prior beliefs. When no data are available, expert guesses are used. A method is presented for solving the conflicting requirements of system availability and cost through a multiattribute utility function which can express cardinal values for the decision maker's preferences over the objective variables. The decision model derives the appropriate maintenance strategy; it corresponds to a set of actions, procedures, and resources, giving a consequent waiting time before calling the repair facility. The use of the model is demonstrated for a telecommunication system  相似文献   

14.
In this paper, orthogonal frequency division multiplexing (OFDM) for time-based range estimation (TBRE) in a separable multipath channel is investigated and analyzed with respect to its accuracy. First, the Cramer–Rao lower bound (CRLB) in a separable multipath channel is theoretically derived, and indicates a similar expression to that for a single path channel. The CRLB for non-data-bearing (NDB) OFDM transmission is compared to that for pseudo-noise (PN) transmission, demonstrating a large performance gap in favor of the NDB OFDM. Furthermore, the maximum likelihood estimator (MLE) for TBRE in a separable multipath channel is theoretically derived, also demonstrating a similar expression to that in a single path channel, except that several peaks instead of one peak are expected in a separable multipath channel corresponding to all arrival paths. The MLE for TBRE is then compared to the commonly used MLE for channel estimation, showing an equal performance in terms of mean square error when using an NDB OFDM transmission. Simulation results demonstrate a good agreement with our proposed theory.  相似文献   
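For orientation, the textbook Cramer-Rao bound for time-delay estimation of a known signal in AWGN can be evaluated numerically; the bandwidth, SNR definition and propagation speed below are illustrative assumptions, and the paper's multipath and NDB-OFDM specifics are not modelled:

```python
import math

def delay_crlb_std(snr_linear, beta_rms_hz):
    """Textbook CRLB for time-delay estimation of a known signal in AWGN:
    var(tau) >= 1 / ((2*pi*beta_rms)^2 * snr), with snr taken as 2E/N0.
    """
    return 1.0 / (2 * math.pi * beta_rms_hz * math.sqrt(snr_linear))

BANDWIDTH = 10e6                       # 10 MHz occupied band (illustrative)
beta_rms = BANDWIDTH / math.sqrt(12)   # rms bandwidth of a flat spectrum
C = 3e8                                # propagation speed (m/s), radio assumption
for snr_db in (0, 10, 20, 30):
    snr = 10 ** (snr_db / 10)
    print(f"SNR {snr_db:2d} dB -> range-error floor "
          f"{C * delay_crlb_std(snr, beta_rms):7.3f} m")
```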

15.
A technique for the analysis and design of noniterative algorithms for discrete-time, band-limited signal extrapolation is described. The approach involves modeling the extrapolation process as a linear, time-varying (LTV) system, or filter. Together with a previously developed Fourier theory for LTV systems, this model provides a frequency-domain transfer function representation for the extrapolation system. This representation serves as a powerful tool for characterizing and comparing the reconstruction properties of several well-known least squares optimal algorithms for band-limited extrapolation. Moreover, the frequency-domain setting provides a conceptually attractive means for understanding the process of extrapolation itself. Additionally, a least squares approximation methodology for designing LTV filters for band-limited extrapolation is developed. The design technique is shown to unify a broad class of algorithms for extrapolating discrete-time data and, further, to provide a means for designing new and improved extrapolation algorithms  相似文献   
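A minimal noniterative example of band-limited extrapolation (a generic least-squares fit of the in-band DFT coefficients to the observed segment, not the paper's LTV-filter design procedure; the signal and sizes are invented):

```python
import numpy as np

def bandlimited_extrapolate(known_vals, known_idx, n_total, n_band):
    """Least-squares, non-iterative extrapolation of a band-limited sequence.

    The length-n_total signal is assumed to live on the n_band lowest DFT
    frequencies; those band coefficients are fitted to the observed samples
    and the full-length signal is then synthesized.
    """
    freqs = np.r_[0:(n_band + 1) // 2, -(n_band // 2):0]          # kept DFT bins
    F = np.exp(2j * np.pi * np.outer(np.arange(n_total), freqs) / n_total)
    coeffs, *_ = np.linalg.lstsq(F[known_idx], known_vals, rcond=None)
    return (F @ coeffs).real

n, band = 128, 7
t = np.arange(n)
x = np.cos(2 * np.pi * 2 * t / n) + 0.5 * np.sin(2 * np.pi * 3 * t / n)  # band-limited
known = np.arange(32, 96)                        # only the middle segment is observed
x_hat = bandlimited_extrapolate(x[known], known, n, band)
print("max extrapolation error:", np.abs(x_hat - x).max())
```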

16.
In this paper, we define a class of generalized guaranteed rate (GR) scheduling algorithms that includes algorithms which allocate a variable rate to the packets of a flow. We define work-conserving generalized virtual clock, packet-by-packet generalized processor sharing, and self-clocked fair queueing scheduling algorithms that can allocate a variable rate to the packets of a flow. We also define scheduling algorithms suitable for servers where packet fragmentation may occur. We demonstrate that if a class of rate controllers is employed for a flow in conjunction with any scheduling algorithm in GR, then the resulting non-work-conserving algorithm also belongs to GR. This leads to the definition of several non-work-conserving algorithms. We then present a method for deriving the delay guarantee of a network of servers when: (1) different rates are allocated to packets of a flow at different servers along the path and the bottleneck server for each packet may be different, and (2) packet fragmentation and/or reassembly may occur. This delay guarantee enables a network to provide various service guarantees to flows conforming to any specification. We illustrate this by utilizing delay guarantee to derive delay bounds for flows conforming to leaky bucket, exponentially bounded burstiness, and flow specification. Our method for determining these bounds is valid in internetworks and leads to tighter results  相似文献   
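As a concrete baseline for the guaranteed-rate idea, the sketch below computes packet-by-packet virtual clock finish tags for a fixed per-flow rate; the paper's generalized variable-rate and fragmentation-aware versions refine this basic tagging (packet values are invented):

```python
import heapq

def virtual_clock_order(packets, rate):
    """Packet-by-packet virtual clock (fixed per-flow rate, single server).

    packets: list of (arrival_time, flow_id, length_bits); rate: bits/s per flow.
    Each packet gets a finish tag F = max(arrival, previous F of its flow)
    + length/rate, and the server transmits in increasing tag order.
    """
    last_tag, tagged = {}, []
    for arrival, flow, length in sorted(packets):
        tag = max(arrival, last_tag.get(flow, 0.0)) + length / rate[flow]
        last_tag[flow] = tag
        heapq.heappush(tagged, (tag, arrival, flow, length))
    return [heapq.heappop(tagged) for _ in range(len(tagged))]

pkts = [(0.0, "A", 1000), (0.0, "B", 1000), (0.001, "A", 1000), (0.002, "B", 500)]
for tag, arrival, flow, length in virtual_clock_order(pkts, rate={"A": 1e6, "B": 5e5}):
    print(f"flow {flow}  arrival {arrival:.3f}s  finish-tag {tag*1e3:.2f} ms")
```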

17.
We determine analytic expressions for the performance of some low-complexity combined source-channel coding systems. The main tool used is the Hadamard transform. In particular, we obtain formulas for the average distortion of binary lattice vector quantization with affine index assignments, linear block channel coding, and a binary-symmetric channel. The distortion formulas are specialized to nonredundant channel codes for a binary-symmetric channel, and then extended to affine index assignments on a binary-asymmetric channel. Various structured index assignments are compared. Our analytic formulas provide a computationally efficient method for determining the performance of various coding schemes. One interesting result shown is that for a uniform source and uniform quantizer, the natural binary code is never optimal for a nonsymmetric channel, even though it is known to be optimal for a symmetric channel  相似文献   
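The quantity being analyzed can also be computed by direct enumeration for a small quantizer; the sketch below evaluates the channel-induced mean squared error of a 3-bit uniform quantizer over a binary symmetric channel for the natural binary code and a Gray code (brute force, not the paper's closed-form Hadamard-transform expressions):

```python
import numpy as np
from itertools import product

def expected_distortion(levels, index_map, p):
    """Mean squared error of a uniform scalar quantizer whose b-bit indices are
    sent over a binary symmetric channel with crossover probability p.
    index_map[i] is the channel codeword (index assignment) for level i;
    source levels are assumed equiprobable.
    """
    b = int(np.log2(len(levels)))
    dist = 0.0
    for i, j in product(range(len(levels)), repeat=2):
        hd = bin(index_map[i] ^ index_map[j]).count("1")         # bits that must flip
        p_ij = (p ** hd) * ((1 - p) ** (b - hd))                 # P(receive j | send i)
        dist += p_ij * (levels[i] - levels[j]) ** 2
    return dist / len(levels)

levels = np.arange(8) / 8 + 1 / 16                # 3-bit uniform quantizer on [0, 1)
nbc = list(range(8))                              # natural binary code
gray = [i ^ (i >> 1) for i in range(8)]           # Gray-coded index assignment
for p in (0.001, 0.01, 0.1):
    print(f"p={p}: NBC {expected_distortion(levels, nbc, p):.5f}  "
          f"Gray {expected_distortion(levels, gray, p):.5f}")
```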

18.
The standard hospital room interface for control of communication and entertainment devices assumes a patient has the ability to hold and press mechanical switches. If the patient does not have these abilities, then the patient must wait for a nurse to walk by the room to ask for help. Mobile devices using the Android and iOS operating systems are compared to accommodate the limitations of a patient for communication and control of the hospital room environment. In order to design an appropriate system, analysis of currently available off-the-shelf technology is performed to find the appropriate configuration that is compatible with the hospital room environment and covers a clear patient need. Evaluation of components for a fully integrated system is shown. Strengths and weaknesses of each technology are discussed. Progress toward an integrated solution on a tablet is provided in the conclusion.  相似文献   

19.
Memory leaks are a common problem in software development, and in large software systems written in C/C++ they are often difficult to find. This paper presents an implementation of a memory-leak detector based on dynamic code instrumentation. In practice, the method has proved simple to use and has a low impact on the software while it is running.
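The paper's detector instruments C/C++ allocations; as an analogous illustration only, Python's standard tracemalloc module can flag allocation sites whose retained memory keeps growing across repeated runs of a workload, the same leak signature an instrumentation tool looks for by tracking unmatched malloc/free pairs:

```python
import tracemalloc

def find_growth(workload, repeat=3, top=3):
    """Run a workload repeatedly and report the allocation sites whose
    retained memory grew the most (a Python analogue, not the paper's tool)."""
    tracemalloc.start()
    baseline = tracemalloc.take_snapshot()
    for _ in range(repeat):
        workload()
    growth = tracemalloc.take_snapshot().compare_to(baseline, "lineno")
    for stat in growth[:top]:
        print(stat)
    tracemalloc.stop()

leaked = []                      # module-level list standing in for a leaking cache
def workload():
    leaked.append(bytearray(1024 * 1024))   # 1 MiB that is never released

find_growth(workload)
```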

20.
Throughput-range tradeoff of wireless mesh backhaul networks (cited 3 times: 0 self-citations, 3 by others)
Wireless backhaul communication is expected to play a significant role in providing the necessary backhaul resources for future high-rate wireless networks. Mesh networking, in which information is routed from source to destination over multiple wireless links, has potential advantages over traditional single-hop networking, especially for backhaul communication. We develop a linear programming framework for determining optimum routing and scheduling of flows that maximizes throughput in a wireless mesh network and accounts for the effect of interference and variable-rate transmission. We then apply this framework to examine the throughput and range capabilities for providing wireless backhaul to a hexagonal grid of base stations, for both single-hop and multihop transmissions for various network scenarios. We then discuss the application of mesh networking for load balancing of wired backhaul traffic under unequal access traffic conditions. Numerical results show a significant benefit for mesh networking under unbalanced loading.  相似文献   
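A toy instance of the linear-programming idea, using scipy.optimize.linprog with invented link rates: one access node can reach the gateway directly or via a relay, all links share one channel, and the LP chooses the airtime split that maximizes throughput:

```python
from scipy.optimize import linprog

# Decision variables: x = [traffic on direct path, traffic on relay path] (Mb/s).
# Direct link runs at 10 Mb/s; each relay hop runs at 30 Mb/s; all links share
# the same channel, so their airtime fractions must sum to at most 1.
c = [-1.0, -1.0]                          # maximize total throughput (minimize negative)
A_ub = [[1 / 10, 1 / 30 + 1 / 30]]        # airtime: direct + both relay hops <= 1
b_ub = [1.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("direct %.1f Mb/s, via relay %.1f Mb/s, total %.1f Mb/s"
      % (res.x[0], res.x[1], -res.fun))
```

With these invented numbers the two-hop path carries everything (15 Mb/s versus a 10 Mb/s ceiling for the direct link), echoing the abstract's point that mesh relaying can outperform single-hop backhaul in some scenarios.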
