1.
Group communication supports information transfer among a set of participants and is becoming increasingly relevant in distributed environments. For distributed or replicated data, it provides efficient communication without overloading the network. For some types of multimedia applications, it is the only way to control data transmission to group members. This paper surveys protocol functions and mechanisms for data transmission within a group, from multicast routing problems up to end-to-end multipoint transmission control. We also provide a bibliography organized by topic.
2.
To improve the efficiency and quality of a service, a network operator may consider deploying a peer-to-peer architecture among controlled peers, also called here nano data centers, which contrast with the churn and resource heterogeneity of peers in uncontrolled environments. In this paper, we consider a prevalent peer-to-peer application: live video streaming. We demonstrate how nano data centers can take advantage of the self-scaling property of a peer-to-peer architecture while significantly improving the quality of a live video streaming service, allowing smaller delays and fast channel switching. We introduce the branching architecture for nano datacenters (BAND), in which a user can “pull” content from a channel of interest, or content can be “pushed” to it for relaying to other interested users. We prove that there exists an optimal trade-off point between minimizing the number of pushes (the number of relaying nodes) and maintaining a robust topology as the number of channels and users grows large, which allows scalability. We analyze the performance of content dissemination as users switch between channels, creating migration of nodes in the tree, while flow control ensures continuity of data transmission. We prove that this peer-to-peer architecture guarantees a throughput independent of the size of the group. Analysis and evaluation of the model demonstrate that pushing content to a small number of relay nodes can yield significant performance gains in throughput, start-up time, playback lag, and channel-switching delay.
3.
The search for more reliable and durable thermal barrier systems is a key factor for the success of future aircraft turbine engines. Hafnia is an attractive ceramic candidate owing to its similarity to zirconia and its elevated structural transformation temperatures. We report here structural and thermomechanical investigations of various plasma-sprayed coatings composed of ZrO2 + x mol% HfO2 (x = 0, 25, 50 and 100), partially stabilized by 4.53 mol% yttria. X-ray diffraction studies show that a metastable, non-transformable, high-yttrium-content tetragonal solid solution is the only phase observed in the as-sprayed samples. This phase is crystallographically equivalent to the t′ phase described for classical yttria-partially-stabilized zirconia (Y-PSZ) thermal barrier coatings (TBCs). Upon high-temperature annealing in air (T = 1200 °C), however, the return of this t′ phase to equilibrium differs from the classical t′ → t + c reaction. According to literature data, reactions of the type t′ → t + c + m should prevail at the highest hafnia contents (x ≥ 50). Indeed, substantial quantities of the monoclinic phase are observed upon cooling. Thermal cycling of TBC samples in air was performed at 1100 °C. The Young's modulus of the ceramic coating, which progressively increases as hafnia is substituted for zirconia, has a strong influence on the thermomechanical resistance of the TBC.
4.
IGP Link Weight Assignment for Operational Tier-1 Backbones
Intradomain routing protocols, such as IS-IS or OSPF, associate a weight (or cost) with each link in order to compute traffic routes. Proposed methods for selecting link weights largely ignore two practical issues: service-level agreement (SLA) requirements and link failures. Optimizing the routing configuration without bounding the SLA can severely violate it, even though the SLA is one of the most important vehicles carriers use to attract new customers. Since most failures are short-lived, it is much more practical not to have to change weight settings during these episodes. In this paper we propose a tabu-search heuristic for choosing link weights that takes both SLA requirements and link failures into account. Our algorithm selects link weights that continue to perform well, without having to be changed, even under failure events. To validate the heuristic, we develop a lower bound based on a formal integer linear program (ILP) model and show that our heuristic solution is within 10% of the optimal ILP lower bound. We study the performance of the heuristic using two operational Tier-1 backbones. Our results illustrate two tradeoffs: between link utilization and the SLA provided, and between performance under failures and performance without failures. We find that performance under transient failures can be dramatically improved at the expense of a small degradation during normal network operation (i.e., no failures), while simultaneously satisfying SLA requirements. We use our algorithm inside a prototype tool to conduct a case study and illustrate how systematic link weight selection can facilitate topology planning.
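As a rough illustration of the kind of tabu-search loop described above, a minimal sketch follows. The move set, sampled neighborhood size, tabu tenure, and the penalty returned by `evaluate` (assumed to combine utilization under normal operation, utilization under single-link failures, and SLA violations) are assumptions of this sketch, not the authors' actual formulation.

```python
import random

def tabu_search_link_weights(links, evaluate, max_weight=20,
                             iterations=500, tabu_tenure=25, seed=0):
    """Hypothetical tabu search over integer IGP link weights.

    `links` is a list of link identifiers; `evaluate(weights)` is assumed to
    return a penalty combining utilization in the failure-free state,
    utilization under single-link failures, and SLA (delay) violations.
    """
    rng = random.Random(seed)
    weights = {l: rng.randint(1, max_weight) for l in links}
    best, best_cost = dict(weights), evaluate(weights)
    tabu = {}  # (link, weight) -> iteration until which re-using that weight is tabu

    for it in range(iterations):
        candidates = []
        # Sampled neighborhood: change the weight of a few randomly chosen links.
        for l in rng.sample(links, min(10, len(links))):
            for w in {rng.randint(1, max_weight) for _ in range(3)}:
                if w == weights[l] or tabu.get((l, w), -1) > it:
                    continue
                trial = dict(weights)
                trial[l] = w
                candidates.append((evaluate(trial), l, w))
        if not candidates:
            continue
        cost, l, w = min(candidates, key=lambda c: c[0])  # best non-tabu move
        tabu[(l, weights[l])] = it + tabu_tenure          # forbid reverting this link for a while
        weights[l] = w
        if cost < best_cost:                              # remember the best solution seen so far
            best, best_cost = dict(weights), cost
    return best, best_cost
```

With an `evaluate` function built from the traffic matrix and the routing under both normal operation and single-link failures, the returned weight setting could then be compared against an ILP lower bound, as the abstract describes.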
5.
Traffic engineering aims to distribute traffic so as to "optimize" some performance criterion. This optimal distribution of traffic depends on both the routing protocol and the forwarding mechanisms in use in the network. In IP networks running the OSPF or IS-IS protocols, routing is over shortest paths, and forwarding mechanisms distribute traffic "uniformly" over equal-cost shortest paths. These constraints often make an optimal distribution of traffic impossible to achieve. In this paper, we propose and evaluate an approach that can realize a near-optimal traffic distribution without changes to routing protocols or forwarding mechanisms. In addition, we explore the tradeoff between performance and the configuration overhead that our solution requires. The paper's contribution lies in formulating and evaluating an approach to traffic engineering in IP networks that achieves near-optimal performance while preserving the existing infrastructure.
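For concreteness, here is a small sketch of the forwarding behaviour the abstract refers to: shortest paths computed from the configured link weights, with traffic split evenly over equal-cost next hops at every node. The graph representation, symmetric weights, and recursive splitting are simplifying assumptions for illustration, not the paper's method.

```python
import heapq
from collections import defaultdict

def distances_to(graph, dst):
    """Shortest-path distance from every node to `dst` under the configured
    link weights; `graph[u]` maps neighbour v -> weight of link (u, v).
    Assumes symmetric weights, so Dijkstra can be run from `dst`."""
    dist = {node: float("inf") for node in graph}
    dist[dst] = 0.0
    heap = [(0.0, dst)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in graph[u].items():
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

def ecmp_loads(graph, src, dst, demand):
    """Split `demand` evenly over equal-cost shortest next hops at every node,
    as OSPF/IS-IS forwarding does, and return the resulting per-link loads."""
    dist = distances_to(graph, dst)
    load = defaultdict(float)

    def push(u, vol):
        if u == dst:
            return
        # Next hops that lie on some shortest path from u to dst.
        nexthops = [v for v, w in graph[u].items()
                    if abs(dist[u] - (w + dist[v])) < 1e-9]
        if not nexthops:
            return
        share = vol / len(nexthops)
        for v in nexthops:
            load[(u, v)] += share
            push(v, share)

    push(src, demand)
    return dict(load)

# Example: a square topology where two equal-cost paths from "a" to "d"
# each carry half of a 10-unit demand.
topo = {
    "a": {"b": 1, "c": 1},
    "b": {"a": 1, "d": 1},
    "c": {"a": 1, "d": 1},
    "d": {"b": 1, "c": 1},
}
print(ecmp_loads(topo, "a", "d", 10.0))
```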
6.
We study the impact of random queueing delays, stemming from traffic variability, on the performance of a multicast session. With a simple analytical model, we analyze the throughput degradation within a multicast (one-to-many) tree under TCP-like congestion and flow control. We use the (max, plus) formalism together with methods based on stochastic comparison (association and convex ordering) and on the theory of extremes to prove various properties of the throughput. We first prove that the throughput predicted by a deterministic model is systematically optimistic. In the presence of light-tailed random delays, we show that the throughput decreases as the inverse of the logarithm of the number of receivers. We derive analytically an upper and a lower bound on the throughput degradation. Within these bounds, we characterize the degradation obtained for various tree topologies. In particular, we observe that a class of trees commonly found in IP multicast sessions is significantly more sensitive to traffic variability than other topologies.
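The scaling result quoted above can be written schematically as follows; the constants c1, c2 stand for the (unspecified) lower and upper bounds on the degradation, and the exact normalization of the throughput is an assumption of this sketch.

```latex
% Schematic form of the scaling stated above: with light-tailed random
% delays, the group throughput rho(N) decays as the inverse logarithm of
% the number of receivers N,
\[
  \rho(N) = \Theta\!\left(\frac{1}{\log N}\right), \qquad N \to \infty,
\]
% i.e. there exist constants 0 < c_1 \le c_2 (the lower and upper bounds
% on the degradation mentioned in the abstract) such that, for all
% sufficiently large N,
\[
  \frac{c_1}{\log N} \;\le\; \rho(N) \;\le\; \frac{c_2}{\log N}.
\]
```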
7.
Integrated layer processing (ILP) is an implementation concept that “permits the implementor the option of performing all the (data) manipulation steps in one or two integrated processing loops”. To estimate the achievable benefits of ILP, a file transfer application with an encryption function on top of a user-level TCP has been implemented, and the performance of the application has been measured in terms of throughput and packet processing times. The results show that it is possible to obtain performance benefits by integrating marshalling, encryption, and TCP checksum calculation. The experiments yielded a throughput gain of only 10-20%, in contrast to the 50% gain achieved for simple loop experiments. Simulations of memory accesses and cache hit rates show that the main benefit of ILP is reduced memory access rather than an improved cache hit rate. ILP reduced the number of memory accesses by up to 30% in the experiment, but the relative number of cache misses could not be reduced compared to a carefully designed non-ILP implementation. The results also show that the characteristics of the data manipulations may significantly influence the cache behavior and the achievable performance gain of ILP. Considering these results, ILP can only be recommended in cases where the ILP loop consists of several, but very simple, data manipulations without complex calculations over the data.
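To make the ILP idea concrete, the following toy sketch contrasts conventional per-layer passes with a single integrated loop; the XOR "encryption", trivial marshalling, and additive checksum are placeholders, not the functions used in the experiments.

```python
def layered(data: bytes, key: int) -> tuple[bytes, int]:
    """Conventional layering: each data manipulation is a separate pass,
    so the payload is read from memory once per layer."""
    marshalled = bytes(data)                        # pass 1: (trivial) marshalling
    encrypted = bytes(b ^ key for b in marshalled)  # pass 2: toy XOR "encryption"
    checksum = sum(encrypted) & 0xFFFF              # pass 3: additive checksum
    return encrypted, checksum

def integrated(data: bytes, key: int) -> tuple[bytes, int]:
    """ILP style: marshalling, encryption and checksumming fused into one loop,
    so each byte is processed while it is still in a register or cache line."""
    out = bytearray(len(data))
    checksum = 0
    for i, b in enumerate(data):
        e = b ^ key       # marshal + encrypt the byte just read
        out[i] = e
        checksum += e     # fold it into the checksum immediately
    return bytes(out), checksum & 0xFFFF

# Both variants compute the same result; only the memory access pattern differs.
assert layered(b"hello world", 0x5A) == integrated(b"hello world", 0x5A)
```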
8.
The objective of this study was to assess the performance of a new pneumatic nebuliser, the NL9 Atomisor. Performance was assessed in terms of particle size distribution, fraction nebulised, fraction inhaled, and the percentage of particles with a diameter between 1 and 4 microns, for the nebulisation of physiological serum, colistin, tobramycin and amiloride. The solutions were nebulised in the approved formula for their reconstitution as used in the clinic, after the addition of sodium pertechnetate; the validity of this indirect isotopic method has been shown previously. During nebulisation, the nebuliser was coupled to a pump respirator with six settings. The fraction nebulised was defined as the percentage of the volume of the solution which had left the aerosol generator at the end of the nebulisation. The fraction inhaled was defined as the percentage of the volume of the solution which was collected at the end of the nebulisation on a filter placed in the inspiratory circuit of the aerosol generator. The distribution of aerosol particle sizes was studied using a ten-stage cascade impactor. Each of these parameters was determined in triplicate for the four solutions studied. The nebulised fraction was between 33.5 and 58.6% (mean 49.7 +/- 8.1%). The inhaled fraction was between 14 and 30.4% (mean 24.5 +/- 5.5%), and the duration of nebulisation was between 10 and 20 minutes. The MMAD ranged from 1.6 microns with tobramycin to 3.5 microns with physiological serum. (ABSTRACT TRUNCATED AT 250 WORDS)
9.
Modeling Internet backbone traffic at the flow level
Our goal is to design a traffic model for non-congested Internet backbone links that is simple enough to be used in network operation while remaining as general as possible. The proposed solution is to model the traffic at the flow level by a Poisson shot-noise process. In our model, a flow is a generic notion that must be able to capture the characteristics of any kind of data stream. We analyze the accuracy of the model with real traffic traces collected on the Sprint Internet protocol (IP) backbone network. Despite its simplicity, our model provides a good approximation of the real traffic observed in the backbone and of its variations. Finally, we discuss the application of our model to network design and dimensioning.
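A minimal simulation sketch of a Poisson shot-noise flow model in the spirit of the abstract follows; the constant per-flow rate, exponential flow durations, and all parameter values are assumptions for illustration, not the paper's fitted model.

```python
import random

def simulate_shot_noise(lam=100.0, mean_duration=1.0, flow_rate=1.0,
                        horizon=10.0, dt=0.01, seed=0):
    """Total link rate over time when flows arrive as a Poisson process of
    intensity `lam` (flows/s) and each active flow contributes `flow_rate`
    for an exponentially distributed duration (a rectangular 'shot')."""
    rng = random.Random(seed)
    # Generate flow arrival times and durations over the observation horizon.
    arrivals = []
    t = 0.0
    while True:
        t += rng.expovariate(lam)
        if t > horizon:
            break
        arrivals.append((t, rng.expovariate(1.0 / mean_duration)))
    # Superpose the rectangular shots on a discrete time grid.
    steps = int(horizon / dt)
    rate = [0.0] * steps
    for start, dur in arrivals:
        i0 = int(start / dt)
        i1 = min(steps, int((start + dur) / dt) + 1)
        for i in range(i0, i1):
            rate[i] += flow_rate
    return rate

if __name__ == "__main__":
    r = simulate_shot_noise()
    # Mean rate should be close to lam * mean_duration * flow_rate = 100.
    print(f"mean rate ~ {sum(r) / len(r):.1f}, peak rate ~ {max(r):.1f}")
```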
10.
A complex medical field such as Child and Adolescent Psychiatry can only meet the expanding demands of quality management through multidisciplinary cooperation. The quality control circle is the central element of the quality control process and of further advances toward quality management. In a critical experience report, the concepts, practical issues and open questions of an ongoing quality management conference at a university hospital are outlined. The report concentrates on practical, process-oriented problems concerning the integration of quality management into clinical practice and research.