Similar Documents
20 similar documents found.
1.
The relationship between the occurrence of simulator sickness (SS) and varying latency in a helmet-mounted display (HMD) was explored in this study. Previous work has treated latency as a constant. The aim of this research was to determine whether a latency that varies over time would affect the experience of SS for HMD users. An object location task was used while viewing real, live video scenes via the HMD. A planned-comparisons approach was used with four experimental conditions, two with constant latency (0 ms and 200 ms added to the system baseline) and two with sinusoidally varying latency (100 ms amplitude at 0.2 Hz, and 20–100 ms varying amplitude at 0.2 Hz). These conditions allowed the effects of constant versus varying latency on the experience of SS to be assessed. The results indicated that varying latency is associated with a greater experience of SS among HMD users than constant latency. They also indicated, as has other recent research, that added constant latency on its own does not appear to be associated with higher levels of SS in an HMD.
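As an illustration of the two latency regimes compared above, the sketch below generates a constant-offset profile and a sinusoidal one. The slow modulation used for the 20–100 ms amplitude condition is a hypothetical choice, since the abstract does not specify how the amplitude varied.

```python
import numpy as np

def latency_profile(t, baseline_ms=0.0, added_ms=0.0,
                    amp_ms=0.0, freq_hz=0.2, amp_mod=None):
    """Latency (ms) at times t (s): baseline + constant offset
    + optional sinusoidal component whose amplitude may itself vary."""
    amp = amp_ms if amp_mod is None else amp_mod(t)
    return baseline_ms + added_ms + amp * np.sin(2 * np.pi * freq_hz * t)

t = np.linspace(0, 60, 6001)                     # one minute, 10 ms steps
constant_200 = latency_profile(t, added_ms=200)  # constant-latency condition
varying_100 = latency_profile(t, amp_ms=100)     # 100 ms amplitude at 0.2 Hz
# hypothetical slow modulation of the amplitude between 20 and 100 ms
varying_20_100 = latency_profile(
    t, amp_mod=lambda t: 60 + 40 * np.sin(2 * np.pi * 0.02 * t))
```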

2.
《Computer Communications》2002,25(11-12):1103-1112
The IP over ATM technique is gaining popularity due to the Quality of Service (QoS) and other management features offered by ATM and its ease of integration with almost all existing technologies. This paper deals with the granularity of implementation of the ATM Adaptation Layer 5 (AAL 5) protocol stack. The focus of this study is to propose an AAL 5 implementation that would outperform existing implementation schemes. The paper also analyzes the performance of the proposed implementation and validates the simulation results using analytic modeling. We also experimented with the simulation model to find the best range of packet size and burst size for the AAL 5 stack. It was found that, to achieve good performance, the minimum recommended packet size is 0.5 KB, which is a reasonable packet size for IP over ATM. Larger packet sizes give better efficiency, but may cause problems with memory requirements; therefore, the upper limit on packet size for an efficient and practical implementation should be 10 KB. Finally, it is very important on the segmentation side for the memory to have fixed latency. If the Segmentation and Reassembly (SAR) interface to memory does not have fixed latency, further cycle loss can be introduced.
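For context on the packet-size recommendation, the sketch below computes AAL 5 encapsulation efficiency from the standard framing (8-byte CPCS trailer, padding to whole 48-byte cell payloads, 53-byte cells). It illustrates why very small packets are wasteful; it is not the paper's performance model.

```python
import math

ATM_CELL = 53       # bytes per ATM cell on the wire
CELL_PAYLOAD = 48   # payload bytes carried per cell
AAL5_TRAILER = 8    # bytes of AAL5 CPCS trailer

def aal5_efficiency(packet_bytes: int) -> float:
    """Fraction of link bytes that carry packet data after AAL5
    encapsulation (trailer plus padding to a whole number of cells)."""
    cells = math.ceil((packet_bytes + AAL5_TRAILER) / CELL_PAYLOAD)
    return packet_bytes / (cells * ATM_CELL)

for size in (64, 512, 1500, 10 * 1024):
    print(f"{size:>6} B packet -> {aal5_efficiency(size):.1%} efficiency")
```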

3.
With the proliferation of video content distributed over the Internet and the progress of video coding (e.g., H.264/AVC) and transmission technologies, more challenges need to be overcome in order to meet the requirements of users with diverse terminals. Video streaming over IP and wireless networks has become a popular topic since the turn of the century. However, there is little work on the quantitative analysis of the objective quality of streaming video. A strict definition of the objective quality and quality variation of scalable video coding (SVC) is therefore required in order to transmit video content efficiently over the Internet and wireless networks and to achieve acceptable subjective quality for end-users. Since FGS (fine granularity scalability) video is coded in bit planes, its enhancement layer can be truncated arbitrarily; taking it as a case study of scalable video coding, this paper introduces an objective quality definition for FGS-coded video based on MSE (mean square error) and PSNR (peak signal-to-noise ratio). The definition can be generalized to any layered scalable video, such as traditional layered video in BL + ELs (base layer + enhancement layers) format or H.264/AVC in BL + CGS (coarse granularity scalability) + FGS structures, and it can be applied to the design of optimal video streaming algorithms. It can also serve as a measure for assessing the subjective quality of streaming video by incorporating user preferences and terminal capabilities. Based on this definition, a quality optimization problem for scenes in video segments is formulated and solved using a state-transfer graph and dynamic programming. The optimal transmission policy is obtained and compared with a real-time transmission algorithm. Different aggregation levels (segmentation granularities) of video segments for optimal transmission are also examined experimentally. Simulation results validate our observations.
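The MSE/PSNR relationship underlying such a quality definition is standard; a minimal per-frame sketch (not the paper's layered formulation) is shown below.

```python
import numpy as np

def psnr(reference: np.ndarray, decoded: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio (dB) between a reference frame and a
    decoded frame, computed from the mean square error."""
    mse = np.mean((reference.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)
```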

4.
This paper presents a simulation study of “Host based autonomous Mobile Address Translation” using the NCTUns simulator. It is a network-layer, end-to-end vertical handover solution based on a modification of “Mobile IP with address Translation”. Vertical handover approaches generally require new network elements, a new layer in the TCP/IP stack, or fixing a protocol at a particular layer. To enhance the handover experience, recent approaches focus on reducing signalling, localizing registration, creating hierarchies, using proxies, preparing handover in advance, predicting the target network, or exploiting multicasting and path-extension techniques. These approaches, however, demand changes in the network infrastructure to support mobility and limit the scope of mobility. Despite end-to-end signalling, the Host based autonomous Mobile Address Translation scheme ensures minimum service disruption and allows global mobility of the mobile node without requiring any modification to the network. We have simulated the mobility of a multi-interface mobile node in a heterogeneous network environment composed of WiFi (IEEE802.11a, IEEE802.11b) and WiMAX (IEEE802.16e) access networks. The performance of the scheme is evaluated taking into account a wide range of end-to-end delays between the mobile node and the correspondent node, various speeds of the mobile node, and different packet loss rates of the network. Based on our detailed simulation study, the scheme offers reduced service disruption time, packet loss and packet latency. The service disruption time is significantly low (typically in the range of 10 ms) compared to that of Mobile IP (which is of the order of 100 ms), making the scheme well suited for real-time applications. The low service disruption time in turn reduces packet loss manyfold, and packet latency remains unaffected during and after handover because addresses are translated at the source. The results suggest that this protocol is a viable vertical handover solution due to its simplicity, scalability, low overhead and ready deployability.

5.
In addition to an unprecedented 600 Mb/s physical data rate in upcoming standards, with greater than 1 Gb/s being considered for future systems, 802.11 has evolved from its earlier incarnations to become a ubiquitous, high-throughput wireless access technology used in heterogeneous networks. Although capable of advanced QoS provisioning, commercial 802.11 implementations often support only a subset of the QoS specifications, use manufacturer-specific QoS enhancements, or use sub-optimal MAC reference specifications with limited capability for achieving optimal system throughput and QoS provisioning. Although more efficient ARQ modes have been defined in the 802.11e and imminent 802.11n amendment standards, opportunities exist within this framework for further optimization through dynamic adaptation of key ARQ-related parameters, which is out of scope of the current 802.11 standard specifications. Considering such opportunities, this article presents a novel adaptive ARQ scheme designed to improve the quality and reliability of multimedia transmission through real-time adaptation of the maximum packet burst size and the ARQ mode employed. Comprehensive simulation studies show that this scheme can improve the QoS and throughput performance of multimedia traffic in both existing and future 802.11 wireless LANs.
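The kind of ARQ-parameter adaptation described above can be pictured as a simple feedback rule on the burst size. The sketch below is illustrative only: the thresholds and the doubling/halving policy are assumptions, not the article's algorithm.

```python
def adapt_burst_size(current_burst: int, loss_rate: float,
                     low: float = 0.01, high: float = 0.05,
                     min_burst: int = 1, max_burst: int = 64) -> int:
    """Grow the packet burst size while the channel is clean, shrink it
    when (block-)ACK feedback reports heavy loss, otherwise hold steady."""
    if loss_rate < low:
        return min(max_burst, current_burst * 2)
    if loss_rate > high:
        return max(min_burst, current_burst // 2)
    return current_burst
```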

6.
In this paper, the energy consumption of high-speed access services of up to 1 Gb/s per customer is estimated for different passive optical network (PON) technologies. While other studies on PON power consumption typically assume a fixed split ratio, we also consider a greenfield approach, where the split ratio can be optimized for each technology, taking full advantage of its capacity and reach. The split ratio optimization takes into account Quality of Service (QoS) in terms of bandwidth availability and packet loss for triple-play services (voice, television and Internet). The paper includes an in-depth discussion of our split ratio dimensioning approach and our power consumption model for an optical access network in a major city. The results show that the statistical gain provided by dynamic bandwidth allocation, as well as power splitting ratio optimization in PONs, are key factors for achieving energy efficiency. For access rates up to 900 Mb/s, XG-PON1 turns out to be the most energy-efficient option. For higher access rates up to 1 Gb/s, the optimal technology depends on split ratio restrictions. If an existing optical distribution network (ODN) with a split ratio of 1:64 is used, XG-PON1 remains the most energy-efficient technology. If higher split ratios up to 1:256 can be achieved, TWDM PON becomes the most energy-efficient solution for access rates up to 1 Gb/s.
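To see why the split ratio matters for energy, consider a toy per-customer power model in which the shared OLT port power is divided across the split and each customer adds an ONU. The sketch and the numbers in it are purely hypothetical and are not the paper's dimensioning model.

```python
def energy_per_user_w(p_olt_port_w: float, p_onu_w: float, split_ratio: int) -> float:
    """Shared OLT port power divided over the split, plus the per-customer ONU."""
    return p_olt_port_w / split_ratio + p_onu_w

# hypothetical numbers, purely to illustrate the split-ratio effect
for split in (32, 64, 128, 256):
    w = energy_per_user_w(p_olt_port_w=20.0, p_onu_w=5.0, split_ratio=split)
    print(f"1:{split} split -> {w:.2f} W per user")
```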

7.
《Computer Networks》2008,52(7):1343-1364
We present the implementation of a large-scale latency estimation system based on GNP and incorporated into the Google content delivery network. Our implementation employs standard features of contemporary Web clients, and carefully controls the overhead incurred by latency measurements using a scalable centralized scheduler. It also requires only a small number of CDN modifications, which makes it attractive for any CDN interested in large-scale latency estimation. We investigate the issue of coordinate stability over time and show that coordinates drift away from their initial values, so that 25% of node coordinates become inaccurate by more than 33 ms after one week. However, daily re-computation keeps 75% of the coordinates within 6 ms of their initial values. Furthermore, we demonstrate that using coordinates to decide on client-to-replica re-direction leads to selecting the replica closest in terms of measured latency in 86% of all cases. In another 10% of cases, clients are re-directed to replicas offering latencies at most two times longer than optimal. Finally, collecting a huge volume of latency data and using clustering techniques enable us to estimate latencies between globally distributed Internet hosts that have not participated in our measurements at all. The results are sufficiently promising that Google may offer a public interface to the latency estimates in the future.
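In GNP-style coordinate systems, the latency between two hosts is approximated by the Euclidean distance between their network coordinates, which is what makes cheap client-to-replica selection possible. The coordinates below are hypothetical placeholders.

```python
import math

def estimated_latency_ms(coord_a, coord_b) -> float:
    """GNP-style estimate: Euclidean distance between two hosts' coordinates."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(coord_a, coord_b)))

# hypothetical 3-dimensional coordinates for a client and two replicas
client = (12.0, -3.5, 40.2)
replicas = {"replica-eu": (15.0, -1.0, 38.0), "replica-us": (80.0, 10.0, -5.0)}
best = min(replicas, key=lambda r: estimated_latency_ms(client, replicas[r]))
print(best)
```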

8.
Motion is a key feature for a wide class of computer vision approaches to action recognition. In this article, we show how to define bio-inspired features for action recognition. To do so, we start from a well-established bio-inspired motion model of cortical areas V1 and MT. The primary visual cortex, designated V1, is the first cortical area encountered in visual stream processing, and the early responses of V1 cells are modeled as tiled sets of selective spatiotemporal filters. The second cortical area of interest in this article is area MT, where MT cells pool incoming information from V1 according to the shape and characteristics of their receptive fields. To go beyond the classical models, and following the observations of Xiao et al. [61], we propose to model different surround geometries for MT cell receptive fields. We then define the so-called bio-inspired features associated with an input video, based on the average activity of MT cells. Finally, we show how these features can be used in a standard classification method to perform action recognition. Results are given for the Weizmann and KTH databases. Interestingly, we show that the diversity of motion representation at the MT level (different surround geometries) is a major advantage for action recognition. On the Weizmann database, the inclusion of different MT surround geometries improved the recognition rate from 63.01 ± 2.07% up to 99.26 ± 1.66% in the best case. Similarly, on the KTH database, the recognition rate was significantly improved by including different MT surround geometries (from 47.82 ± 2.71% up to 92.44 ± 0.01% in the best case). We also discuss the limitations of the current approach, which are closely related to the input video duration. These promising results encourage us to further develop bio-inspired models incorporating other brain mechanisms and cortical areas in order to deal with more complex videos.

9.
This paper presents three novel methods for implementing variable data rate (VDR) network congestion control (NCC) and compares them with traditional random (RND) drop methods. The proposed methods are: frequency (FRQ), combined random-innerloop frequency-outerloop (CRFRQ), and combined frequency-innerloop random-outerloop (CFRND). A generic model is used for RND drop that approximates congestion-avoidance methods that drop packets. The proposed methods enable NCC at intermediate network nodes using novel ways of setting data priority values, and are a first step towards demonstrating the need for stronger and more efficient router technology to enhance Quality of Service (QoS) for voice applications. In the examples considered in this paper, with five incoming flows passing through a bandwidth-limited threshold of 1000 bits, the FRQ method reduced outage from 60% under RND to 0%. The Mean Opinion Score (MOS) increased by between 0.2 and 1 point on the MOS scale when using FRQ and CRFRQ instead of RND in the tested scenarios. CFRND performs similarly to RND except that it allows packet shrinking to increase network utilization. In one example where all possible outcomes for RND were analyzed, network utilization with the novel methods increased by up to 37.5% over the RND method. Computationally, the novel methods run from 0.02 ms faster for 1 flow to 0.56 ms faster for 10 flows than traditional lossy compression methods that rely on FFT operations, which are commonly found in lossy voice coders such as Speex, MELPe, and others. Finally, live real-time transport protocol (RTP) congestion tests confirmed the simulation results that the proposed network congestion control methods achieve higher MOS scores and better QoS.
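The contrast between random dropping and a frequency-based alternative can be pictured with a toy drop policy. This sketch only shows the random-versus-evenly-spaced distinction; it is not the paper's priority-setting FRQ algorithm.

```python
import random

def should_drop_random(drop_prob: float) -> bool:
    """RND-style drop: each packet is dropped independently at random."""
    return random.random() < drop_prob

class FrequencyDropper:
    """Illustrative FRQ-style drop: drop every k-th packet so that drops
    are evenly spaced instead of clustering by chance."""
    def __init__(self, k: int):
        self.k, self.count = k, 0
    def should_drop(self) -> bool:
        self.count += 1
        return self.count % self.k == 0
```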

10.
Network processor technology has advanced to the point where high-precision time-based store-and-forward logic is readily incorporated into packet switches and routers. With appropriate scheduling, packets from multiple flows can be serviced without contending for link resources. Accordingly, packet flows traversing a network of switching elements can have both path and time determinacy attributes, which support ideal end-to-end QoS (zero jitter, zero loss, acceptable end-to-end latency) for real-time UDP packet flows and guaranteed goodput for TCP flows. One approach to packing a network with a relatively large number of such deterministic flows, i.e. achieving high availability of the ideal QoS service in a network, uses precise buffering of packets at each switch, which introduces latency. This paper describes analysis methods for quantifying how much buffering may be necessary to achieve high (99.999%) availability. For typical network topologies the analysis shows that the buffering latency requirements are very small compared to transport delays, even when the network is highly utilized with heterogeneous (e.g. voice, video, circuit emulation, and data) traffic. Actual physical implementations have empirically validated the analysis results as well as the scalability of the end-to-end, time-based forwarding approach and the end-to-end availability of ideal QoS services in IP packet networks.

11.
This paper experimentally investigates the role of visual complexity (VC) and prototypicality (PT) as design factors of websites, shaping users' first impressions, by means of two studies. In the first study, 119 screenshots of real websites varying in VC (low vs. medium vs. high) and PT (low vs. high) were rated on perceived aesthetics. Screenshot presentation time was varied as a between-subject factor (50 ms vs. 500 ms vs. 1000 ms). Results reveal that VC and PT affect participants' aesthetic ratings within the first 50 ms of exposure. In the second study, presentation times were shortened to 17, 33 and 50 ms. Results suggest that VC and PT affect aesthetic perception even within 17 ms, though the effect of PT is less pronounced than that of VC. With increasing presentation time the effect of PT becomes as influential as that of VC. This supports the reasoning of the information-processing stage model of aesthetic processing (Leder et al., 2004), in which VC is processed at an earlier stage than PT. Overall, websites with low VC and high PT were perceived as highly appealing.

12.
钱琛  陈耀武 《计算机工程》2012,38(18):268-272
In embedded video-on-demand, stream transmission involves conflicts among playback smoothness, synchronization quality and resource utilization. To address this, an optimized stream transmission and synchronization scheme is proposed. An audio-video split synchronized-sending algorithm is adopted to transmit the streams in sync; an adaptive audio jitter-buffer algorithm and a video de-jitter algorithm remove audio and video jitter and reduce the packet loss rate; audio and video are re-synchronized a second time before decoding; and network congestion control is implemented using a buffer-based additive-increase/multiplicative-decrease (AIMD) control algorithm together with the real-time streaming protocol (RTSP). Test results show that the scheme keeps intra-stream synchronization delay within 100 ms and inter-stream synchronization delay within 125 ms, improving stream smoothness and system resource utilization.
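The buffer-based additive-increase/multiplicative-decrease control mentioned above follows the generic AIMD pattern. The sketch below shows that pattern only, with hypothetical step sizes, and omits the paper's specific buffer coupling.

```python
def aimd_update(rate_kbps: float, congested: bool,
                add_step: float = 50.0, mult_factor: float = 0.5,
                min_rate: float = 100.0, max_rate: float = 8000.0) -> float:
    """Additive-increase/multiplicative-decrease: probe available bandwidth
    linearly, back off multiplicatively when buffer or loss feedback
    signals congestion."""
    if congested:
        return max(min_rate, rate_kbps * mult_factor)
    return min(max_rate, rate_kbps + add_step)
```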

13.
Kam Leung Yeung  Li Li 《Displays》2013,34(2):165-170
We have previously shown that concurrent head movements impair head-referenced image motion perception when compensatory eye movements are suppressed (Li, Adelstein, & Ellis, 2009) [16]. In this paper, we examined the effect of the field of view on perceiving world-referenced image motion during concurrent head movements. Participants rated the motion magnitude of a horizontally oscillating checkerboard image presented on a large screen while making yaw or pitch head movements, or holding their heads still. As the image motion was world-referenced, head motion elicited compensatory eye movements from the vestibulo-ocular reflex to maintain gaze on the display. The checkerboard image had either a large (73°H × 73°V) or a small (25°H × 25°V) field of view (FOV). We found that perceptual sensitivity to world-referenced image motion was reduced by 20% during yaw and pitch head movements compared to the veridical levels when the head was still, and this reduction did not depend on the display FOV size. Reducing the display FOV from 73°H × 73°V to 25°H × 25°V caused an overall underestimation of image motion by 7% across the head-movement and head-still conditions. We conclude that observers have reduced perceptual sensitivity to world-referenced image motion during concurrent head movements, independent of the FOV size. The findings are applicable to the design of virtual environment countermeasures that mitigate the perception of spurious motion arising from head-tracking system latency.

14.
This paper proposes an integrated system for the segmentation and classification of four classes of moving objects (pedestrians, cars, motorcycles, and bicycles) from their side-views in a video sequence. Based on the use of an adaptive background in the red–green–blue (RGB) color model, each moving object is segmented with its minimum enclosing rectangle (MER) window using a histogram-based projection approach or a tracking-based approach. Additionally, a shadow removal technique is applied to the segmented objects to improve classification performance. For MER windows of different sizes, a window scaling operation followed by an adaptive block-shifting operation is applied to obtain a fixed feature dimension. A weight mask, constructed according to the frequency of occurrence of an object in each position within a square window, is proposed to enhance the distinguishing pixels in the rescaled MER window. To extract classification features, a two-level Haar wavelet transform is applied to the rescaled MER window. Local shape features and a modified histogram of oriented gradients (HOG) are extracted from the level-two and level-one sub-bands, respectively, of the wavelet-transformed space. A hierarchical linear support vector machine classification configuration is proposed to classify the four classes of objects. Six video sequences are used to test the classification performance of the proposed method. The computer processing times of the object segmentation, object tracking, and feature extraction and classification stages are 79 ms, 211 ms, and 0.01 ms, respectively. Comparisons with well-known classification approaches verify the superiority of the proposed classification method.
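A minimal sketch of one level of the 2D Haar wavelet decomposition that produces the sub-bands from which such features are taken. The normalization shown here and the exact HOG variant applied to the level-one sub-bands are assumptions; the paper defines its own.

```python
import numpy as np

def haar2d_level(img: np.ndarray):
    """One level of the 2D Haar wavelet transform: split an even-sized
    grayscale window into approximation (LL) and detail (LH, HL, HH) sub-bands
    using pairwise averages and differences."""
    a = img.astype(np.float64)
    # horizontal pass: average / difference of adjacent pixel pairs in each row
    lo_r = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi_r = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # vertical pass on the row-filtered outputs
    ll = (lo_r[0::2, :] + lo_r[1::2, :]) / 2.0
    lh = (lo_r[0::2, :] - lo_r[1::2, :]) / 2.0
    hl = (hi_r[0::2, :] + hi_r[1::2, :]) / 2.0
    hh = (hi_r[0::2, :] - hi_r[1::2, :]) / 2.0
    return ll, lh, hl, hh

# a second decomposition level would apply haar2d_level again to the LL sub-band
```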

15.
Digital video is becoming increasingly popular in higher education, with faculty digitally recording and broadcasting lectures for students to learn on demand, for example via iTunes University or YouTube. Students have discovered accelerated playback features in popular computer software and use them to reduce the amount of time spent watching video-enhanced instruction. In the current study, 147 undergraduate students were randomly assigned to one of six video treatments based on a 3 Video Speed (1.0 = Normal vs. 1.25 = Fast vs. 1.50 = Very Fast) × 2 Captions (Captions Present vs. Captions Absent) × 2 Trial (Trial 1 vs. Trial 2) design. Results show no significant difference in learner performance across treatments based on Video Speed. Captions were found to have a significant negative effect on learner performance. A significant difference was found in learner satisfaction in favor of the normal Video Speed. The findings suggest that learners may be able to accelerate video speeds up to 1.5 times normal, but are generally less satisfied with the learning experience.

16.
In this paper an initially open piezoelectric valve was designed in low temperature co-fired ceramic (LTCC), manufactured using standard processes, and tested with gas channels integrated inside the LTCC module. Actuation of the valve was based on a piezoelectric unimorph with a diameter of 15 mm and a thickness of 0.35 mm glued onto the fired LTCC substrate. A series of tests, including flow, displacement and switching time measurements, was then carried out. Measurements of the valve revealed a flow of 143 ml/min under 1 bar pressure, leakage levels of 4%, a valve displacement of 1.3 μm, and closing times of less than 30 ms. Further miniaturization and integration of an embedded valve in the LTCC will be pursued, enabling improved manufacturing as a batch process and micro- and nano-litre fluid management for various applications.

17.
Piezoresistive composites with high hardness and conductivity are required for circuit breakers for multi-cycle operation under large current flow. Based on simulation results for the mechanical behavior of piezoresistive composites, we developed piezoresistive composites from conductive TiB2 ceramic materials and silicone rubber. TiB2 up to 70 vol.% was embedded into the polymer matrix without any mechanical deterioration, while the electrical resistance decreased with increasing TiB2 content. Piezoresistive composites with 70 vol.% TiB2 particles exhibited a resistance of 1.7 Ω at a pressure of 1.1 MPa. A circuit breaker with the fabricated piezoresistive composites acted as a switch with a response time of around 2 ms.

18.
In service-oriented scenarios, applications are created by composing atomic services and exposing the resulting added-value logic as a service. When several alternative service providers are available for composition, quality of service (QoS) properties such as execution time, cost, or availability are taken into account to make the choice, leading to the creation of QoS-aware composite web services. Finding the set of service providers that results in the best QoS is an NP-hard optimization problem. This paper presents QoS-Gasp, a metaheuristic algorithm for performing QoS-aware web service composition at runtime. QoS-Gasp is a hybrid approach that combines GRASP with Path Relinking. For the evaluation of our approach we compared it with related metaheuristic algorithms found in the literature. Experiments show that when results must be available in seconds, QoS-Gasp improves the results of previous proposals by up to 40%. Besides this, QoS-Gasp found better solutions than any of the compared techniques in 92% of the runs when results must be available in 100 ms; i.e., it provides compositions with a better QoS, implying cost savings, increased availability and reduced execution times for the end-user.
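For readers unfamiliar with GRASP, the skeleton below shows the generic construct-then-improve loop with a restricted candidate list. It is a sketch under assumed inputs (a per-provider QoS score and a caller-supplied local search) and omits the Path Relinking stage that QoS-Gasp adds.

```python
import random

def qos_of(solution, qos):
    """Aggregate QoS of a composition; simply additive here for illustration."""
    return sum(qos(provider) for provider in solution)

def grasp(candidates_per_task, qos, local_search, iterations=100, alpha=0.3):
    """Generic GRASP skeleton: repeat greedy-randomized construction plus
    local search, keeping the best provider-to-task binding found."""
    best, best_q = None, float("-inf")
    for _ in range(iterations):
        solution = []
        for providers in candidates_per_task:
            scored = sorted(providers, key=qos, reverse=True)
            # restricted candidate list: top alpha fraction, chosen at random
            rcl = scored[:max(1, int(alpha * len(scored)))]
            solution.append(random.choice(rcl))
        solution = local_search(solution)
        q = qos_of(solution, qos)
        if q > best_q:
            best, best_q = solution, q
    return best
```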

19.
This paper presents the techniques used for the characterisation of a new type of standing-wave piezoelectric ultrasonic motor. The motor uses a metallic flextensional amplifier, or “cymbal”, to convert the radial-mode vibrations of a piezoelectric ceramic disc into flexural oscillations, which are further converted into rotary actuation by means of an elastic fin friction drive. The motor operates on a single-phase electrical supply. A beryllium copper rotor design with a three-fin configuration was adopted. The best stall torque, no-load speed, transient time and efficiency for a 25 mm motor were 2 N mm, 680 rpm, 2 ms and 4.8%, respectively. The operational characteristics of the motor were evaluated using two methods: one based on the pulley–brake principle and one on high-speed imaging. The results obtained from these two techniques are compared and contrasted.

20.
The study examines the effect of four important aspects of film skimming: the segmentation process, the proportion of total skimmed length (TSL), the multiple cues available, and the genre/domain of the film. We design three experiments to explore their effects on the representativeness of video skims. The results of Experiment 1 show that skimmed video combining 10% of total skimmed length with 5 or 10 s skimmed segments (SS) is more efficient in terms of representativeness. The results of Experiment 2 show that skimmed video drawn mostly from the ending part, with multiple cues, significantly improves representativeness. The results of Experiment 3 reveal that the representativeness of skimmed video differs significantly across types of movie. In our experiments, the proportion of TSL is set to three levels, 5%, 10%, and 15%, while the size of SS is also set to three levels, 2.5, 5, and 10 s, for the segmentation process. We observe that skimmed video with longer TSL and SS better represents the movie content, but the four combinations of 10% and 15% with 5 s and 10 s are not significantly different. This finding is helpful for reducing the time cost of skimming video. Furthermore, we applied two important factors from media richness theory, the personality focus of the medium and multiple cues, to our skimming method in order to raise the representativeness of video skims for different films. For the personality focus of the medium, we define a movie as having three parts: beginning, middle, and ending. For multiple cues, skimmed video with synchronized subtitles, audio, and video can assist comprehension and reduce uncertainty. We find that skimmed video drawn mostly from the ending part, with synchronized subtitles, audio, and video, can raise the representativeness of the movie content.
