This article begins by looking at changes in the student body in recent years as useful indicators of how libraries need to alter and adapt their student library provision. Among the concepts explored are the student as consumer-customer, the Google generation, ever-greater competition among students, especially in the job market, new technology as fashion accessory as much as learning tool, the widening participation agenda for higher education, and the almost ubiquitous presence of multi-functional electronic devices. Some challenges for librarians in meeting the needs and demands of this new generation of students are then examined. These include balancing electronic and print provision, the future of the physical library environment, using push rather than pull to deliver library services, anytime-anywhere access to information, integration with other university services, and helping to fit students for the world of work. The main message of the article is that librarians are not under threat from the giant search engines as long as they embrace the interactive technologies which students so willingly and expertly use, and adapt them to get library services out to students as customers.
Abstract This paper describes the making of a short film on the Xian terracotta soldiers using our integrated HUMANOID software. The method of creating and animating the soldiers' faces is presented first. We then show how our approach, based on metaballs and spline surfaces, was used to design and deform the soldiers' bodies. For the animation of the bodies, we describe the motion control methods. Clothes for the soldiers are then described, as well as the horses and the decor design. For the rendering, we explain our strategy using parallel machines. Finally, problems of integration are addressed.
Streamflow from the mountains is the main source of water for the lower plains in arid regions, so accurate simulation of streamflow is of great importance to arid ecosystems. However, many large arid drainage basins in northwestern China have a low density of precipitation stations, which makes streamflow modeling and prediction very difficult. Two approaches to spatializing precipitation over large areas with scarce raingauges were explored: one based on raingauge data alone, and one based on Tropical Rainfall Measuring Mission (TRMM) data combined with raingauge data. The spatialized precipitation was then input into the Soil and Water Assessment Tool (SWAT), a semi-distributed hydrological model, to simulate streamflow. Results from a case study in the Manas river basin showed that the simulated hydrographs from both approaches are able to reproduce the watershed's hydrological behavior. Moreover, statistical assessment indicated that the hydrological model driven by the spatialized precipitation based on radar combined with raingauge data performed better than that based on gauge data alone. Radar precipitation estimates can thus provide a practical data source for hydrological modeling at basin scale where the raingauge network is sparse.
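The abstract does not specify how gauge observations were spatialized onto the model grid; a common baseline for interpolating sparse raingauge data is inverse-distance weighting (IDW). The sketch below is a generic illustration of that idea, not the paper's method; the gauge coordinates and precipitation values are hypothetical.

```python
import numpy as np

def idw_interpolate(gauge_xy, gauge_precip, grid_xy, power=2.0):
    """Inverse-distance-weighted precipitation estimate at grid points."""
    # Pairwise distances between grid cells and gauges, shape (m, n)
    d = np.linalg.norm(grid_xy[:, None, :] - gauge_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)            # avoid division by zero at a gauge
    w = 1.0 / d**power                 # nearer gauges get larger weights
    return (w * gauge_precip).sum(axis=1) / w.sum(axis=1)

# Hypothetical example: three gauges, one grid cell midway between two of them
gauges = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
precip = np.array([2.0, 6.0, 4.0])     # observed precipitation at each gauge
grid = np.array([[5.0, 0.0]])
print(idw_interpolate(gauges, precip, grid))
```

For the grid cell equidistant from the 2.0 and 6.0 gauges, the estimate comes out close to 4.0; the farther gauge carries much less weight. A radar- or satellite-merged product would replace or adjust these gauge-only estimates cell by cell.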
Recent progress in modelling, animation and rendering means that rich, high fidelity virtual worlds are found in many interactive graphics applications. However, the viewer's experience of a 3D world is dependent on the nature of the virtual cinematography, in particular, the camera position, orientation and motion in relation to the elements of the scene and the action. Camera control encompasses viewpoint computation, motion planning and editing. We present a range of computer graphics applications and draw on insights from cinematographic practice in identifying their different requirements with regard to camera control. The nature of the camera control problem varies depending on these requirements, which range from augmented manual control (semi‐automatic) in interactive applications, to fully automated approaches. We review the full range of solution techniques from constraint‐based to optimization‐based approaches, and conclude with an examination of occlusion management and expressiveness in the context of declarative approaches to camera control.
We present here a new randomized algorithm for repairing the topology of objects represented by 3D binary digital images. By "repairing the topology", we mean a systematic way of modifying a given binary image in order to produce a similar binary image which is guaranteed to be well-composed. A 3D binary digital image is said to be well-composed if, and only if, the square faces shared by background and foreground voxels form a 2D manifold. Well-composed images enjoy special properties which can make them very desirable in practical applications. For instance, well-known algorithms for extracting surfaces from and thinning binary images can be simplified and optimized for speed if the input image is assumed to be well-composed. Furthermore, some algorithms for computing surface curvature and extracting adaptive triangulated surfaces, directly from the binary data, can only be applied to well-composed images. Finally, we introduce an extension of the aforementioned algorithm for repairing 3D digital multivalued images. Such an algorithm finds application in repairing segmented images resulting from multi-object segmentations of other 3D digital multivalued images.
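Well-composedness can be tested locally: by the standard critical-configuration characterization (due to Latecki), a 3D binary image is well-composed if and only if no axis-aligned 2x2 square of voxels is a checkerboard and no 2x2x2 cube has exactly two voxels of its minority value sitting at diagonally opposite corners. A brute-force sketch of such a test (an illustration of the definition, not the paper's randomized repair algorithm) might look like:

```python
import numpy as np
from itertools import product

# The four pairs of diagonally opposite corners of a 2x2x2 cube
_DIAGONALS = {frozenset(p) for p in
              [((0, 0, 0), (1, 1, 1)), ((0, 0, 1), (1, 1, 0)),
               ((0, 1, 0), (1, 0, 1)), ((0, 1, 1), (1, 0, 0))]}

def is_well_composed(img):
    """Brute-force test of the critical-configuration criterion."""
    img = np.asarray(img, dtype=int)
    # C1: no axis-aligned 2x2 square may be a checkerboard
    for ax in range(3):
        a = np.moveaxis(img, ax, 0)
        p, q = a[:, :-1, :-1], a[:, :-1, 1:]
        r, s = a[:, 1:, :-1], a[:, 1:, 1:]
        if np.any((p == s) & (q == r) & (p != q)):
            return False
    # C2: no 2x2x2 cube may have its two minority voxels at opposite corners
    nz, ny, nx = img.shape
    for z, y, x in product(range(nz - 1), range(ny - 1), range(nx - 1)):
        cube = img[z:z+2, y:y+2, x:x+2]
        if cube.sum() in (2, 6):
            minority = 1 if cube.sum() == 2 else 0
            pts = frozenset(tuple(p) for p in np.argwhere(cube == minority))
            if pts in _DIAGONALS:
                return False
    return True

solid = np.ones((2, 2, 2), dtype=int)          # a filled cube: well-composed
corner = np.zeros((2, 2, 2), dtype=int)
corner[0, 0, 0] = corner[1, 1, 1] = 1          # two voxels meeting at a point
print(is_well_composed(solid), is_well_composed(corner))  # True False
```

The second image fails because its foreground voxels touch only at a corner, so the boundary faces do not form a 2D manifold there; a repair algorithm would flip voxels until no such configuration remains.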
This paper is concerned with the derivation of infinite schedules for timed automata that are in some sense optimal. To cover a wide class of optimality criteria we start out by introducing an extension of the (priced) timed automata model that includes both costs and rewards as separate modelling features. A precise definition is then given of what constitutes optimal infinite behaviours for this class of models. We subsequently show that the derivation of optimal non-terminating schedules for such double-priced timed automata is computable. This is done by a reduction of the problem to the determination of optimal mean cycles in finite graphs with weighted edges. This reduction is obtained by introducing the so-called corner-point abstraction, a powerful abstraction technique which we show preserves optimal schedules.
This work was mostly done while visiting CISS at Aalborg University in Denmark and was supported by CISS and by ACI Cortos, a program of the French Ministry of Research.
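The final step of the reduction, finding an optimal mean cycle in a finite graph with weighted edges, is a classical problem solvable in O(|V|*|E|) time with Karp's algorithm. A minimal sketch (a generic illustration, not the paper's implementation; the example graph is hypothetical):

```python
import math

def min_mean_cycle(n, edges):
    """Karp's algorithm for the minimum mean weight over all directed cycles.

    n: number of vertices, labelled 0..n-1; edges: list of (u, v, w) triples.
    Returns the minimum cycle mean, or None if the graph is acyclic.
    Assumes the relevant vertices are reachable from vertex 0 (true for
    strongly connected graphs).
    """
    INF = math.inf
    # d[k][v] = minimum weight of a walk with exactly k edges from 0 to v
    d = [[INF] * n for _ in range(n + 1)]
    d[0][0] = 0.0
    for k in range(1, n + 1):
        for u, v, w in edges:
            if d[k - 1][u] + w < d[k][v]:
                d[k][v] = d[k - 1][u] + w
    best = None
    for v in range(n):
        if d[n][v] == INF:
            continue  # no n-edge walk reaches v, so v lies on no cycle here
        # Karp's theorem: the answer is min over v of this max ratio
        ratio = max((d[n][v] - d[k][v]) / (n - k)
                    for k in range(n) if d[k][v] < INF)
        best = ratio if best is None else min(best, ratio)
    return best

# Illustrative graph: cycle 0->1->0 has mean 2, cycle 1->2->1 has mean 1
edges = [(0, 1, 2), (1, 0, 2), (1, 2, 1), (2, 1, 1)]
print(min_mean_cycle(3, edges))  # 1.0
```

In the double-priced setting the quantity optimized is a cost-to-reward ratio along a cycle rather than a plain mean, but the cycle analysis on the corner-point abstraction graph is of this kind.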
Proteins take on their function in the cell by interacting with other proteins or biomolecular complexes. To study this process, computational methods collectively named protein docking are used to predict the position and orientation of a protein ligand when it is bound to a protein receptor or enzyme, taking into account chemical or physical criteria. This process is intensively studied to discover new biological functions for proteins and to better understand how these macromolecules take on those functions at the molecular scale. Pharmaceutical research also employs docking techniques for a variety of purposes, most notably in the virtual screening of large databases of available chemicals to select likely molecular candidates for drug design. The basic hypothesis of our work is that Virtual Reality (VR) and multimodal interaction can increase efficiency in reaching and analysing docking solutions, complementing a fully computational docking approach. To this end, we conducted an ergonomic analysis of the protein–protein docking task as it is carried out today. Using these results, we designed an immersive and multimodal application in which VR devices, such as the three-dimensional mouse and haptic devices, are used to interactively manipulate two proteins to explore possible docking solutions. During this exploration, visual, audio, and haptic feedback are combined to render and evaluate chemical or physical properties of the current docking configuration.
When European laboratories decided to develop a digital sound broadcasting (DSB) system, they specified three main conditions to fulfil:
quality improvement up to the level of 'CD' sound, even in difficult reception conditions (mobile vehicles, etc.)
significant additional digital data transmission, in order to transform sound broadcasting into a genuinely new service
the possibility of a common system for satellite and terrestrial transmissions.
It was on this basis that the European project 'Eureka 147' defined the system called DAB (digital audio broadcasting). In 1992, the ITU conference WARC 92 allocated 40 MHz to DSB in the L-band, in a configuration of complementary terrestrial and satellite networks; nevertheless, the present state of technical possibilities makes such mixed networks almost unfeasible, and the lack of available spectrum in the VHF bands has led a significant number of countries to envisage the L-band for T-DAB. The situation could turn into a competition between terrestrial and satellite networks, especially because the bandwidth in the L-band is not that large! France is among the countries facing this problem: the L-band alone is intended to be used by T-DAB, and broadcasters taking part in the work of 'Club DAB' estimated that 20 MHz, half of the DSB band, would be a minimum to ensure the success of T-DAB's introduction. Splitting this band into two parts has already been decided by CEPT, but in the proportion of one third for T-DAB. This organization intends to arrange a European planning meeting for T-DAB in July 1995, and, shortly after, several countries are ready to start implementing terrestrial networks. At the same time, international broadcasters wonder whether satellite transmission could present an alternative to HF.
Classification tasks are usually handled by flat, batch learners, which assume that problems are stationary and that there are no relations between class labels. Nevertheless, several real-world problems do not satisfy these premises: data may have labels organized hierarchically and be made available in streaming fashion, meaning that their behavior can drift over time. Existing studies on hierarchical classification do not consider data streams as input to their process, and thus data is assumed to be stationary and handled through batch learners. The same can be said about works on streaming data, where hierarchical classification is overlooked. Studies concerning each area individually are promising, yet they do not tackle their intersection. This study analyzes the main characteristics of state-of-the-art works on hierarchical classification for streaming data concerning five aspects: (i) problems tackled, (ii) datasets, (iii) algorithms, (iv) evaluation metrics, and (v) research gaps in the area. We performed a systematic literature review of primary studies and retrieved 3,722 papers, of which 42 were identified as relevant and used to answer the aforementioned research questions. We found that the problems handled by hierarchical classification of data streams mainly include classification of images, human activities, texts, and audio; the datasets are mostly purpose-built or synthetic; the algorithms and evaluation metrics are well-known techniques or based on them; and the research gaps relate to dynamic contexts, data complexity, and computational resource constraints. We also provide implications for future research and experiments that consider characteristics shared between hierarchical classification and data stream classification.
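Among the well-known evaluation metrics used in this area is the hierarchical F-measure, which augments each predicted and each true label with its ancestors in the class hierarchy before computing set-based precision and recall, so that a near-miss deep in the tree still earns partial credit. A minimal sketch, with a hypothetical two-level hierarchy:

```python
def ancestors(label, parent):
    """All ancestors of a label in the class hierarchy, label included."""
    out = set()
    while label is not None:
        out.add(label)
        label = parent.get(label)
    return out

def hierarchical_f1(true_labels, pred_labels, parent):
    """Hierarchical precision/recall/F1 over a set of examples."""
    tp = p_total = t_total = 0
    for t, p in zip(true_labels, pred_labels):
        T = ancestors(t, parent)       # augment truth with its ancestors
        P = ancestors(p, parent)       # augment prediction likewise
        tp += len(T & P)
        p_total += len(P)
        t_total += len(T)
    hp, hr = tp / p_total, tp / t_total
    return 2 * hp * hr / (hp + hr) if hp + hr else 0.0

# Hypothetical hierarchy: root -> {animal, vehicle}, animal -> {dog, cat}
parent = {"animal": "root", "vehicle": "root",
          "dog": "animal", "cat": "animal", "root": None}
# Predicting "cat" for a true "dog" still matches the shared {root, animal}
print(hierarchical_f1(["dog"], ["cat"], parent))
```

Here the wrong-but-close prediction scores roughly 0.667 instead of the 0 a flat metric would assign, which is why such metrics recur in hierarchical-classification evaluations.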