To date, long-term preservation approaches have comprised emulation, migration, normalization, and metadata, or some combination of these. Most existing work has focused on applying these approaches to digital objects of a single media type:
text, HTML, images, video or audio. In this paper, we consider the preservation of composite, mixed-media digital objects,
a rapidly growing class of resources. We describe an integrated, flexible system that we have developed, which leverages existing
tools and services and assists organizations in dynamically discovering the optimal preservation strategy as it is required.
The system captures and periodically compares preservation metadata with software and format registries to determine those
objects (or sub-objects) at risk. By making preservation software modules available as Web services and describing them semantically
using a machine-processable ontology (OWL-S), the most appropriate preservation service(s) for each object (or sub-object)
can then be dynamically discovered, composed and invoked by software agents (with optional human input at critical decision-making
steps). The PANIC system successfully illustrates how the growing array of available preservation tools and services can be
integrated to provide a sustainable, collaborative solution to the long-term preservation of large-scale collections of complex
digital objects.
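As a rough sketch of the risk-detection step (not PANIC's actual implementation: the registry entries, format identifiers, and function names below are all hypothetical), the following Python fragment compares each object's format metadata against a format registry and flags components whose formats are obsolete or at risk:

```python
# Hypothetical sketch only; the real system consults external software and
# format registries and OWL-S service descriptions, not in-memory dicts.

# Minimal format registry: maps format IDs to their support status.
FORMAT_REGISTRY = {
    "fmt/wav":   {"status": "supported"},
    "fmt/ra":    {"status": "obsolete", "migrate_to": "fmt/wav"},
    "fmt/gif87": {"status": "at-risk",  "migrate_to": "fmt/png"},
}

def find_objects_at_risk(objects):
    """Return (object_id, sub_object_id, suggested_target) triples for
    every component whose format is no longer considered safe."""
    at_risk = []
    for obj in objects:
        for sub in obj["components"]:  # composite objects have sub-objects
            entry = FORMAT_REGISTRY.get(sub["format"], {"status": "unknown"})
            if entry["status"] != "supported":
                at_risk.append((obj["id"], sub["id"], entry.get("migrate_to")))
    return at_risk

# Example: a mixed-media object with an audio and an image component.
objects = [{
    "id": "obj-001",
    "components": [
        {"id": "audio-1", "format": "fmt/ra"},
        {"id": "img-1",   "format": "fmt/gif87"},
    ],
}]
print(find_objects_at_risk(objects))
# [('obj-001', 'audio-1', 'fmt/wav'), ('obj-001', 'img-1', 'fmt/png')]
```

In the system described above, this comparison runs periodically, and flagged components are then routed to dynamically discovered preservation services.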
Two-dimensional Linear Discriminant Analysis (2DLDA), which is supervised and extracts the most discriminating features, has been widely used in face image representation and recognition. However, 2DLDA is inapplicable to many real-world situations because it assumes that the input data obey a Gaussian distribution and it captures only the global structure of the data. To handle this problem, we present Two-dimensional Locality Adaptive Discriminant Analysis (2DLADA). Compared to 2DLDA, our method has two salient advantages: (1) it does not depend on any assumption about the data distribution and is therefore better suited to real-world applications; (2) it adaptively exploits the intrinsic local structure of the data manifold. Experiments on an artificial dataset and on real-world datasets demonstrate the superiority of the proposed method.
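The abstract does not give 2DLADA's objective in closed form, so as background, here is a minimal NumPy sketch of the baseline 2DLDA it extends: class scatter matrices are accumulated directly on the image matrices (no vectorization), and the projection comes from the leading generalized eigenvectors. Shapes and parameter names are illustrative.

```python
import numpy as np

def two_d_lda(images, labels, k):
    """Baseline 2DLDA sketch: images is (n, h, w), labels is (n,).
    Returns a (w, k) projection matrix of the top-k discriminant
    directions, acting on the right-hand side of each image matrix."""
    images = np.asarray(images, dtype=float)
    labels = np.asarray(labels)
    global_mean = images.mean(axis=0)
    w = images.shape[2]
    S_b = np.zeros((w, w))  # between-class scatter
    S_w = np.zeros((w, w))  # within-class scatter
    for c in np.unique(labels):
        X_c = images[labels == c]
        M_c = X_c.mean(axis=0)
        diff = M_c - global_mean
        S_b += len(X_c) * diff.T @ diff
        for X in X_c:
            d = X - M_c
            S_w += d.T @ d
    # Solve the generalized eigenproblem S_b v = lambda S_w v.
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(S_w, S_b))
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs[:, order[:k]].real

# Toy usage: ten random 20x16 "images" in two classes, projected to 4 columns.
rng = np.random.default_rng(0)
imgs = rng.standard_normal((10, 20, 16)) + np.repeat([0.0, 1.0], 5)[:, None, None]
W = two_d_lda(imgs, np.repeat([0, 1], 5), k=4)
projected = imgs @ W    # each image becomes a 20x4 feature matrix
print(projected.shape)  # (10, 20, 4)
```

2DLADA, as described, replaces the global scatter statistics above with adaptively weighted local ones, so it needs no Gaussian assumption.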
Spam filtering is a text classification task to which Case-Based Reasoning (CBR) has been successfully applied. We describe
the ECUE system, which classifies emails using a feature-based form of textual CBR. Then, we describe an alternative way to
compute the distances between cases in a feature-free fashion, using a distance measure based on text compression. This distance
measure has the advantages of having no set-up costs and being resilient to concept drift. We report an empirical comparison,
which shows the feature-free approach to be more accurate than the feature-based system. These results are fairly robust across different compression algorithms: accuracy when using a Lempel-Ziv compressor (GZip) is approximately the same as when using a statistical compressor (PPM). We note, however, that the feature-free systems take much longer to
classify emails than the feature-based system. Improvements in the classification time of both kinds of systems can be obtained
by applying case base editing algorithms, which aim to remove noisy and redundant cases from a case base while maintaining,
or even improving, generalisation accuracy. We report empirical results using the Competence-Based Editing (CBE) technique.
We show that CBE removes more cases when we use the distance measure based on text compression (without significant changes
in generalisation accuracy) than it does when we use the feature-based approach.
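The abstract does not spell out the exact compression-based distance, but a standard choice with the properties described (feature-free, no set-up costs) is the Normalized Compression Distance (NCD). A minimal sketch using GZip, the Lempel-Ziv compressor mentioned above:

```python
import gzip

def c(data: bytes) -> int:
    """Compressed size of data under GZip."""
    return len(gzip.compress(data))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: one standard way to realize a
    feature-free, compression-based distance between two texts. If x and y
    share structure, compressing their concatenation costs little more than
    compressing the larger one alone, so the distance is small."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

spam1 = b"Buy cheap meds online now, limited offer, click here!"
spam2 = b"Cheap meds online, buy now, limited time offer, click!"
ham   = b"Attached are the minutes from Tuesday's project meeting."
print(ncd(spam1, spam2))  # smaller: the two spam texts share structure
print(ncd(spam1, ham))    # larger: little shared structure
```

A case-based classifier can then label a new email by the classes of its nearest cases under this distance. There is no feature extraction or training step, which is the source of the "no set-up costs" advantage, though every comparison pays the cost of compression, consistent with the longer classification times reported above.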
Reviews the books, Madhouse: A Tragic Tale of Megalomania and Modern Medicine by Andrew Scull (see record 2005-06776-000); and The Lobotomist: A Maverick Medical Genius and His Tragic Quest to Rid the World of Mental Illness by Jack El-Hai (see record 2005-02343-000). In both books, the history of experimental clinical psychiatry is laid bare with devastating accounts of the efforts to conquer mental illness by any means necessary. Both books are fascinating reading and may illuminate our current context, in which the biological avenues for treating mental disorders continue to traffic in hopes of a one-size-fits-all cure, while psychoanalysis ambivalently struggles with how to conduct rigorous research to demonstrate the efficacy of our treatment. Andrew Scull's book Madhouse offers a well-documented historical account of a bizarre episode in American psychiatric history. The centerpiece of Scull's investigative work is Henry Cotton, MD, the superintendent of the Trenton State Hospital in Trenton, New Jersey, from 1907 to 1930. Once Cotton arrived at Trenton, he was appalled by the conditions he found and instituted reforms such as eliminating the culture of violence by attendants, removing over 700 pieces of restraining equipment from the hospital, and introducing occupational therapy. Jack El-Hai gives us the next segment of psychiatric surgery in his book The Lobotomist, a biography of the neurologist turned surgical outlaw, Walter Freeman, MD. Freeman was a neurologist fascinated with science and experimentation. Settling into work at St. Elizabeth's hospital in Washington, DC, in 1924, Freeman eventually joined the faculty of George Washington University, where he remained until 1954. At that time neurosyphilis was the scourge of mental hospitals, producing thousands of victims who were totally disabled by the neurological sequelae of tertiary illness. In Freeman's hands, lobotomy became an efficient outpatient procedure that could be applied to a larger patient population. Both of these books are important reading. Of all the great medical advances of the last century, surely the one that stands out as perhaps the greatest is the Nuremberg Code of 1947, which requires a competent patient giving informed consent to treatment and to research efforts.
Automated segmentation of blood vessels in retinal images can help ophthalmologists screen larger populations for vessel abnormalities. However, automated vessel extraction is difficult because the width of retinal vessels varies widely, the local contrast of vessels is unstable, and the smallest vessels are overwhelmed by Gaussian-like noise. Accurate segmentation and width estimation of small vessels are therefore very challenging. In this paper, we propose a simple and efficient multiscale vessel extraction scheme that multiplies the responses of matched filters at three scales. Since vessel structures produce relatively strong responses to the matched filters at different scales while background noise does not, this scale production further enhances vessels while suppressing noise. After appropriate selection of scale parameters and appropriate normalization of the filter responses, the responses are extracted and fused in the scale production domain. Experimental results demonstrate that the proposed method segments vessels accurately with good width estimation.
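A minimal sketch of the scale-production idea follows. It substitutes a scale-normalized Laplacian-of-Gaussian line detector for the paper's oriented matched filters (a simplification; the scales and threshold are illustrative), but it shows the key step: responses that persist across all three scales survive the product, while noise that fires at only one scale is suppressed.

```python
import numpy as np
from scipy import ndimage

def scale_production_response(image, sigmas=(1.0, 2.0, 4.0)):
    """Multiply line-detector responses across three scales.
    Vessel-like structures respond at every scale, so their product stays
    large; uncorrelated noise rarely responds at all scales at once."""
    product = np.ones_like(image, dtype=float)
    for sigma in sigmas:
        # sigma**2 gives the usual scale normalization of the LoG.
        response = -(sigma ** 2) * ndimage.gaussian_laplace(image, sigma)
        response = np.clip(response, 0, None)  # keep bright-ridge responses
        response /= response.max() + 1e-12     # normalize per scale
        product *= response
    return product

# Toy usage: a synthetic image with a bright vertical "vessel" plus noise.
rng = np.random.default_rng(1)
img = rng.normal(0, 0.1, (64, 64))
img[:, 30:33] += 1.0                            # 3-pixel-wide line
resp = scale_production_response(img)
mask = resp > 0.1 * resp.max()                  # crude segmentation
print(mask[:, 28:35].mean(), mask[:, :20].mean())  # line region vs background
```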
Background
Hyperglycaemia is a common complication of stress and prematurity in extremely low-birth-weight infants. Model-based insulin therapy protocols can safely improve glycaemic control for this group. In these models, non-insulin-mediated glucose uptake by the central nervous system is typically estimated using population-based body weight models, which may not be ideal.
Method
A head-circumference-based model that treats small-for-gestational-age (SGA) and appropriate-for-gestational-age (AGA) infants separately is compared to a body weight model in a retrospective analysis of 48 patients with a median birth weight of 750 g and a median gestational age of 25 weeks. Estimated brain mass, model-based insulin sensitivity (SI) profiles, and projected glycaemic control outcomes are investigated. The five SGA infants are also analyzed as a separate cohort.
Results
Across the entire cohort, estimated brain mass deviated by a median of 10% between models, with a per-patient median difference in SI of 3.5%. For the SGA group, brain mass deviation was 42% and per-patient SI deviation was 13.7%. In virtual trials, 87–93% of recommended insulin rates were equal to or slightly lower (Δ < 0.16 mU/h) under the head circumference method, while glycaemic control outcomes showed little change.
Conclusion
The results suggest that body weight methods are less accurate than head-circumference-based methods. Head-circumference-based estimates may offer improved modelling accuracy and a small reduction in insulin administration, particularly for SGA infants.
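As a purely illustrative sketch of why the two estimation routes diverge for SGA infants (the model forms and every coefficient below are placeholders, not the models evaluated in this study): a body weight model scales brain mass with weight, which is disproportionately low in SGA infants, while a head-circumference model tracks head size directly.

```python
# Illustrative sketch only: coefficient values are hypothetical placeholders.

def brain_mass_from_weight(body_weight_g, fraction=0.14):
    """Population-based body-weight model: brain mass as a fixed fraction
    of body weight (fraction is a placeholder value)."""
    return fraction * body_weight_g

def brain_mass_from_hc(head_circumference_cm, a=0.037, b=2.57):
    """Head-circumference model of an allometric form a * HC**b
    (a and b are placeholder values for illustration)."""
    return a * head_circumference_cm ** b

# A hypothetical SGA infant: low weight relative to head size.
print(brain_mass_from_weight(600))  # ~84 g
print(brain_mass_from_hc(23.0))     # ~117 g: the two models diverge sharply,
                                    # echoing the direction of the larger
                                    # deviation reported for the SGA group
```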
The elicitation and communication of user requirements is an early, critical, and highly error-prone stage in system development. Socially oriented methodologies provide more support for user involvement in design than more rigid traditional methods, improving user–designer communication and the ‘capture’ of requirements. A more emergent and collaborative view of requirements elicitation and communication is required to encompass user, contextual, and organisational factors. Drawing on the literature on communication issues in requirements elicitation, a four-dimensional framework is outlined and used to comparatively appraise four methodologies that seek to promote a closer working relationship between users and designers. We then discuss how communicative activities between users and designers can be ‘optimised’ for successful requirements gathering, making recommendations based on the four dimensions that offer useful considerations for system designers.
In this work we present Bio-PEPA, a process algebra for the modelling and analysis of biochemical networks. It is a modification of PEPA, a process algebra originally defined for the performance analysis of computer systems, adapted to handle features of biological models such as stoichiometry and general kinetic laws. Bio-PEPA may be seen as an intermediate, formal, compositional representation of biological systems on which different kinds of analyses can be carried out. Bio-PEPA is also enriched with notions of equivalence; specifically, the isomorphism and strong bisimulation defined for PEPA have been extended to our language. Finally, we show the translation of a biological model into the new language and report some analysis results.
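Bio-PEPA's concrete syntax is not reproduced here; as a rough illustration of the two features named above, the sketch below simulates a hypothetical two-reaction network in Python, combining a general kinetic law (Michaelis-Menten) with a mass-action reaction of stoichiometry 2.

```python
from scipy.integrate import solve_ivp

# Plain ODE sketch of the kind of network Bio-PEPA describes; not Bio-PEPA
# syntax. Hypothetical network: S --(Michaelis-Menten)--> P, then
# 2 P --(mass action)--> D.

V_MAX, K_M = 1.0, 0.5  # Michaelis-Menten parameters (assumed values)
K2 = 0.3               # mass-action rate constant (assumed value)

def rates(t, y):
    s, p, d = y
    v1 = V_MAX * s / (K_M + s)  # general kinetic law, not mass action
    v2 = K2 * p ** 2            # mass action over two copies of P
    return [-v1,                # S consumed by reaction 1
            v1 - 2 * v2,        # P produced by 1, consumed with stoichiometry 2
            v2]                 # D produced by reaction 2

sol = solve_ivp(rates, (0, 20), [1.0, 0.0, 0.0])
print(sol.y[:, -1])  # final concentrations of S, P, D
```

Bio-PEPA itself expresses such a network compositionally, with species as processes and reactions as shared actions, so the same model can feed ODE, stochastic, and model-checking analyses.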