Full text (paid): 1,675 articles
Free: 47 articles
Free (domestic): 1 article
Number of articles by subject area:
Electrical engineering: 10
Chemical industry: 253
Metalworking: 22
Machinery and instrumentation: 35
Building science: 98
Mining engineering: 10
Energy and power engineering: 27
Light industry: 264
Hydraulic engineering: 8
Petroleum and natural gas: 3
Radio and electronics: 87
General industrial technology: 188
Metallurgical industry: 494
Atomic energy technology: 2
Automation technology: 222
Number of articles by publication year:
2022: 30
2021: 36
2020: 17
2019: 23
2018: 29
2017: 30
2016: 36
2015: 30
2014: 46
2013: 111
2012: 64
2011: 90
2010: 85
2009: 69
2008: 83
2007: 79
2006: 88
2005: 75
2004: 49
2003: 43
2002: 43
2001: 30
2000: 25
1999: 30
1998: 34
1997: 29
1996: 31
1995: 29
1994: 33
1993: 20
1992: 16
1991: 11
1990: 14
1989: 18
1988: 20
1987: 22
1986: 17
1985: 27
1984: 19
1983: 17
1982: 9
1981: 17
1980: 8
1979: 11
1978: 6
1977: 6
1975: 7
1974: 6
1973: 5
1971: 5
A total of 1,723 results were found (search time: 14 ms).
91.
To date, long-term preservation approaches have comprised emulation, migration, normalization, and metadata – or some combination of these. Most existing work has focused on applying these approaches to digital objects of a single media type: text, HTML, images, video or audio. In this paper, we consider the preservation of composite, mixed-media digital objects, a rapidly growing class of resources. We describe an integrated, flexible system that we have developed, which leverages existing tools and services and assists organizations in dynamically discovering the optimum preservation strategy as it is required. The system captures preservation metadata and periodically compares it with software and format registries to determine those objects (or sub-objects) at risk. By making preservation software modules available as Web services and describing them semantically using a machine-processable ontology (OWL-S), the most appropriate preservation service(s) for each object (or sub-object) can then be dynamically discovered, composed and invoked by software agents (with optional human input at critical decision-making steps). The PANIC system successfully illustrates how the growing array of available preservation tools and services can be integrated to provide a sustainable, collaborative solution to the long-term preservation of large-scale collections of complex digital objects.
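To make the risk-detection step concrete, the following is a minimal Python sketch of checking composite objects and their sub-objects against a format registry. The registry entries, class names, and MIME types are hypothetical placeholders, not the actual registries or OWL-S services used by PANIC.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified stand-in for the external format/software registries
# that a preservation system would query; entries and MIME types are invented.
FORMAT_REGISTRY = {
    "application/composite": {"supported": True, "migrate_to": None},
    "image/png": {"supported": True, "migrate_to": None},
    "image/x-legacy-raster": {"supported": False, "migrate_to": "image/png"},
    "video/x-legacy-stream": {"supported": False, "migrate_to": "video/mp4"},
}

@dataclass
class DigitalObject:
    identifier: str
    mime_type: str
    sub_objects: list = field(default_factory=list)  # composite, mixed-media objects

def objects_at_risk(obj: DigitalObject):
    """Recursively flag an object and its sub-objects whose formats the
    registry reports as no longer supported."""
    entry = FORMAT_REGISTRY.get(obj.mime_type, {"supported": False, "migrate_to": None})
    risks = [] if entry["supported"] else [(obj.identifier, obj.mime_type, entry["migrate_to"])]
    for sub in obj.sub_objects:
        risks.extend(objects_at_risk(sub))
    return risks

if __name__ == "__main__":
    lecture = DigitalObject("lecture-42", "application/composite", [
        DigitalObject("lecture-42/slides", "image/x-legacy-raster"),
        DigitalObject("lecture-42/video", "video/x-legacy-stream"),
    ])
    for ident, fmt, target in objects_at_risk(lecture):
        print(f"{ident}: format {fmt} is at risk; candidate migration target: {target}")
```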
92.
Li, Qin; You, Jane. Multimedia Tools and Applications, 2019, 78(21): 30397–30418

Two-dimensional Linear Discriminant Analysis (2DLDA), which is supervised and extracts the most discriminating features, has been widely used in face image representation and recognition. However, 2DLDA is inapplicable to many real-world situations because it assumes that the input data follow a Gaussian distribution and because it captures only the global structure of the data. To handle this problem, we present a Two-dimensional Locality Adaptive Discriminant Analysis (2DLADA). Compared to 2DLDA, our method has two salient advantages: (1) it does not depend on any assumption about the data distribution and is therefore better suited to real-world applications; (2) it adaptively exploits the intrinsic local structure of the data manifold. Performance on an artificial dataset and on real-world datasets demonstrates the superiority of the proposed method.

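As background for how a two-dimensional discriminant projection is obtained, here is a minimal NumPy sketch of standard 2DLDA, the baseline that 2DLADA extends; the locality-adaptive weighting of 2DLADA itself is not reproduced here, and the toy data are purely illustrative.

```python
import numpy as np

def two_dlda(images, labels, n_components=2):
    """Standard 2DLDA (right-projection variant): images are m-by-n arrays,
    labels are integer class ids; returns an n-by-d projection matrix W."""
    X = np.asarray(images, dtype=float)          # shape (N, m, n)
    y = np.asarray(labels)
    global_mean = X.mean(axis=0)                 # (m, n)
    n = X.shape[2]
    Sb = np.zeros((n, n))
    Sw = np.zeros((n, n))
    for c in np.unique(y):
        Xc = X[y == c]
        class_mean = Xc.mean(axis=0)
        diff = class_mean - global_mean
        Sb += len(Xc) * diff.T @ diff            # between-class scatter (n x n)
        for img in Xc - class_mean:
            Sw += img.T @ img                    # within-class scatter (n x n)
    # Generalized eigenproblem Sb w = lambda Sw w; a small ridge keeps Sw invertible.
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(n), Sb))
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs.real[:, order[:n_components]] # projection matrix W

# Usage: project each image onto W to get an m-by-d feature matrix.
rng = np.random.default_rng(0)
imgs = rng.normal(size=(20, 8, 6)) + np.repeat(np.arange(2), 10)[:, None, None]
W = two_dlda(imgs, np.repeat(np.arange(2), 10))
print((imgs[0] @ W).shape)   # (8, 2)
```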
93.
94.
Spam filtering is a text classification task to which Case-Based Reasoning (CBR) has been successfully applied. We describe the ECUE system, which classifies emails using a feature-based form of textual CBR. We then describe an alternative way to compute the distances between cases in a feature-free fashion, using a distance measure based on text compression. This distance measure has the advantages of having no set-up costs and being resilient to concept drift. We report an empirical comparison, which shows the feature-free approach to be more accurate than the feature-based system. These results are fairly robust across compression algorithms: the accuracy when using a Lempel-Ziv compressor (GZip) is approximately the same as when using a statistical compressor (PPM). We note, however, that the feature-free systems take much longer to classify emails than the feature-based system. Improvements in the classification time of both kinds of systems can be obtained by applying case-base editing algorithms, which aim to remove noisy and redundant cases from a case base while maintaining, or even improving, generalisation accuracy. We report empirical results using the Competence-Based Editing (CBE) technique. We show that CBE removes more cases when we use the distance measure based on text compression (without significant changes in generalisation accuracy) than it does when we use the feature-based approach.
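A compression-based distance of the kind described can be sketched in a few lines; the example below uses the standard normalized compression distance with gzip inside a 1-nearest-neighbour classifier. The exact distance measure and retrieval scheme used by ECUE may differ, and the toy case base is invented for illustration.

```python
import gzip

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance using gzip as the compressor
    (the paper also reports results with a statistical PPM compressor)."""
    cx = len(gzip.compress(x))
    cy = len(gzip.compress(y))
    cxy = len(gzip.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def classify(email: str, case_base):
    """1-nearest-neighbour classification over a case base of (text, label) pairs."""
    text = email.encode("utf-8")
    _, label = min(case_base, key=lambda case: ncd(text, case[0].encode("utf-8")))
    return label

# Toy usage; a real case base would hold many labelled emails.
cases = [("win a free prize now, click here", "spam"),
         ("minutes of the project meeting attached", "ham")]
print(classify("click here to claim your free prize", cases))
```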
95.
Software developers have individual styles of programming. This paper empirically examines the validity of the consistent programmer hypothesis: that a facet, or set of facets, exists that can be used to recognize the author of a given program based on programming style. The paper further postulates that, because of these stylistic differences, different test strategies work better for some programmers (or programming styles) than for others. For example, all-edges adequate tests may detect faults for programs written by Programmer A better than for those written by Programmer B. This has several useful applications: to help detect plagiarism and copyright violation of source code, to help improve the practical application of software testing, and to help pursue the authors of malicious code and source-code viruses. This paper investigates the concept by experimentally examining whether particular facets of a program can be used to identify its author and whether testing strategies can reasonably be associated with specific programmers. Copyright © 2009 John Wiley & Sons, Ltd.
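A minimal sketch of the kind of style-based authorship attribution the hypothesis implies is shown below. The specific facets (identifier length, comment density, blank-line ratio, indentation) and the nearest-centroid matching are illustrative assumptions, not the facets or method studied in the paper.

```python
import math
import re

def style_vector(source: str):
    """Illustrative style facets; these are NOT the facets from the paper,
    just plausible examples of measurable stylistic habits."""
    lines = source.splitlines() or [""]
    identifiers = re.findall(r"[A-Za-z_][A-Za-z0-9_]*", source)
    return [
        sum(map(len, identifiers)) / max(len(identifiers), 1),            # mean identifier length
        sum(1 for l in lines if l.strip().startswith("#")) / len(lines),  # comment density
        sum(1 for l in lines if not l.strip()) / len(lines),              # blank-line ratio
        sum(len(l) - len(l.lstrip(" ")) for l in lines) / len(lines),     # mean indentation depth
    ]

def attribute(sample: str, corpus: dict) -> str:
    """Nearest-centroid attribution over per-author style vectors.
    corpus maps author name -> list of source-code strings by that author."""
    centroids = {}
    for author, sources in corpus.items():
        vecs = [style_vector(s) for s in sources]
        centroids[author] = [sum(col) / len(col) for col in zip(*vecs)]
    v = style_vector(sample)
    return min(centroids, key=lambda a: math.dist(v, centroids[a]))
```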
96.
Reviews the books, Madhouse: A Tragic Tale of Megalomania and Modern Medicine by Andrew Scull (see record 2005-06776-000); and The Lobotomist: A Maverick Medical Genius and His Tragic Quest to Rid the World of Mental Illness by Jack El-Hai (see record 2005-02343-000). In both books, the history of experimental clinical psychiatry is laid bare with devastating accounts of the efforts to conquer mental illness by any means necessary. Both books are fascinating reading and may illuminate our current context, in which the biological avenues for treating mental disorders continue to traffic in hopes of a one-size-fits-all cure, while psychoanalysis ambivalently struggles with how to conduct rigorous research to demonstrate the efficacy of our treatment. Andrew Scull's book Madhouse offers a well-documented historical account of a bizarre episode in American psychiatric history. The centerpiece of Scull's investigative work is Henry Cotton, MD, the superintendent of the Trenton State Hospital in Trenton, New Jersey, from 1907 to 1930. Once Cotton arrived at Trenton, he was appalled by the conditions he found and instituted reforms such as eliminating the culture of violence by attendants, removing over 700 pieces of restraining equipment from the hospital, and introducing occupational therapy. Jack El-Hai gives us the next segment of psychiatric surgery in his book The Lobotomist, a biography of the neurologist, turned surgical outlaw, Walter Freeman, MD. Walter Freeman was a neurologist fascinated with science and experimentation. Settling into work at St. Elizabeth's Hospital in Washington, DC, in 1924, Freeman eventually joined the faculty of George Washington University, where he remained until 1954. At that time neurosyphilis was the scourge of mental hospitals, producing thousands of victims who were totally disabled by the neurological sequelae of tertiary illness. Thus lobotomy became an efficient outpatient procedure that could be applied to a larger patient population. Both of these books are important reading. Of all the great medical advances of the last century, surely the one that stands out as perhaps the greatest is the Nuremberg Code of 1947, which requires a competent patient giving informed consent to treatment and to research efforts. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
97.
Automated segmentation of blood vessels in retinal images can help ophthalmologists screen larger populations for vessel abnormalities. However, automated vessel extraction is difficult because the width of retinal vessels varies from very large to very small and the local contrast of vessels is unstable. Further, the small vessels are overwhelmed by Gaussian-like noise, so accurate segmentation and width estimation of small vessels are very challenging. In this paper, we propose a simple and efficient multiscale vessel extraction scheme that multiplies the responses of matched filters at three scales. Since vessel structures have relatively strong responses to the matched filters at different scales but background noise does not, this scale production further enhances vessels while suppressing noise. After appropriate selection of scale parameters and appropriate normalization of filter responses, the filter responses are extracted and fused in the scale production domain. The experimental results demonstrate that the proposed method works well for accurately segmenting vessels with good width estimation.
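A minimal sketch of multiplying matched-filter responses across scales is given below; the kernel shape, scale values, normalization, and threshold are illustrative assumptions rather than the parameters used in the paper, and a full implementation would also sweep filter orientations.

```python
import numpy as np
from scipy.ndimage import convolve

def matched_filter_kernel(sigma, length=9):
    """Gaussian-shaped matched filter at one orientation (a real implementation
    rotates the kernel over many orientations and keeps the maximum response)."""
    x = np.arange(length) - length // 2
    k = -np.exp(-x**2 / (2 * sigma**2))          # dark vessels on a bright background
    k -= k.mean()                                # zero-mean so flat regions respond ~0
    return np.tile(k, (length, 1)) / length

def scale_production_response(image, sigmas=(1.0, 1.5, 2.0)):
    """Multiply matched-filter responses across scales: vessels respond at all
    scales while noise does not, so the product suppresses noise."""
    responses = []
    for s in sigmas:
        r = convolve(image.astype(float), matched_filter_kernel(s))
        r = (r - r.min()) / (np.ptp(r) + 1e-9)   # normalise each scale's response
        responses.append(r)
    return np.prod(responses, axis=0)

# Usage on a toy image containing a dark vertical "vessel" plus noise.
rng = np.random.default_rng(1)
img = np.ones((64, 64)) + 0.1 * rng.normal(size=(64, 64))
img[:, 30:33] -= 0.5
vessel_map = scale_production_response(img) > 0.5
```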
98.

Introduction

Hyperglycaemia is a common complication of stress and prematurity in extremely low-birth-weight infants. Model-based insulin therapy protocols can safely improve glycaemic control for this group. In these models, non-insulin-mediated glucose uptake by the central nervous system is typically estimated using population-based body weight models, which may not be ideal.

Method

A head circumference-based model that treats small-for-gestational-age (SGA) and appropriate-for-gestational-age (AGA) infants separately is compared to a body weight model in a retrospective analysis of 48 patients with a median birth weight of 750 g and a median gestational age of 25 weeks. Estimated brain mass, model-based insulin sensitivity (SI) profiles, and projected glycaemic control outcomes are investigated. The SGA infants (n = 5) are also analyzed as a separate cohort.

Results

Across the entire cohort, estimated brain mass deviated by a median of 10% between models, with a per-patient median difference in SI of 3.5%. For the SGA group, brain mass deviation was 42% and per-patient SI deviation 13.7%. In virtual trials, 87–93% of recommended insulin rates were equal to or slightly lower (Δ < 0.16 mU/h) under the head circumference method, while glycaemic control outcomes showed little change.

Conclusion

The results suggest that body weight methods are less accurate than head circumference methods. Head circumference-based estimates may offer improved modelling accuracy and a small reduction in insulin administration, particularly for SGA infants.
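The per-patient deviation statistics reported above can be computed as in the following sketch; the brain-mass values are hypothetical placeholders, since the paper's body weight and head circumference models are not reproduced here.

```python
import numpy as np

def percent_deviation(a, b):
    """Per-patient percentage deviation of estimate b relative to estimate a."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return 100.0 * np.abs(b - a) / a

# Hypothetical brain-mass estimates (grams) for four patients; the paper's actual
# body weight and head circumference models are not reproduced here.
brain_mass_bw = np.array([110.0, 95.0, 130.0, 88.0])    # body weight model
brain_mass_hc = np.array([118.0, 101.0, 122.0, 104.0])  # head circumference model
print("median deviation: %.1f%%" % np.median(percent_deviation(brain_mass_bw, brain_mass_hc)))
```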
99.
The elicitation and communication of user requirements is an early and critical, but highly error-prone, stage in system development. Socially oriented methodologies provide more support for user involvement in design than more rigid traditional methods, improving user–designer communication and the 'capture' of requirements. A more emergent and collaborative view of requirements elicitation and communication is needed to encompass user, contextual and organisational factors. Drawing on the literature on communication issues in requirements elicitation, a four-dimensional framework is outlined and used to comparatively appraise four methodologies that seek to promote a closer working relationship between users and designers. Finally, the ways in which communicative activities can be 'optimised' for successful requirements gathering are discussed, with recommendations based on the four dimensions offered as considerations for system designers.
100.
In this work we present Bio-PEPA, a process algebra for the modelling and analysis of biochemical networks. It is a modification of PEPA, originally defined for the performance analysis of computer systems, designed to handle features of biological models such as stoichiometry and general kinetic laws. Bio-PEPA may be seen as an intermediate, formal, compositional representation of biological systems on which different kinds of analyses can be carried out. Bio-PEPA is also enriched with notions of equivalence: the isomorphism and strong bisimulation defined for PEPA have been extended to the new language. Finally, we show the translation of a biological model into the new language and report some analysis results.
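To illustrate the two modelling features the abstract highlights, stoichiometry and general kinetic laws, here is a small ODE sketch in Python of a single enzymatic reaction with Michaelis-Menten kinetics. This is not Bio-PEPA syntax and the rate constants are arbitrary; it only shows the kind of kinetic behaviour such a language must be able to express.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Stoichiometry and a general (non-mass-action) kinetic law for the reaction S -> P.
species = ["S", "P"]
stoichiometry = np.array([-1.0, +1.0])          # one S consumed, one P produced

def michaelis_menten(s, vmax=1.2, km=0.5):
    """A general kinetic law: rate saturates in the substrate concentration."""
    return vmax * s / (km + s)

def odes(t, y):
    rate = michaelis_menten(y[0])               # reaction rate from the kinetic law
    return stoichiometry * rate                 # apply stoichiometric coefficients

sol = solve_ivp(odes, (0.0, 10.0), y0=[1.0, 0.0])
print(dict(zip(species, sol.y[:, -1].round(3))))
```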