  Paid full text   625 articles
  Free   25 articles
Electrical Engineering   9 articles
Chemical Industry   152 articles
Metalworking   12 articles
Machinery & Instrumentation   10 articles
Building Science   45 articles
Mining Engineering   1 article
Energy & Power   8 articles
Light Industry   36 articles
Hydraulic Engineering   1 article
Radio & Electronics   40 articles
General Industrial Technology   118 articles
Metallurgical Industry   43 articles
Nuclear Technology   14 articles
Automation Technology   161 articles
  2023: 6 articles
  2022: 7 articles
  2021: 22 articles
  2020: 13 articles
  2019: 7 articles
  2018: 12 articles
  2017: 14 articles
  2016: 23 articles
  2015: 26 articles
  2014: 28 articles
  2013: 40 articles
  2012: 46 articles
  2011: 51 articles
  2010: 34 articles
  2009: 35 articles
  2008: 32 articles
  2007: 45 articles
  2006: 24 articles
  2005: 25 articles
  2004: 14 articles
  2003: 14 articles
  2002: 15 articles
  2001: 7 articles
  2000: 9 articles
  1999: 9 articles
  1998: 12 articles
  1997: 12 articles
  1996: 4 articles
  1995: 6 articles
  1994: 3 articles
  1993: 2 articles
  1992: 5 articles
  1990: 3 articles
  1988: 3 articles
  1987: 4 articles
  1986: 4 articles
  1985: 3 articles
  1980: 2 articles
  1978: 1 article
  1977: 2 articles
  1976: 9 articles
  1975: 1 article
  1974: 2 articles
  1973: 1 article
  1968: 2 articles
  1966: 2 articles
  1958: 1 article
  1957: 1 article
  1956: 2 articles
  1954: 2 articles
Sort order: 650 results in total; search time 15 ms
1.
2.
Received 13 November 1996; in revised form 12 May 1997.
3.
OBJECTIVE: To compare a system that continuously monitors cardiac output by the Fick principle with measurements by the thermodilution technique in pediatric patients. DESIGN: Prospective direct comparison of the two techniques. SETTING: Pediatric intensive care unit of a university hospital. PATIENTS: 25 infants and children, aged 1 week to 17 years (median 10 months), who had undergone open heart surgery. Only patients without an endotracheal tube leak and without a residual shunt were included. METHODS: The Fick-based system uses oxygen consumption measured by a metabolic monitor, together with arterial and mixed venous oxygen saturation measured by pulse and fiberoptic oximetry, to calculate cardiac output every 20 s. INTERVENTIONS: One pair of measurements was taken in every patient. Continuous Fick and thermodilution cardiac output measurements were performed simultaneously, with each examiner blinded to the results of the other method. RESULTS: Cardiac output measurements ranged from 0.21 to 4.55 l/min. A good correlation was found: r² = 0.98; P < 0.001; SEE = 0.41 l/min. The bias, in absolute values and in percent of average cardiac output, was -0.05 l/min or -4.4%, with a precision of 0.32 l/min or 21.3% at 2 SD, respectively. The difference was most marked in a neonate with low cardiac output. CONCLUSION: Continuous measurement of cardiac output by the Fick principle offers a convenient method for the hemodynamic monitoring of unstable infants and children.
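To make the computation concrete, here is a minimal sketch of the direct Fick calculation such a monitor performs every 20 s. This is not the paper's implementation: the function name and example values are illustrative, and dissolved oxygen is deliberately neglected.

```python
def fick_cardiac_output(vo2_ml_min, hb_g_per_l, sao2, svo2):
    """Cardiac output (l/min) via the direct Fick principle.

    vo2_ml_min -- oxygen consumption from the metabolic monitor (ml/min)
    hb_g_per_l -- hemoglobin concentration (g/l)
    sao2, svo2 -- arterial / mixed venous O2 saturations (fractions)
    """
    O2_PER_G_HB = 1.34  # ml of O2 carried per gram of hemoglobin
    # Arteriovenous O2 content difference (ml O2 per litre of blood);
    # dissolved oxygen is neglected in this sketch.
    avdo2_ml_per_l = O2_PER_G_HB * hb_g_per_l * (sao2 - svo2)
    return vo2_ml_min / avdo2_ml_per_l

# Illustrative values only: VO2 250 ml/min, Hb 150 g/l, SaO2 98%, SvO2 73%
print(f"{fick_cardiac_output(250, 150, 0.98, 0.73):.2f} l/min")  # ~4.98
```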
4.
The erucic acid content of broccoli florets, sprouts, and seeds was found to be about 0.8, 320, and 12,100 mg/100 g, respectively. Using the erucic acid limit established for canola oil in the U.S.A. and Canada as a guideline, the estimated dietary intake of erucic acid from florets and sprouts was considered of little consequence, whereas for seeds a relatively small amount (about 35 g/wk) equaled our calculated exposure limit for erucic acid. Additionally, the most complete fatty acid distributions yet published for the various forms of broccoli are presented.
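As a quick plausibility check (my arithmetic, not the paper's), multiplying the two seed figures quoted in the abstract shows the weekly dose implied by the 35 g/wk intake:

```python
# Back-of-envelope check using the two numbers quoted in the abstract.
erucic_mg_per_100g_seed = 12_100   # erucic acid content of broccoli seed
critical_intake_g_per_week = 35    # seed intake said to reach the limit

mg_per_week = erucic_mg_per_100g_seed * critical_intake_g_per_week / 100
print(mg_per_week, mg_per_week / 7)   # 4235.0 mg/week, ~605 mg/day
```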
5.
Pathogenic variants in KCNA2, encoding the voltage-gated potassium channel Kv1.2, have been identified as the cause of an evolving spectrum of neurological disorders. Affected individuals show early-onset developmental and epileptic encephalopathy, intellectual disability, and movement disorders resulting from cerebellar dysfunction. In addition, individuals with a milder course of epilepsy, complicated hereditary spastic paraplegia, and episodic ataxia have been reported. By analyzing phenotypic, functional, and genetic data from published reports and novel cases, we refine and further delineate phenotypic as well as functional subgroups of KCNA2-associated disorders. Carriers of variants that lead to complex, mixed channel dysfunction, combining a gain and a loss of potassium conductance, more often show early developmental abnormalities and an earlier onset of epilepsy than individuals with variants resulting in pure loss- or gain-of-function. We describe seven additional individuals harboring three known KCNA2 variants and the novel variants p.(Pro407Ala) and p.(Tyr417Cys). The location of the variants reported here highlights the importance of the proline(405)–valine(406)–proline(407) (PVP) motif in transmembrane domain S6 as a mutational hotspot. A novel case of self-limited infantile seizures suggests a continuous clinical spectrum of KCNA2-related disorders. Our study provides further insights into the clinical spectrum, genotype–phenotype correlation, variability, and predicted functional impact of KCNA2 variants.
6.
Many business processes are modeled as workflows, which often need to comply with business rules, legal requirements, and authorization policies. Workflow satisfiability is the problem of determining whether there exists a workflow instance that realizes the workflow specification while simultaneously complying with such constraints. This problem has already been studied by the computer security community, with the development of algorithms and the study of their worst-case complexity. These solutions are often tailored to a particular workflow model and are therefore of little or no use in analyzing different models; their worst-case complexities are likely to be an unreliable judge of their feasibility; and they lack support for other forms of analysis, such as determining the smallest number of users required to satisfy a workflow specification. We propose model checking of an NP-complete fragment $\mathsf{LTL}(\mathsf{F})$ of propositional linear-time temporal logic as an alternative solution. We report encodings in $\mathsf{LTL}(\mathsf{F})$ that can compute a set of solutions (thus deciding satisfiability), compute minimal user bases, and compute a safe bound on the resiliency of satisfiability under the removal of users. These theoretical contributions are validated through detailed experiments whose results attest to the viability of the proposed approach.
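To give a flavor of such encodings (an illustrative sketch, not the paper's actual formulas): with propositional variables $x_{t,u}$ read as "user $u$ executes task $t$", a two-task workflow with a separation-of-duty constraint between $t_1$ and $t_2$ is satisfiable exactly when the following $\mathsf{F}$-only formula is:

$$\mathsf{F}\Big(\bigvee_{u\in U} x_{t_1,u}\Big)\;\wedge\;\mathsf{F}\Big(\bigvee_{u\in U} x_{t_2,u}\Big)\;\wedge\;\bigwedge_{u\in U}\neg\big(\mathsf{F}\,x_{t_1,u}\wedge \mathsf{F}\,x_{t_2,u}\big)$$

A satisfying model returned by the model checker then directly yields an assignment of users to tasks, i.e., a concrete workflow instance.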
7.
During software system evolution, software architects intuitively trade off different architecture alternatives for their extra-functional properties, such as performance, maintainability, reliability, security, and usability. Researchers have proposed numerous model-driven prediction methods based on queuing networks or Petri nets, which claim to be more cost-effective and less error-prone than current practice. Practitioners are reluctant to apply these methods because of the unknown prediction accuracy and work effort. We have applied a novel model-driven prediction method called Q-ImPrESS to a large-scale process control system from ABB consisting of several million lines of code. This paper reports the achieved performance prediction accuracy, sensitivity analyses of the reliability predictions, and the effort in person-hours required to achieve these results.
8.
The current Web Services Agreement specification draft proposes a simple request–response protocol for agreement creation that addresses only bilateral offer exchanges. This paper proposes a framework augmenting WS-Agreement to enable negotiations according to a variety of bilateral and multilateral negotiation protocols. The framework design is based on a thorough analysis of negotiation taxonomies from the literature, so that a variety of different negotiation models can be captured within a single, WS-Agreement-compatible framework. To provide the intended flexibility, the proposed protocol takes a two-stage approach, as sketched below: a meta-protocol is first conducted among the interested parties to agree on a common negotiation protocol; the actual negotiation is then carried out in a second step according to the protocol established in the first.
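A minimal sketch of the two-stage idea, with invented names and messages (this is not the WS-Agreement negotiation API):

```python
# Hypothetical sketch of the two-stage negotiation setup: stage 1 agrees on a
# protocol via a meta-protocol, stage 2 negotiates under the chosen protocol.
from dataclasses import dataclass

@dataclass(frozen=True)
class NegotiationProtocol:
    name: str           # e.g. "bilateral-alternating-offers", "english-auction"
    multilateral: bool

def meta_negotiate(ours, theirs):
    """Stage 1: intersect both parties' supported protocols and pick one."""
    common = [p for p in ours if p in set(theirs)]
    if not common:
        raise RuntimeError("no common negotiation protocol; fall back to "
                           "plain WS-Agreement offer exchange")
    return common[0]    # a real implementation would rank by preference

def negotiate(protocol, offer):
    """Stage 2: run the agreed protocol (reduced here to a single offer)."""
    print(f"negotiating via {protocol.name}: initial offer {offer!r}")

bilateral = NegotiationProtocol("bilateral-alternating-offers", False)
auction = NegotiationProtocol("english-auction", True)
chosen = meta_negotiate([auction, bilateral], [bilateral])
negotiate(chosen, {"cpu_hours": 100, "price": 40})
```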
9.
Model-based performance evaluation methods for software architectures can help architects assess design alternatives and save the cost of late life-cycle performance fixes. A recent trend is component-based performance modelling, which aims at creating reusable performance models; a number of such methods have been proposed during the last decade. Their accuracy and the modelling effort they require are heavily influenced by human factors, which are so far hardly understood empirically. Do component-based methods allow performance predictions of comparable accuracy while saving effort in a reuse scenario? We examined three monolithic methods (SPE, umlPSI, Capacity Planning (CP)) and one component-based performance evaluation method (PCM) with regard to their accuracy and effort from the viewpoint of method users. We conducted a series of three experiments (with different levels of control) involving 47 computer science students. In the first experiment, we compared the applicability of the monolithic methods in order to choose one of them for the comparison. In the second experiment, we compared the accuracy and effort of this monolithic method and the component-based method for the model-creation case. In the third, we studied the effort reduction from reusing component-based models. Data were collected from the resulting artefacts, questionnaires, and screen recordings, and were analysed using hypothesis testing, linear models, and analysis of variance. For the monolithic methods, we found that SPE and CP yielded accurate predictions, while umlPSI produced overestimates. Comparing the component-based method PCM with SPE, we found that creating reusable models with PCM takes more (but not drastically more) time than with SPE, and that participants can create accurate models with both techniques. Finally, we found that reusing PCM models can save time, because the reuse effort can be explained by a model that is independent of the inner complexity of a component. The tasks performed in our experiments reflect only a subset of the actual activities involved in applying model-based performance evaluation methods in a software development process. Our results indicate that sufficient prediction accuracy can be achieved with both monolithic and component-based methods, and that the higher effort of component-based performance modelling will indeed pay off when the component models incorporate and hide a sufficient amount of complexity.
10.
3D mapping is very challenging in the underwater domain, especially due to the lack of high-resolution, low-noise sensors. A new spectral registration method is presented that can determine the spatial 6-DOF transformation between pairs of very noisy 3D scans with only partial overlap. The approach is therefore suited to sonar, the predominant underwater sensor. The spectral registration method is based on Phase-Only Matched Filtering (POMF) applied to non-trivially resampled spectra of the 3D data.
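For intuition, here is a minimal sketch of the phase-correlation core underlying phase-only matched filtering, reduced to recovering a 2D integer translation. The paper's method extends this idea to full 6-DOF registration on non-trivially resampled 3D spectra; the function below is illustrative, not the authors' code.

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer translation taking array b onto array a."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    cross = A * np.conj(B)
    cross /= np.abs(cross) + 1e-12     # discard magnitude, keep phase only
    corr = np.fft.ifft2(cross).real    # sharp peak at the relative shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # indices above N/2 correspond to negative shifts (circular wrap-around)
    return tuple(p - n if p > n // 2 else p for p, n in zip(peak, a.shape))

rng = np.random.default_rng(0)
scan = rng.standard_normal((64, 64))
shifted = np.roll(scan, (5, -3), axis=(0, 1))
print(phase_correlation(shifted, scan))   # -> (5, -3)
```

The phase-only normalization is what makes the peak sharp and robust to broadband noise, which is exactly why the technique suits noisy sonar data.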