Fee-based full text: 116 articles
Free full text: 1 article
By subject:
  Electrical engineering: 2
  Chemical industry: 12
  Metalworking: 1
  Machinery and instrumentation: 2
  Building science: 1
  Light industry: 8
  Radio and electronics: 8
  General industrial technology: 24
  Metallurgical industry: 22
  Automation technology: 37
By year:
  2023: 5   2022: 5   2021: 2   2020: 3   2019: 1   2018: 2   2017: 4
  2016: 2   2015: 5   2014: 3   2013: 2   2012: 9   2011: 7   2010: 6
  2009: 9   2008: 8   2007: 6   2006: 3   2005: 1   2004: 2   2003: 4
  2002: 2   2001: 5   2000: 2   1999: 5   1998: 3   1997: 1   1996: 3
  1995: 2   1994: 1   1993: 1   1990: 1   1989: 1   1980: 1
117 results in total; search took 15 ms.
1.
Felix 01 (F01) is a bacteriophage originally isolated by Felix and Callow that lyses almost all Salmonella strains and has been widely used as a diagnostic test for this genus. Molecular information about this phage has been entirely lacking. In the present study, the DNA of the phage was found to be a double-stranded linear molecule of about 80 kb; 11.5 kb of it has been sequenced, and in this region the A + T content is 60%. There are relatively few restriction endonuclease cleavage sites in the native genome, and clones show that this is due to their absence rather than to modification. A restriction map of the genome has been constructed. The ends of the molecule cannot be ligated although they contain 5' phosphates. At least 60% of the genome must encode proteins. The sequenced portion contains many open reading frames, and these are tightly packed together. They have been examined for homology to published proteins, but only 1 of the 17 shows similarity to known proteins. F01 is therefore the prototype of a new phage family. On the basis of restriction sites, codon usage, and the distribution of nonsense codons in the unused reading frames, a strong case can be made for natural selection that responds to mRNA structure and function.
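For readers unfamiliar with this kind of reading-frame analysis, the short Python sketch below counts nonsense (stop) codons in each of the three forward reading frames of a DNA string; the sequence and the code are purely illustrative and are not part of the F01 study.

```python
# Illustrative sketch only: counts stop codons in the three forward reading
# frames of a DNA string. The sequence below is a made-up placeholder, not
# F01 data.
STOP_CODONS = {"TAA", "TAG", "TGA"}

def stop_codons_per_frame(seq: str) -> dict[int, int]:
    """Return the number of stop codons found in frames 0, 1 and 2."""
    seq = seq.upper()
    counts = {}
    for frame in range(3):
        codons = (seq[i:i + 3] for i in range(frame, len(seq) - 2, 3))
        counts[frame] = sum(1 for c in codons if c in STOP_CODONS)
    return counts

if __name__ == "__main__":
    example = "ATGAAATAGCCCTGATTTGGGTAACGT"  # placeholder fragment
    print(stop_codons_per_frame(example))
```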
2.
Although concentrated animal feeding operations constantly generate physiologically active steroidal hormones, little is known of their environmental fate. Estrogen and testosterone concentrations in groundwater and their distribution in sediments below a dairy-farm wastewater lagoon were therefore determined and compared to a reference site located upgradient of the farm. Forward simulations of flow as well as of estrogen and testosterone transport were conducted based on data from the sediment profile obtained during drilling of a monitoring well below the dairy-farm waste lagoon. Testosterone and estrogen were detected in sediments to depths of 45 and 32 m, respectively. Groundwater samples were directly impacted by the dairy farm, as evidenced by elevated concentrations of nitrate, chloride, testosterone, and estrogen compared to the reference site. Modeling potential transport of hormones in the vadose zone via advection, dispersion, and sorption could not explain the depths at which estrogen and testosterone were found, suggesting that other transport mechanisms influence hormone transport under field conditions. These mechanisms may involve interactions between hormones and manure as well as preferential flow paths, leading to enhanced transport rates. Such interactions should be further investigated to understand the processes regulating hormone transport in the subsurface environment and parametrized to forecast the long-term fate and transport of steroidal hormones.
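To make the modeling step concrete, the following Python sketch runs a minimal one-dimensional advection-dispersion simulation with linear sorption (retardation). All parameter values, the explicit finite-difference scheme, and the boundary conditions are illustrative assumptions, not the authors' model.

```python
import numpy as np

# Minimal 1-D advection-dispersion transport sketch with linear sorption
# (retardation). All parameter values are illustrative assumptions.
L, nz = 50.0, 500            # domain depth [m], number of cells
dz = L / nz
v = 0.05                     # pore-water velocity [m/day]
D = 0.01                     # dispersion coefficient [m^2/day]
R = 5.0                      # retardation factor from linear sorption
dt = 0.4 * dz / v            # time step satisfying a simple stability bound
t_end = 3650.0               # simulate roughly ten years

c = np.zeros(nz)             # dissolved hormone concentration profile
c[0] = 1.0                   # constant-concentration source at the lagoon

t = 0.0
while t < t_end:
    adv = -v * np.gradient(c, dz)                      # advective term
    disp = D * np.gradient(np.gradient(c, dz), dz)     # dispersive term
    c = c + dt * (adv + disp) / R                      # retarded update
    c[0] = 1.0                                         # re-impose boundary
    t += dt

# Depth at which concentration has dropped below 1% of the source value.
front = np.argmax(c < 0.01) * dz
print(f"approximate 1% front depth after {t_end:.0f} days: {front:.1f} m")
```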
3.
4.
    
Provision of automated support for planning protocol-directed therapy requires a computer program to take as input clinical data stored in an electronic patient-record system and to generate as output recommendations for therapeutic interventions and laboratory testing that are defined by applicable protocols. This paper presents a synthesis of research carried out at Stanford University to model the therapy-planning task and to demonstrate a component-based architecture for building protocol-based decision-support systems. We have constructed general-purpose software components that (1) interpret abstract protocol specifications to construct appropriate patient-specific treatment plans; (2) infer from time-stamped patient data higher-level, interval-based, abstract concepts; (3) perform time-oriented queries on a time-oriented patient database; and (4) allow acquisition and maintenance of protocol knowledge in a manner that facilitates efficient processing both by humans and by computers. We have implemented these components in a computer system known as EON. Each of the components has been developed, evaluated, and reported independently. We have evaluated the integration of the components as a composite architecture by implementing T-HELPER, a computer-based patient-record system that uses EON to offer advice regarding the management of patients who are following clinical trial protocols for AIDS or HIV infection. A test of the reuse of the software components in a different clinical domain demonstrated rapid development of a prototype application to support protocol-based care of patients who have breast cancer.
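As an illustration of the temporal-abstraction idea described above (mapping time-stamped raw data to higher-level, interval-based concepts), here is a minimal Python sketch; the states, thresholds, and data are invented for the example and do not reflect EON's actual knowledge base or implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Minimal temporal-abstraction sketch (not EON itself): classify time-stamped
# raw values into qualitative states and merge consecutive samples with the
# same state into interval-based abstractions. Thresholds and data are
# illustrative assumptions.

@dataclass
class Interval:
    state: str
    start: datetime
    end: datetime

def classify(value: float) -> str:
    if value < 500:
        return "LOW"
    if value < 1000:
        return "MEDIUM"
    return "HIGH"

def abstract_intervals(samples: list[tuple[datetime, float]]) -> list[Interval]:
    intervals: list[Interval] = []
    for ts, value in sorted(samples):
        state = classify(value)
        if intervals and intervals[-1].state == state:
            intervals[-1].end = ts            # extend the current interval
        else:
            intervals.append(Interval(state, ts, ts))
    return intervals

if __name__ == "__main__":
    t0 = datetime(2024, 1, 1)
    cell_counts = [(t0 + timedelta(days=30 * i), v)
                   for i, v in enumerate([650, 620, 480, 450, 430, 900])]
    for iv in abstract_intervals(cell_counts):
        print(iv.state, iv.start.date(), "->", iv.end.date())
```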
5.
    
Behavioral neuroscience underwent a technology-driven revolution with the emergence of machine-vision and machine-learning technologies. These technological advances facilitated the generation of high-resolution, high-throughput capture and analysis of complex behaviors. Therefore, behavioral neuroscience is becoming a data-rich field. While behavioral researchers use advanced computational tools to analyze the resulting datasets, the search for robust and standardized analysis tools is still ongoing. At the same time, the field of genomics exploded with a plethora of technologies which enabled the generation of massive datasets. This growth of genomics data drove the emergence of powerful computational approaches to analyze these data. Here, we discuss the composition of a large behavioral dataset, and the differences and similarities between behavioral and genomics data. We then give examples of genomics-related tools that might be of use for behavioral analysis and discuss concepts that might emerge when considering the two fields together.
6.
Immune electron microscopy (IEM) is one of the fastest and most sensitive methods for the detection and diagnosis of viruses. The technique is based on the formation of immune complexes of the virus with its corresponding antibody. In IEM, optimal precipitation depends on a correct virus-to-antibody ratio, and a prozone effect can occur when antibody is in excess. These problems can be overcome by using the solid-phase immune electron microscopy (SPIEM) technique, in which the antibody is attached to a particle that is used for 'fishing' the virus to be examined out of the suspension. After low-speed centrifugation, the preparation is processed for observation in either the transmission electron microscope or the scanning electron microscope. In 'positive' samples the virus is seen attached to the surface of the particle. We report here results with S. aureus as the solid phase for the detection of Sindbis virus; the anti-Sindbis gamma globulins are attached to the bacteria by means of protein A present on their surface.
7.
In our previous work, we introduced a computational architecture that effectively supports the tasks of continuous monitoring and of aggregation querying of complex, domain-meaningful, time-oriented concepts and patterns (temporal abstractions) in environments featuring large volumes of continuously arriving and accumulating time-oriented raw data. Examples include provision of decision support in clinical medicine, making financial decisions, detecting anomalies and potential threats in communication networks, and integrating intelligence information from multiple sources. In this paper, we describe the general, domain-independent but task-specific problem-solving method underlying our computational architecture, which we refer to as incremental knowledge-based temporal abstraction (IKBTA). The IKBTA method incrementally computes temporal abstractions by maintaining the persistence and validity of continuously computed temporal abstractions from arriving time-stamped data. We focus on the computational framework underlying our reasoning method, provide well-defined semantic and knowledge requirements for incremental inference, which utilizes a logical model of time, data, and high-level abstract concepts, and provide a detailed analysis of the computational complexity of our approach.
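The sketch below illustrates the incremental flavor of this approach with a toy example: each arriving sample either extends the most recent open abstraction (persistence within a maximal-gap window) or closes it and opens a new one. It is a heavily simplified illustration, not the authors' IKBTA method; the classifier and gap window are assumed, domain-specific inputs.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Callable, Optional

# Minimal incremental temporal-abstraction sketch (illustrative only). A new
# sample either extends the most recent open abstraction or starts a new one,
# depending on state equality and a domain-specific persistence window.

@dataclass
class Abstraction:
    state: str
    start: datetime
    end: datetime

class IncrementalAbstractor:
    def __init__(self, classify: Callable[[float], str], max_gap: timedelta):
        self.classify = classify
        self.max_gap = max_gap
        self.current: Optional[Abstraction] = None
        self.closed: list[Abstraction] = []

    def add(self, ts: datetime, value: float) -> None:
        state = self.classify(value)
        cur = self.current
        if cur and cur.state == state and ts - cur.end <= self.max_gap:
            cur.end = ts                      # persistence: extend in place
        else:
            if cur:
                self.closed.append(cur)       # the old abstraction is final
            self.current = Abstraction(state, ts, ts)

# Example: temperature readings arriving as a stream, one per call to add().
abstractor = IncrementalAbstractor(
    classify=lambda v: "FEVER" if v >= 38.0 else "NORMAL",
    max_gap=timedelta(hours=12),
)
t0 = datetime(2024, 1, 1)
for hours, temp in [(0, 37.0), (6, 38.5), (10, 39.0), (30, 38.2)]:
    abstractor.add(t0 + timedelta(hours=hours), temp)
print(abstractor.closed, abstractor.current)
```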
8.
Model differencing is an important activity in model-based development processes. Differences need to be detected, analyzed, and understood to evolve systems and explore alternatives. Two distinct approaches have been studied in the literature: syntactic differencing, which compares the concrete or abstract syntax of models, and semantic differencing, which compares models in terms of their meaning. Syntactic differencing identifies change operations that transform the syntactical representation of one model to the syntactical representation of the other. However, it does not explain their impact on the meaning of the model. Semantic model differencing is independent of syntactic changes and presents differences as elements in the semantics of one model but not the other. However, it does not reveal the syntactic changes causing these semantic differences. We define Diffuse, a language-independent, abstract framework, which relates syntactic change operations and semantic difference witnesses. We formalize fundamental relations of necessary, exhibiting, and sufficient sets of change operations and analyze their properties. We further demonstrate concrete instances of the Diffuse framework for three different popular modeling languages, namely class diagrams, activity diagrams, and feature models. The Diffuse framework provides a novel foundation for combining syntactic and semantic differencing.
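As a rough illustration of how syntactic change operations can be related to semantic difference witnesses, the sketch below encodes one such relation under strong simplifying assumptions: change operations are plain Python functions, and the caller supplies a `semantics` function mapping a model to its set of witnesses. This is not the Diffuse formalization itself; the toy models and the "exhibits" check are invented for the example.

```python
from typing import Callable, Iterable, Set, TypeVar

Model = TypeVar("Model")
Witness = TypeVar("Witness")

# Illustrative sketch of one Diffuse-style relation (not the authors'
# formalization): a set of syntactic change operations "exhibits" a semantic
# difference witness w if, after applying the operations to model m1, w lies
# in the semantics of the edited model but not in the semantics of m1.

def exhibits(
    ops: Iterable[Callable[[Model], Model]],
    m1: Model,
    witness: Witness,
    semantics: Callable[[Model], Set[Witness]],
) -> bool:
    edited = m1
    for op in ops:
        edited = op(edited)                  # apply the syntactic changes
    return witness in semantics(edited) and witness not in semantics(m1)

# Toy example: "models" are sets of class names, an operation adds a class,
# and the "semantics" of a model is simply the set of instantiable classes.
if __name__ == "__main__":
    def add_customer(m: frozenset) -> frozenset:
        return m | {"Customer"}

    print(exhibits([add_customer], frozenset({"Order"}), "Customer", set))
```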
9.
Acta Informatica - Reactive synthesis for the GR(1) fragment of LTL has been implemented and studied in many works. In this work we present and evaluate a list of heuristics to potentially reduce...
10.
Whereas explicit measures of the self-concept typically demonstrate a negative bias in depressed individuals, implicit measures such as the Implicit Association Test (IAT) have revealed an opposite, positive bias. To address this inconsistent pattern, the authors examined, using a novel paradigm, mental set maintenance (i.e., the difficulty of maintaining active a required mental set) and set operation (the efficiency of executing the mental set while it is maintained). Dysphoric (N = 33) and nondysphoric (N = 30) participants alternated between an IAT focusing on self-reference and a matched neutral task. Nondysphorics had greater difficulty in maintaining a negative self-reference task compared to a neutral task. Conversely, dysphorics did not exhibit such difficulty, and they maintained a negative self-reference task more easily than nondysphorics. No group differences were found in smoothness of set operation. These results suggest that the shield protecting nondysphorics from maintaining negative mental sets is absent in dysphorics. (PsycINFO Database Record (c) 2010 APA, all rights reserved)