91.
A software product line (SPL) is a family of programs that share assets from a common code base. The programs of an SPL can be distinguished in terms of features, which represent units of program functionality that satisfy stakeholders’ requirements. The features of an SPL can be bound either statically at compile time or dynamically at run time. Both binding times are used in SPL development and have different advantages. For example, dynamic binding provides high flexibility, whereas static binding supports fine-grained customizability without any impact on performance (e.g., for use on embedded systems). However, contemporary techniques for implementing SPLs force a programmer to choose the binding time when designing an SPL and to mix different implementation techniques when multiple binding times are needed. We present an approach that integrates static and dynamic feature binding seamlessly. It allows a programmer to implement an SPL once and to decide per feature, at deployment time, whether it should be bound statically or dynamically. Dynamic binding usually introduces an overhead in resource consumption and performance. We reduce this overhead by statically merging features that are used together into dynamic binding units. A program can be configured at run time by composing binding units on demand. We use feature models to ensure that only valid feature combinations can be selected at compile time and at run time. We provide a compiler and evaluate our approach on two non-trivial SPLs.
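The difference between the two binding times can be illustrated with a minimal sketch. The names (`Feature`, `compose`) and the decorator-style composition are illustrative assumptions, not the paper's actual compiler or API: with static binding a deselected feature is dropped at composition time, while with dynamic binding every feature stays in the program and checks its flag on each call.

```python
# Hypothetical sketch: one feature implementation, two binding times.
class Feature:
    def __init__(self, name, refinement):
        self.name = name
        self.refinement = refinement  # function wrapping a base behavior

def compose(base, features, selected, static=True):
    """Compose selected features onto a base function.

    static=True : deselected features are dropped at composition
                  (deployment) time -- no runtime check remains.
    static=False: every feature is kept; each checks its flag on
                  every call (the dynamic-binding overhead).
    """
    result = base
    for f in features:
        if static:
            if f.name in selected:
                result = f.refinement(result)  # bound once, fixed forever
        else:
            prev = result
            refined = f.refinement(prev)
            def wrapper(*args, _n=f.name, _r=refined, _p=prev, **kw):
                return _r(*args, **kw) if _n in selected else _p(*args, **kw)
            result = wrapper
    return result

def base_log(msg):
    return msg

upper = Feature("UPPER", lambda nxt: (lambda m: nxt(m).upper()))
selected = {"UPPER"}
static_fn = compose(base_log, [upper], selected, static=True)
dynamic_fn = compose(base_log, [upper], selected, static=False)
```

Mutating `selected` after composition changes the behavior of `dynamic_fn` but not of `static_fn`, which is exactly the trade-off the abstract describes.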
92.
Variant-rich software systems offer a large degree of customization, allowing users to configure the target system according to their preferences and needs. Facing high degrees of variability, these systems often employ variability models to explicitly capture user-configurable features (e.g., system options) and the constraints they impose. The explicit representation of features allows them to be referenced in different variation points across different artifacts, enabling the latter to vary according to specific feature selections. In such settings, the evolution of variability models interplays with the evolution of related artifacts, requiring the two to evolve together, or coevolve. Interestingly, little is known about how such coevolution occurs in real-world systems, as existing research has focused mostly on the evolution of variability models alone. Furthermore, existing techniques supporting variability evolution are usually validated with randomly generated variability models or evolution scenarios that do not stem from practice. As the community lacks a deep understanding of how variability evolution occurs in real-world systems and how it relates to the evolution of different kinds of software artifacts, it is not surprising that industry reports that existing tools and solutions are ineffective, as they do not handle the complexity found in practice. Attempting to mitigate this lack of knowledge and to support tool builders with insights on how variability models coevolve with other artifact types, we study a large and complex real-world variant-rich software system: the Linux kernel. Specifically, we extract variability-coevolution patterns capturing changes in the variability model of the Linux kernel together with subsequent changes in Makefiles and C source code. From the analysis of the patterns, we report findings concerning evolution principles found in the kernel, and we reveal deficiencies in existing tools and theory when handling the changes captured by our patterns.
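The kind of model/build-file consistency that such coevolution patterns must preserve can be sketched with a toy checker. The Kconfig and Makefile snippets below are made up for illustration; a feature referenced from a Makefile but absent from the variability model is exactly the dangling state a coevolution step has to repair.

```python
import re

# Made-up Kconfig-style variability model and Makefile fragment.
kconfig = """
config USB_SUPPORT
    bool "USB support"
config DEBUG_FS
    bool "Debug filesystem"
"""

makefile = """
obj-$(CONFIG_USB_SUPPORT) += usb.o
obj-$(CONFIG_SOUND) += sound.o
"""

def declared_features(kconfig_text):
    """Feature names declared in the variability model."""
    return set(re.findall(r"^config\s+(\w+)", kconfig_text, re.M))

def referenced_features(makefile_text):
    """Feature names referenced from the build file."""
    return set(re.findall(r"CONFIG_(\w+)", makefile_text))

def dangling_references(kconfig_text, makefile_text):
    """Features used in the Makefile but missing from the model."""
    return referenced_features(makefile_text) - declared_features(kconfig_text)
```

Here `SOUND` is referenced by the Makefile but never declared, so a consistent coevolution step would either declare the feature in the model or remove the build rule.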
93.
The critical cooling rate and fluorescence properties of lithium (Li) disilicate glasses and glass–ceramics, doped with 2.0 wt% CeO2 and with up to 0.7 wt% V2O5 and 0.3 wt% MnO2 added as colorants, were investigated. The critical cooling rates, Rc, of the glass melts were determined using differential thermal analysis and were found to depend on the relative concentrations of V2O5 and MnO2, decreasing from 25 ± 3 to 16 ± 3 °C/min. Annealed glasses were heat treated first at 670°C and then at 850°C to form Li metasilicate and Li disilicate glass–ceramics, respectively. The fluorescence intensities of the Ce-doped glasses and glass–ceramics decrease by a factor of 100 with the addition of the transition metal oxides. This optical quenching effect is explained by the association of the Ce3+ ions with the transition metal ions in the residual glassy phase of the glass–ceramics.
94.
We apply a genetic algorithm to optimize the pump cavity of a complex miniaturized diode-pumped laser, seeking a balance between efficient energy transfer of the pump light and homogeneous illumination of the laser crystal. These two goals conflict, which makes this a complex optimization problem. The genome determines the geometry of the internal optical elements of the pump cavity in which a laser rod is placed. After optimization of the internal optical elements, homogeneous illumination over the crystal length and a coupling efficiency of 59% were achieved. The results showed that genetic algorithms can find solutions and blueprints for laser pump cavities of consistent quality.
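The GA mechanics (selection, one-point crossover, point mutation) can be sketched in a few lines. In the paper the fitness function would run an optical simulation of the pump cavity; here a toy quadratic stands in for it, so only the algorithmic skeleton is illustrative, and all names and parameter values are assumptions.

```python
import random

def fitness(genome):
    # Toy stand-in for the optical simulation: best when every gene is 0.5.
    return -sum((g - 0.5) ** 2 for g in genome)

def evolve(pop_size=40, genes=6, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genes)        # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(genes)             # Gaussian point mutation
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.05)))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because the top half of each generation survives unchanged, the best fitness never decreases; replacing `fitness` with a cavity ray-trace is the only change needed to move from toy to application.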
95.
96.
Preface     
Computing and Visualization in Science
97.
Programming experience is an important confounding parameter in controlled experiments on program comprehension. In the literature, ways to measure or control for programming experience vary; often, researchers neglect it or do not specify how they controlled for it. We set out to find a well-defined understanding of programming experience and a way to measure it. From published comprehension experiments, we extracted questions that assess programming experience. In a controlled experiment, we compared the answers of computer-science students to these questions with their performance in solving program-comprehension tasks. We found that self-estimation seems to be a reliable way to measure programming experience. Furthermore, we applied exploratory and confirmatory factor analyses to extract and evaluate a model of programming experience. With our analysis, we initiate a path toward validly and reliably measuring and describing programming experience to better understand and control its influence in program-comprehension experiments.
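The core validation step, relating self-estimated experience to comprehension performance, amounts to a rank correlation. The sketch below implements Spearman's rho from scratch (with tie handling) on made-up data; the numbers are purely illustrative, not the study's results.

```python
def ranks(xs):
    """Average ranks of xs, ties receiving the mean of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

self_estimate = [2, 4, 3, 5, 1, 4]   # made-up Likert self-ratings
task_score    = [3, 8, 6, 9, 2, 7]   # made-up comprehension scores
rho = spearman(self_estimate, task_score)
```

A rho near 1 on such data is what "self-estimation seems reliable" looks like numerically; a rank correlation is preferred over Pearson here because Likert answers are ordinal.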
98.
MapReduce frameworks allow programmers to write distributed, data-parallel programs that operate on multisets. These frameworks offer considerable flexibility to support various kinds of programs and data. To understand the essence of the programming model better and to provide a rigorous foundation for optimizations, we present an abstract, functional model of MapReduce along with a number of customization options. We demonstrate that the MapReduce programming model can also represent programs that operate on lists, which differ from multisets in that the order of elements matters. Along with the functional model, we offer a cost model that allows programmers to estimate and compare the performance of MapReduce programs. Based on the cost model, we introduce two transformation rules aiming at performance optimization of MapReduce programs, which also demonstrates the usefulness of our model. In an exploratory study, we assess the impact of applying these rules to two applications. The functional model and the cost model provide insights at a proper level of abstraction into why the optimization works. Copyright © 2014 John Wiley & Sons, Ltd.
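The abstract functional model can be written down in a few lines: a mapper emits key-value pairs, pairs are grouped by key (the shuffle), and a reducer folds each group. This sketch mirrors the general model described above, not the paper's precise formalization or any concrete framework's API.

```python
from itertools import groupby

def map_reduce(mapper, reducer, inputs):
    """Functional MapReduce over a multiset of inputs."""
    pairs = [kv for item in inputs for kv in mapper(item)]
    pairs.sort(key=lambda kv: kv[0])             # shuffle: group by key
    return {
        k: reducer(k, [v for _, v in group])
        for k, group in groupby(pairs, key=lambda kv: kv[0])
    }

# Classic word count as the usage example.
def wc_map(line):
    return [(word, 1) for word in line.split()]

def wc_reduce(word, counts):
    return sum(counts)

result = map_reduce(wc_map, wc_reduce, ["a b a", "b a"])
```

Because the input is treated as a multiset, the result is independent of the order of `inputs`, which is exactly the property that distinguishes the multiset model from the list-based variant discussed in the abstract.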
99.
This paper describes the development and laboratory evaluation of a light-based, high-resolution target movement monitor that can be used to measure convergence in underground excavations with submillimeter accuracy. The monitor is based on a unique measurement technology developed at the University of Missouri-Rolla. The system has the potential for high-accuracy detection and monitoring of positional changes (as small as 0.1 mm with the current laboratory implementation) presented by many types of targets located close to or far from the monitor. The sensitivity of the system to camera resolution and the error analysis as a function of the laser incident angle are described. The system utilizes custom computer processing, a high-resolution camera, and laser light to measure the distance to a target accurately in one dimension, but it can also be used for two-dimensional surface profiling or for analyzing three-dimensional movement of the target. The ability of this optical system to measure ground movement with submillimeter accuracy will allow for monitoring ground convergence in areas of high traffic or in areas that are inaccessible for the installation of traditional ground-movement sensing devices.
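A hypothetical back-of-the-envelope version of the incident-angle sensitivity mentioned above: if a spot shift of n pixels at a calibration of s mm/pixel is observed on a surface struck obliquely, a common lever-arm approximation scales the inferred displacement by 1/cos(theta). This relation and the function name are assumptions for illustration; the paper's actual error analysis and calibration may differ.

```python
import math

def displacement_mm(pixel_shift, mm_per_pixel, incident_angle_deg):
    """Assumed model: on-target displacement inferred from a camera
    spot shift, degraded by an oblique laser incident angle."""
    return (pixel_shift * mm_per_pixel
            / math.cos(math.radians(incident_angle_deg)))
```

Under this toy model a 2-pixel shift at 0.05 mm/pixel reads as 0.1 mm at normal incidence but 0.2 mm at 60 degrees, which is why accuracy figures must be qualified by incident angle.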
100.
Calculating microstructures for technical materials is an ambitious task that involves not only different length scales but also the complex thermodynamic properties of multicomponent and multiphase alloys. We report some of the recent progress in simulating microstructure evolution in multicomponent steels using the multiphase-field software MICRESS®. Several applications are reviewed in order to demonstrate the current status of applied phase-field techniques.
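To give a flavor of the evolution equations such codes solve, here is a generic 1D phase-field sketch: an explicit Allen–Cahn step with a double-well potential, where the order parameter phi distinguishes two phases. This is a textbook toy, not MICRESS; the real software couples multiple phases, components, and thermodynamic databases, and all parameter values below are arbitrary assumptions.

```python
def step(phi, dx=1.0, dt=0.1, eps=1.0, w=1.0):
    """One explicit Allen-Cahn update with fixed boundaries:
    d(phi)/dt = eps * laplacian(phi) - f'(phi),
    f(phi) = w * phi^2 * (1 - phi)^2  (double well at phi = 0 and 1)."""
    n = len(phi)
    new = phi[:]
    for i in range(1, n - 1):
        lap = (phi[i - 1] - 2 * phi[i] + phi[i + 1]) / dx**2
        dwell = w * 2 * phi[i] * (1 - phi[i]) * (1 - 2 * phi[i])  # f'(phi)
        new[i] = phi[i] + dt * (eps * lap - dwell)
    return new

# Sharp interface between phase 0 (left) and phase 1 (right),
# relaxed toward a diffuse equilibrium profile.
phi = [0.0] * 10 + [1.0] * 10
for _ in range(50):
    phi = step(phi)
```

The double-well term keeps phi near its phase values 0 and 1 while the gradient term widens the interface to a finite thickness set by eps and w; multiphase-field methods generalize this to one such field per phase plus constraints and driving forces from thermodynamics.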