Full-text access type
Paid full text | 46 articles |
Free | 1 article |
Subject classification
Chemical industry | 6 articles |
Architecture science | 1 article |
Energy and power engineering | 1 article |
Light industry | 4 articles |
Radio electronics | 5 articles |
General industrial technology | 10 articles |
Metallurgical industry | 11 articles |
Automation technology | 9 articles |
Publication year
2022 | 1 article |
2021 | 1 article |
2020 | 1 article |
2018 | 1 article |
2014 | 1 article |
2013 | 1 article |
2012 | 1 article |
2011 | 4 articles |
2010 | 2 articles |
2009 | 2 articles |
2007 | 1 article |
2006 | 4 articles |
2005 | 1 article |
2004 | 2 articles |
2003 | 2 articles |
2001 | 2 articles |
2000 | 1 article |
1999 | 2 articles |
1998 | 2 articles |
1997 | 2 articles |
1996 | 3 articles |
1994 | 3 articles |
1993 | 1 article |
1986 | 1 article |
1983 | 2 articles |
1977 | 1 article |
1976 | 2 articles |
Sort by: 47 results found, search time: 15 ms
1.
The multimod application framework: a rapid application development tool for computer aided medicine  Total citations: 1 (self-citations: 0, citations by others: 1)
Viceconti M, Zannoni C, Testi D, Petrone M, Perticoni S, Quadrani P, Taddei F, Imboden S, Clapworthy G 《Computer methods and programs in biomedicine》2007,85(2):138-151
This paper describes a new application framework (OpenMAF) for the rapid development of multimodal applications in computer-aided medicine. MAF applications are multimodal in data, in representation, and in interaction. The framework supports almost any type of biomedical data, including DICOM datasets, motion-capture recordings, and data from computer simulations (e.g. finite element modeling). The interactive visualization approach (multimodal display) helps the user interpret complex datasets by providing multiple representations of the same data. In addition, the framework allows multimodal interaction by supporting the simultaneous use of different input-output devices such as 3D trackers, stereoscopic displays, haptic hardware, and speech recognition/synthesis systems. The framework has been designed to run smoothly even on computers with limited power, but it can take advantage of all available hardware capabilities. The framework is based on a collection of portable libraries and can be compiled on any platform that supports OpenGL, including Windows, Mac OS X, and any flavor of Unix/Linux. Similar articles
2.
Testi D, Quadrani P, Petrone M, Zannoni C, Fontana F, Viceconti M 《Computer methods and programs in biomedicine》2004,75(3):213-220
This work is aimed at developing an innovative simulation environment, the JPD integrated design environment (JIDE), to support and improve the design of standard joint implants. The conceptual workflow starts with the design of a new implant using conventional CAD programmes and concludes with the generation of a report that summarises how well the new implant fits a database of human bone anatomies. For each dataset in the database, the JPD application calculates a set of quantitative indicators that support the designer in evaluating the design on a statistical basis. The resulting system is thus directed at prosthesis manufacturers and addresses a market segment that is expected to grow steadily in the future. Similar articles
3.
Adnan Ghribi, Andrea Tartari, Eric Bréelle, Jean-Christophe Hamilton, Silvia Galli, Massimo Gervasi, Michel Piat, Sebastiano Spinelli, Mario Zannoni 《Journal of Infrared, Millimeter and Terahertz Waves》2010,31(1):88-99
In order to study an original detection architecture for future cosmology experiments based on wide-band adding interferometry, we have tested a single-baseline bench instrument based on commercial components. The instrument has been characterized in the laboratory with a wide-band power detection setup. A method that allows us to reconstruct the complete transfer function of the interferometer has been developed and validated with measurements. This scheme makes it possible to propagate the spurious effects of each component to the output of the detector. Similar articles
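As a rough illustrative sketch (not the instrument model from the paper), propagating each component's effect to the detector output can be expressed, to first order, as a product of per-component complex transfer functions across the observation band. Every component response and parameter below is invented for illustration.

```python
import numpy as np

freqs = np.linspace(10e9, 20e9, 256)  # assumed observation band, in Hz

def amplifier(f, gain_db=30.0):
    """Flat-gain amplifier with a constant group delay (toy model)."""
    gain = 10 ** (gain_db / 20.0)
    return gain * np.exp(-2j * np.pi * f * 1e-11)

def bandpass(f, f0=15e9, bw=6e9):
    """Gaussian band-pass filter (toy model); bw is the FWHM."""
    return np.exp(-0.5 * ((f - f0) / (bw / 2.355)) ** 2)

# End-to-end transfer function: the product of the chain, so a spurious
# feature in any single component propagates multiplicatively to the output.
total = amplifier(freqs) * bandpass(freqs)

# Band-averaged power response seen by a wide-band power detector.
band_power = np.mean(np.abs(total) ** 2)
```

With measured (rather than modelled) per-component responses, the same multiplication reconstructs the instrument's overall transfer function.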
4.
M Viceconti, C Zannoni, D Testi, A Cappello 《Computer methods and programs in biomedicine》1999,59(3):159-166
In modelling applications such as custom-made implant design, it is useful to have a surface representation of the bone anatomy rather than the voxel-based representation generated by tomography systems. Voxel-to-surface conversion is usually done by 2D segmentation of the image stack. However, other methods allow a direct 3D segmentation of the CT or MRI dataset. In the present work, two of these methods, the Standard Marching Cubes (SMC) and the Discretized Marching Cubes (DMC) algorithms, were compared in terms of local accuracy when used to reconstruct the geometry of a human femur. The SMC method was found to be more accurate than the DMC method: it was capable of reconstructing the inner and outer geometry of a human femur with a peak error lower than 0.9 mm and an average error comparable to the pixel size (0.3 mm). However, the large number of triangles generated by the algorithm may limit its adoption in many modelling applications. The peak error of the DMC algorithm was 1.6 mm, but it produced approximately 70% fewer triangles than the SMC method. From the results of this study, it may be concluded that three-dimensional segmentation algorithms are useful not only in visualisation applications but also in the creation of geometric models. Similar articles
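The peak and average surface errors quoted above can be illustrated with a minimal nearest-neighbour metric between two point-sampled surfaces. This is a simplified stand-in for such an accuracy comparison, not the authors' validation pipeline; the sphere test data are invented.

```python
import numpy as np

def surface_errors(reconstructed, reference):
    """Peak and mean error between two point-sampled surfaces.

    For each reconstructed vertex, take the distance to its nearest
    reference vertex (a common, if simplified, surface-error metric).
    Both inputs are (N, 3) arrays of vertex coordinates.
    """
    # Brute-force pairwise distances; fine for small illustrative meshes.
    d = np.linalg.norm(reconstructed[:, None, :] - reference[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return nearest.max(), nearest.mean()

# Toy check: points on a unit sphere vs. a slightly perturbed copy.
rng = np.random.default_rng(0)
ref = rng.normal(size=(500, 3))
ref /= np.linalg.norm(ref, axis=1, keepdims=True)
rec = ref + rng.normal(scale=0.01, size=ref.shape)

peak, mean = surface_errors(rec, ref)  # peak >= mean by construction
```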
5.
6.
7.
8.
Optimal CT scanning plan for long-bone 3-D reconstruction  Total citations: 1 (self-citations: 0, citations by others: 1)
Digital computed tomography (CT) data are widely used in the three-dimensional (3-D) reconstruction of bone geometry and density features for 3-D modelling purposes. During in vivo CT data acquisition, the number of scans must be limited in order to protect patients from the risks related to X-ray absorption. The aim of this work is to automatically define, given a finite number of CT slices, the scanning plan that returns the optimal 3-D reconstruction of a bone segment from in vivo acquired CT images. An optimization algorithm based on a Discard-Insert-Exchange technique has been developed. In the proposed method, the optimal scanning sequence is searched for by minimizing the overall reconstruction error of a two-dimensional (2-D) pre-scanning image: an anterior-posterior (AP) X-ray projection of the bone segment. This approach has been validated in vitro on three different femurs. The 3-D reconstruction errors obtained by optimizing the scanning plan on the 2-D pre-scanning images and on the corresponding 3-D datasets have been compared. Both 2-D and 3-D datasets have been reconstructed by linear interpolation along the longitudinal axis. Results show that direct 3-D optimization yields root-mean-square reconstruction errors that are only 4%-7% lower than those of the 2-D-optimized plan, thus proving that 2-D optimization provides a good suboptimal scanning plan for 3-D reconstruction. Furthermore, the 3-D reconstruction errors given by the optimized scanning plan and by a standard radiological protocol for long bones have been compared. Results show that the optimized plan yields 20%-50% lower 3-D reconstruction errors. Similar articles
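A toy sketch of the 2-D optimization idea: pick K slice positions along the bone axis so that linearly interpolating a pre-scan AP profile from those positions minimizes the RMS reconstruction error. A simple random-exchange heuristic stands in here for the full Discard-Insert-Exchange scheme, and the profile is invented.

```python
import numpy as np

def rms_error(profile, slices):
    """RMS error of reconstructing the profile by linear interpolation."""
    z = np.arange(len(profile))
    approx = np.interp(z, slices, profile[slices])
    return np.sqrt(np.mean((approx - profile) ** 2))

def optimize_plan(profile, k, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    n = len(profile)
    # Start from a uniform plan; endpoints fixed so interpolation spans the bone.
    plan = np.unique(np.linspace(0, n - 1, k).astype(int))
    best = rms_error(profile, plan)
    for _ in range(iters):
        cand = plan.copy()
        i = rng.integers(1, len(cand) - 1)   # never move the endpoints
        cand[i] = rng.integers(1, n - 1)     # exchange one interior slice
        cand = np.unique(cand)
        if len(cand) == len(plan):           # skip moves that create duplicates
            e = rms_error(profile, cand)
            if e < best:
                plan, best = cand, e
    return plan, best

# Toy AP profile: smooth shaft plus a sharp condylar bump (made up).
z = np.linspace(0, 1, 400)
profile = np.sin(2 * z) + np.exp(-((z - 0.8) / 0.05) ** 2)

uniform = np.unique(np.linspace(0, 399, 20).astype(int))
plan, optimized = optimize_plan(profile, 20)
```

Because only improving exchanges are accepted, the optimized plan never does worse than the uniform one, and extra slices migrate toward the region of rapid anatomical change.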
9.
Traditional software engineering dictates the use of modular and structured programming and top-down stepwise refinement techniques that reduce the amount of variability arising in the development process by establishing standard procedures to be followed while writing software. This focusing leads to reduced variability in the resulting products, due to the use of standardized constructs. Genetic programming (GP) performs heuristic search in the space of programs. Programs produced through the GP paradigm emerge as the result of simulated evolution and are built through a bottom-up process, incrementally augmenting their functionality until a satisfactory level of performance is reached. Can we automatically extract knowledge from the GP programming process that can be useful to focus the search and reduce product variability, thus leading to a more effective use of the available resources? An answer to this question is investigated with the aid of cultural algorithms. A new system, cultural algorithms with genetic programming (CAGP), is presented. The system has two levels. The first is the pool of genetic programs (population level), and the second is a knowledge repository (belief set) that is built during the GP run and is used to guide the search process. The microevolution within the population brings about potentially meaningful characteristics of the programs for the achievement of the given task, such as properties exhibited by the best performers in the population. CAGP extracts these features and represents them as the set of the current beliefs. Beliefs correspond to constraints that all the genetic operators and programs must follow. Interaction between the two levels occurs in one direction through the extraction process and, in the other, through the modulation of an individual's program parameters according to which, and how many, of the constraints it follows. 
CAGP is applied to solve an instance of the symbolic regression problem, in which a function of one variable needs to be discovered. The results of the experiments show an overall improvement in the average performance of CAGP over GP alone and a significant reduction in the complexity of the produced solutions. Moreover, the execution time required by CAGP is comparable to the time required by GP alone. Similar articles
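A minimal cultural-algorithm loop in the spirit of the two-level scheme described above, greatly simplified and not the authors' CAGP system: the "programs" here are just polynomial coefficient vectors fitted to a one-variable target, and the belief space is a per-coefficient interval learned from the elite, which constrains (clips) subsequent variation.

```python
import numpy as np

rng = np.random.default_rng(1)
xs = np.linspace(-1, 1, 50)
target = 2.0 * xs ** 2 - 1.0              # function to rediscover

def fitness(coeffs):
    """Mean-squared error of the candidate polynomial; lower is better."""
    return np.mean((np.polyval(coeffs, xs) - target) ** 2)

pop = rng.normal(scale=2.0, size=(40, 3))  # population level
history = []

for gen in range(60):
    scores = np.array([fitness(c) for c in pop])
    history.append(scores.min())
    elite = pop[np.argsort(scores)[:10]]
    # Acceptance function: the best performers update the belief intervals.
    beliefs = np.stack([elite.min(0), elite.max(0)])
    # Influence function: offspring are mutated, then clipped to the beliefs.
    children = elite[rng.integers(0, 10, size=30)]
    children = children + rng.normal(scale=0.1, size=(30, 3))
    children = np.clip(children, beliefs[0], beliefs[1])
    pop = np.vstack([elite, children])     # elitism: best survive unchanged

best = min(pop, key=fitness)
```

The belief intervals here play the role of the constraints extracted from the best performers; in full CAGP the beliefs instead constrain the genetic operators acting on program trees.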
10.