101.
In this paper we describe the evaluation of a personalised information system for patients with cancer. Our system dynamically generates hypertext pages that explain treatments, diseases, measurements, etc. related to the patient's condition, using information in the patient's medical record as the basis for the tailoring. We describe the results of a controlled trial comparing this system with a non-personalised one. The trial shows significant results concerning the patients' preferences for personalised information. We discuss the implications of our evaluation and results for the development and evaluation of future personalised systems, and adaptive hypertext systems in particular.
102.
The relationship between written and spoken words is convoluted in languages with a deep orthography such as English, and therefore it is difficult to devise explicit rules for generating the pronunciations of unseen words. Pronunciation by analogy (PbA) is a data-driven method of constructing pronunciations for novel words from concatenated segments of known words and their pronunciations. PbA performs relatively well with English and outperforms several other proposed methods. However, the method inherently generates several candidate pronunciations, and its performance depends critically on a good scoring function to choose the best among them. Previous PbA algorithms have used several different scoring heuristics, such as the product of the frequencies of the component pronunciations of the segments, or the number of different segmentations that yield the same pronunciation, and different combinations of these methods, to evaluate the candidate pronunciations. In this article, we instead propose to use a probabilistically justified scoring rule. We show that this principled approach alone yields better accuracy than any previously published PbA algorithm. Furthermore, combined with certain ad hoc modifications motivated by earlier algorithms, the performance can in some cases be further increased.
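As a rough illustration of the scoring problem described in this abstract (not the article's exact probabilistic rule), the Python sketch below scores each candidate pronunciation by summing, over all segmentations that produce it, the product of the component segments' relative frequencies. The pronunciations and frequencies are invented for the example.

```python
from functools import reduce

def score_candidate(segmentations):
    """Score one candidate pronunciation: sum over all segmentations
    that yield it of the product of its segments' relative frequencies."""
    total = 0.0
    for seg_freqs in segmentations:
        total += reduce(lambda a, b: a * b, seg_freqs, 1.0)
    return total

# Hypothetical candidates: each maps to its list of segmentations,
# each segmentation given as the relative frequencies of its segments.
candidates = {
    "/g l @U t/": [[0.4, 0.2], [0.1, 0.3, 0.5]],  # two segmentations agree
    "/g l aU t/": [[0.05, 0.2]],                   # one rare segmentation
}
best = max(candidates, key=lambda c: score_candidate(candidates[c]))
```

Summing over segmentations rewards pronunciations that many independent analogies agree on, which is one of the heuristics the article's principled rule subsumes.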
103.
Forest canopy cover (C) is needed in forest area monitoring and for many ecological models. Airborne scanning lidar sensors can produce fairly accurate C estimates even without field training data. However, optical satellite images are more cost-efficient for large area inventories. Our objective was to use airborne lidar data to obtain accurate estimates of C for a set of sample plots in a boreal forest and to generalize C for a large area using a satellite image. The normalized difference vegetation index (NDVI) and reduced simple ratio (RSR) were calculated from the satellite image and used as predictors in the regressions. RSR, which combines information from the red, near-infrared, and shortwave infrared bands, provided the best performance in terms of absolute root mean square error (RMSE) (7.3%) in the training data. NDVI produced a markedly larger RMSE (10.0%). However, in an independent validation data set, RMSE increased (13.0–17.1%) because the systematic sample of validation plots contained more variation than the training plots. Our results are better than those reported earlier, which is probably explained by more consistent C estimates derived from the lidar. Our approach provides an efficient method for creating C maps for large areas.
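The two predictors named in this abstract have standard definitions; the sketch below uses the common formulations (NDVI from red and near-infrared reflectance, RSR as the simple ratio scaled by where the shortwave-infrared reflectance falls between its scene minimum and maximum), which may differ in calibration detail from the exact bands used in the study.

```python
def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def rsr(nir, red, swir, swir_min, swir_max):
    """Reduced simple ratio: the simple ratio NIR/red, scaled down as the
    SWIR reflectance approaches its scene maximum (open, dry canopies)."""
    return (nir / red) * (swir_max - swir) / (swir_max - swir_min)
```

Because the SWIR term suppresses the background signal, RSR tends to track canopy cover more linearly than NDVI, consistent with the lower training RMSE reported above.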
104.
In this article, a primal-dual interior-point algorithm for semidefinite programming that can be used for analysing, e.g., polytopic linear differential inclusions is tailored to be more computationally efficient. The key to the speedup is to allow inexact search directions in the interior-point algorithm. These are obtained by aborting an iterative solver for computing the search directions prior to convergence. A convergence proof for the algorithm is given. Two different preconditioners for the iterative solver are proposed. The speedup is in many cases more than an order of magnitude. Moreover, the proposed algorithm can be used to analyse much larger problems than is possible with off-the-shelf interior-point solvers.
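The core idea, obtaining an inexact search direction by stopping an iterative solver early, can be sketched minimally as below. This uses plain conjugate gradients on a generic symmetric positive definite system and omits the paper's preconditioners and the surrounding interior-point iteration; the matrix and tolerances are illustrative.

```python
import numpy as np

def truncated_cg(A, b, tol=1e-8, max_iter=20):
    """Conjugate gradients for A x = b (A symmetric positive definite),
    aborted after at most max_iter iterations. A small max_iter yields
    an inexact solution, i.e., an inexact search direction."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        if np.sqrt(rs) < tol:
            break
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_inexact = truncated_cg(A, b, max_iter=1)  # aborted early: inexact direction
x_full = truncated_cg(A, b)                 # run to (near) convergence
```

The interior-point convergence proof in the article is what licenses using `x_inexact`-style directions: the outer algorithm tolerates a controlled residual, so each inner solve can stop well before full accuracy.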
105.
In the current drug discovery process, the synthesis of compound libraries is separated from biological screenings both conceptually and technologically. One of the reasons is that parallel on-chip high-throughput purification of synthesized compounds is still a major challenge. Here, on-chip miniaturized high-throughput liquid–liquid extraction in volumes down to 150 nL with efficiency comparable to or better than large-scale extraction utilizing separation funnels is demonstrated. The method is based on automated and programmable merging of arrays of aqueous nanoliter droplets with organic droplets. Multi-step extraction performed simultaneously or with changing conditions as well as handling of femtomoles of compounds are demonstrated. In addition, the extraction efficiency is analyzed with a fast optical readout as well as matrix-assisted laser desorption ionization-mass spectrometry on-chip detection. The new massively parallel and miniaturized purification method adds another important tool to the chemBIOS concept combining chemical combinatorial synthesis with biological screenings on the same miniaturized droplet microarray platform, which will be essential to accelerate drug discovery.
106.
The effect of microstructure on the creep properties and the failure mechanism of SnAgCu solder joints was studied. Single overlap shear specimens made of FR-4 printed circuit boards (PCBs) with organic solderability preservative (OSP), NiAu, and immersion Sn surface finish were reflow-soldered with hypoeutectic, eutectic, and hypereutectic SnAgCu solder paste. Creep tests of the solder joints were performed at 85°C and 105°C under constant load. The effect of microstructure on the creep behavior of the joints was studied by examining the fracture surfaces and cross-sectional samples of the tested joints. Results show that the intermetallic compound at the interface between the PCB and solder affects the fracture behavior of SnAgCu solder joints, thus creating a significant difference in the creep properties of solder joints on different surface finishes. Composition of SnAgCu solder was also found to affect the creep properties of the joints.
107.
Tumor targeting pharmaceuticals will play a crucial role in future pharma pipelines. The targeted thorium conjugate (TTC) therapeutic platform could provide real benefit to patients, whereby targeting moieties such as monoclonal antibodies are radiolabelled with the alpha-emitting radionuclide thorium-227 (227Th, t1/2 = 18.7 days). A potential problem could be the accumulation of the long-lived daughter nuclide radium-223 (223Ra, t1/2 = 11.4 days) in the drug product during manufacturing and distribution. Therefore, the level of 223Ra must be standardized before administration to the patient. The focus of this study has been the removal of 223Ra, as the other progenies will have a very limited stay in the formulation. In this study, the purification of TTCs labeled with decayed 227Th has been explored. Columns packed with a strong cation exchange resin have been used to sequester 223Ra. The separation of TTC from 223Ra has been evaluated as influenced by both formulation and process parameters with a design of experiments (DOE) study, including citrate or acetate buffer, pH, buffer concentration, presence or absence of pABA + EDTA, resin amount and sodium chloride concentration. The aim was to achieve a separation with high sorption of 223Ra and accompanying low TTC sorption. The results were analyzed by multivariate analysis. Four regression models of TTC and 223Ra sorption from citrate and acetate buffered formulations were developed. The predictive accuracy of sorption in the four statistical models was given by standard deviations and confidence intervals. The TTC sorption in citrate and acetate buffered formulations was affected by identical variables, and the variation in TTC sorption was comparable for the two models. However, the DOE variables had a significantly stronger impact on the 223Ra sorption in citrate buffered formulations than in acetate buffer.
An optimal separation with a TTC sorption below 25% and 223Ra sorption above 90% can be achieved in both citrate and acetate buffered formulations. Stability studies of radiochemical purity (RCP) indicated that the measured 227Th values may be partly due to free 227Th and not TTC, but the results indicate that TTC stability may be controlled by optimizing formulation parameters. Hence, the sorption data and the regression models presented must be reviewed and further explored with regard to what is known about the stability of the TTC in the different buffered formulations.
108.
During the last 20 years, video games have become very popular and widely adopted in our society. However, despite the growth of the video game industry, there is a lack of interoperability that would allow developers to exchange their information freely and to form stronger partnerships. In this paper we present the Video Game Ontology (VGO), a model for enabling interoperability among video games and enhancing data analysis of gameplay information. We describe the creation process of the ontology, its conceptualization and its evaluation. In addition, we demonstrate the applicability of the Video Game Ontology in action with three example games that take advantage of the created ontology. We also demonstrate the use of the VGO in enabling interoperability among the example games.
109.
The memories used for embedded microprocessor devices consume a large portion of the system’s power. The power dissipation of the instruction memory can be reduced by using code compression methods, which may require the use of variable length instruction formats in the processor. The power-efficient design of variable length instruction fetch and decode is challenging for static multiple-issue processors, which aim for low power consumption on embedded platforms. The memory-side power savings using compression are easily lost on inefficient fetch unit design. We propose an implementation for instruction template-based compression and two instruction fetch alternatives for variable length instruction encoding on transport triggered architecture, a static multiple-issue exposed data path architecture. With applications from the CHStone benchmark suite, the compression approach reaches an average compression ratio of 44% at best. We show that the variable length fetch designs reduce the number of memory accesses and often allow the use of a smaller memory component. The proposed compression scheme reduced the energy consumption of synthesized benchmark processors by 15% and area by 33% on average.
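The template idea behind the compression scheme can be sketched as a toy, assuming a hypothetical instruction set: instructions that match a known template are replaced by a short template index, and the rest are emitted verbatim behind an escape tag. The real encoding, bit widths, and template selection are specific to the transport triggered architecture and are not reproduced here.

```python
def template_compress(instructions, templates):
    """Replace each instruction that matches a known template with its
    (shorter) template index; emit the rest verbatim with an escape tag.
    The compression ratio improves as more instructions hit templates."""
    index = {t: i for i, t in enumerate(templates)}
    out = []
    for ins in instructions:
        if ins in index:
            out.append(("T", index[ins]))   # short template reference
        else:
            out.append(("RAW", ins))        # uncompressed fallback
    return out
```

In hardware, the fetch unit must then reassemble full instructions from the template table, which is exactly why the paper pairs the compression scheme with dedicated variable-length fetch designs.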
110.
In this paper, an image fusion algorithm is proposed for a multi-aperture camera. Such a camera is a feasible alternative to a traditional Bayer filter camera in terms of image quality, camera size and camera features. The camera consists of several camera units, each having dedicated optics and a color filter. The main challenge of a multi-aperture camera arises from the fact that each camera unit has a slightly different viewpoint. Our image fusion algorithm corrects the parallax error between the sub-images using a disparity map, which is estimated from the single-spectral images. We improve the disparity estimation by combining matching costs over multiple views using trifocal tensors. Images are matched using two alternative matching costs: mutual information and the Census transform. We also compare two different disparity estimation methods, graph cuts and semi-global matching. The results show that the overall quality of the fused images is close to that of the reference images.
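One of the matching costs named above, the Census transform, can be sketched compactly: each pixel is described by a bit vector of neighbour-versus-centre comparisons, and two pixels are matched by the Hamming distance between their bit vectors. The window size and the wrap-around border handling below are illustrative choices, not necessarily those used in the paper.

```python
import numpy as np

def census_transform(img, radius=1):
    """Per-pixel census bit vector: one bit per neighbour in a
    (2*radius+1)^2 window, set when the neighbour is darker than the
    centre. Borders wrap around (np.roll) for simplicity in this toy."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint32)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out = (out << 1) | (shifted < img).astype(np.uint32)
    return out

def hamming_cost(c1, c2):
    """Matching cost between two census codes: number of differing bits."""
    return bin(int(c1) ^ int(c2)).count("1")
```

Because the census code depends only on local intensity ordering, the Hamming cost is robust to the per-channel gain differences that arise when each camera unit carries a different color filter, which is why it suits single-spectral matching.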