  Subscription full text   192 articles
  Free   4 articles
  Domestic free   1 article
Electrical engineering   1 article
Chemical industry   38 articles
Metal technology   3 articles
Machinery and instruments   2 articles
Building science   17 articles
Energy and power   3 articles
Light industry   11 articles
Petroleum and natural gas   3 articles
Radio and electronics   18 articles
General industrial technology   27 articles
Metallurgical industry   18 articles
Nuclear technology   2 articles
Automation technology   54 articles
  2023   3 articles
  2022   6 articles
  2021   9 articles
  2020   3 articles
  2019   2 articles
  2018   5 articles
  2017   2 articles
  2016   6 articles
  2015   7 articles
  2014   9 articles
  2013   20 articles
  2012   9 articles
  2011   13 articles
  2010   11 articles
  2009   9 articles
  2008   16 articles
  2007   6 articles
  2006   7 articles
  2005   2 articles
  2004   7 articles
  2003   4 articles
  2002   4 articles
  2001   3 articles
  2000   2 articles
  1999   1 article
  1998   5 articles
  1997   11 articles
  1996   3 articles
  1995   2 articles
  1994   1 article
  1993   1 article
  1992   1 article
  1989   1 article
  1985   1 article
  1980   1 article
  1974   1 article
  1971   1 article
  1969   1 article
  1967   1 article
Sort order: 197 search results (search time: 15 ms)
1.
We show how to compute, in nearly linear time, the smallest rectangle that can enclose any polygon from a given set of polygons; we also present a PTAS for the problem, as well as a linear-time algorithm for the case where the polygons are themselves rectangles. We prove that finding a smallest convex polygon that encloses any of the given polygons is NP-hard, and give a PTAS for minimizing the perimeter of the convex enclosure. We also give efficient algorithms to find the smallest rectangle simultaneously enclosing a given pair of convex polygons.
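The abstract does not describe the construction, but the classical ingredient for minimum-area enclosing rectangles is the fact that, for a convex polygon, some optimal rectangle has a side flush with a polygon edge. Below is a minimal sketch of that idea: a simple O(n²) baseline, not the authors' near-linear algorithm; the function name and input format are illustrative.

```python
import math

def min_area_enclosing_rectangle(hull):
    """Minimum-area enclosing rectangle of a convex polygon.

    Uses the classical fact that an optimal rectangle has one side
    collinear with some polygon edge; tries every edge, O(n^2).
    `hull` is a list of (x, y) vertices in order around the polygon.
    """
    best_area, best_theta = float("inf"), 0.0
    n = len(hull)
    for i in range(n):
        (x0, y0), (x1, y1) = hull[i], hull[(i + 1) % n]
        theta = math.atan2(y1 - y0, x1 - x0)
        c, s = math.cos(-theta), math.sin(-theta)
        # Rotate all vertices so this edge is axis-aligned, then take the bbox.
        xs = [c * x - s * y for x, y in hull]
        ys = [s * x + c * y for x, y in hull]
        area = (max(xs) - min(xs)) * (max(ys) - min(ys))
        if area < best_area:
            best_area, best_theta = area, theta
    return best_area, best_theta  # area and orientation of the flush edge

# Example: a square of side sqrt(2), rotated 45 degrees.
# Optimal area is 2, whereas the axis-aligned bounding box has area 4.
diamond = [(0, 1), (1, 0), (2, 1), (1, 2)]
print(min_area_enclosing_rectangle(diamond))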
2.
There has been growing interest in applying human computation – particularly crowdsourcing techniques – to multimedia, image processing, and computer vision problems that are still too difficult to solve with fully automatic algorithms, yet relatively easy for humans. In this paper we focus on a specific problem – object segmentation in color images – and compare solutions that combine color image segmentation algorithms with human effort, either in the form of an explicit interactive segmentation task or through the implicit collection of valuable human traces with a game. We use Click’n’Cut, a friendly, web-based, interactive segmentation tool that allows segmentation tasks to be assigned to many users, and Ask’nSeek, a game with a purpose designed for object detection and segmentation. The two main contributions of this paper are: (i) we use the results of Click’n’Cut campaigns with different groups of users to examine and quantify the crowdsourcing loss incurred when an interactive segmentation task is assigned to paid crowd-workers, comparing their results to those obtained when computer vision experts perform the same tasks; (ii) since interactive segmentation tasks are inherently tedious and prone to fatigue, we compare the quality of the results obtained with Click’n’Cut to those obtained with a (fun, interactive, and potentially less tedious) game designed for the same purpose. We call this contribution the assessment of the gamification loss, since it refers to how much segmentation quality may be lost when switching to a game-based approach to the same task. We demonstrate that the crowdsourcing loss is significant when all worker data points are used, but decreases substantially (becoming comparable to the quality of expert users performing similar tasks) after a modest amount of data analysis that filters out users whose data are clearly not useful. We also show that, on the other hand, the gamification loss is significantly more severe: result quality drops roughly by half when switching from a focused (yet tedious) task to a more fun and relaxed game environment.
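As a concrete illustration of the kind of worker filtering the abstract alludes to, the sketch below scores each worker's mask against an expert reference with the Jaccard index and drops low scorers. Both the metric and the 0.5 threshold are assumptions for illustration; the abstract does not specify the paper's actual filtering rule.

```python
import numpy as np

def jaccard(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Jaccard index (IoU) between two boolean segmentation masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 1.0

def filter_workers(worker_masks, expert_mask, threshold=0.5):
    """Keep workers whose masks overlap an expert reference well enough.

    worker_masks: dict mapping worker id -> boolean mask.
    The threshold is a placeholder; the paper only says contributions
    that were 'clearly not useful' were filtered out.
    """
    scores = {w: jaccard(m, expert_mask) for w, m in worker_masks.items()}
    kept = {w: m for w, m in worker_masks.items() if scores[w] >= threshold}
    return kept, scores
```

In this framing, the crowdsourcing loss would be the gap between the average Jaccard score of the kept workers and that of the experts on the same images.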
3.
We address the self-calibration of a smooth generic central camera from only two dense rotational flows produced by rotations of the camera about two unknown linearly independent axes passing through the camera centre. We give a closed-form theoretical solution to this problem, and we prove that it can be solved exactly up to a linear orthogonal transformation ambiguity. Using the theoretical results, we propose an algorithm for the self-calibration of a generic central camera from two rotational flows.
4.
This paper presents a novel technique for three-dimensional (3D) human motion capture using a pair of uncalibrated cameras. The user’s five extremities (head, hands, and feet) are extracted, labeled, and tracked after silhouette segmentation. As these are the minimal number of points that enable whole-body gestural interaction, we henceforth refer to them as crucial points. Crucial point candidates are defined as the local maxima of the geodesic distance, with respect to the center of gravity of the actor region, that lie on the silhouette boundary; candidates are subsequently labeled using 3D triangulation and inter-image tracking. Owing to its low computational complexity, the system runs in real time on standard personal computers, with average error rates between 4% and 9% in realistic situations, depending on the context and segmentation quality.
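The crucial-point definition above is directly implementable. The sketch below computes the geodesic distance inside a binary silhouette by breadth-first search from the center of gravity and keeps boundary pixels that are local maxima. It is a minimal reading of the definition, not the authors' implementation (it assumes, for instance, that the centroid falls inside the mask).

```python
from collections import deque
import numpy as np

def geodesic_distance(silhouette: np.ndarray, seed) -> np.ndarray:
    """4-connected BFS distance inside a boolean silhouette mask."""
    dist = np.full(silhouette.shape, -1, dtype=int)
    dist[seed] = 0
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < silhouette.shape[0] and 0 <= nc < silhouette.shape[1]
                    and silhouette[nr, nc] and dist[nr, nc] < 0):
                dist[nr, nc] = dist[r, c] + 1
                q.append((nr, nc))
    return dist

def crucial_point_candidates(silhouette: np.ndarray):
    """Boundary pixels that are local maxima of the geodesic distance
    to the silhouette's center of gravity (assumed to lie inside the mask)."""
    rows, cols = np.nonzero(silhouette)
    seed = (int(round(rows.mean())), int(round(cols.mean())))
    dist = geodesic_distance(silhouette, seed)
    h, w = silhouette.shape
    candidates = []
    for r, c in zip(rows, cols):
        nbrs = [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0) and 0 <= r + dr < h and 0 <= c + dc < w]
        on_boundary = any(not silhouette[p] for p in nbrs)
        is_local_max = all(dist[p] <= dist[r, c] for p in nbrs if silhouette[p])
        if on_boundary and is_local_max and dist[r, c] > 0:
            candidates.append((r, c))
    return candidates
```

For an upright human silhouette, the surviving candidates typically sit at the head, hands, and feet, which is what makes five points sufficient for gestural interaction.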
5.
Numerical modelling of porous flow in a low-permeability matrix with high-permeability inclusions is challenging because the large permeability ratio ill-conditions the finite element system of equations. We propose a coupled model in which Darcy flow is used for the porous matrix and potential flow for the inclusions. We discuss appropriate interface conditions in detail and show that the head drop in the inclusions can be prescribed in a very simple way. Algorithmic aspects are treated in full detail. Numerical examples show that this coupled approach precludes ill-conditioning and is more efficient than a heterogeneous Darcy flow model.
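A minimal one-dimensional illustration of the ill-conditioning that motivates the coupled model: as the inclusion-to-matrix permeability ratio grows, the condition number of a plain Darcy stiffness matrix grows with it. The mesh, values, and assembly below are illustrative, not the paper's formulation.

```python
import numpy as np

def darcy_stiffness(k):
    """1D linear-FE stiffness matrix for -(k h')' = 0 on a uniform mesh,
    Dirichlet ends eliminated; k holds one permeability per element."""
    n = len(k)
    A = np.zeros((n - 1, n - 1))
    for e in range(n):
        for a in (0, 1):
            for b in (0, 1):
                i, j = e - 1 + a, e - 1 + b  # interior node indices
                if 0 <= i < n - 1 and 0 <= j < n - 1:
                    A[i, j] += k[e] * (1 if a == b else -1)
    return A

# The condition number blows up with the permeability contrast -- the
# effect the coupled Darcy/potential-flow formulation is designed to avoid.
for ratio in (1e0, 1e4, 1e8):
    k = np.ones(50)
    k[20:30] = ratio  # a high-permeability inclusion inside the matrix
    cond = np.linalg.cond(darcy_stiffness(k))
    print(f"k_incl/k_matrix = {ratio:.0e}  cond(A) = {cond:.2e}")
```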
6.
Metric Access Methods (MAMs) are indexing techniques that allow working in generic metric spaces. MAMs are therefore especially useful for Content-Based Image Retrieval systems whose features use non-Lp norms as similarity measures, and their inherent hierarchical structure naturally lends itself to the design of image browsers. The Hierarchical Cellular Tree (HCT), a MAM-based indexing technique, provides the starting point of our work. In this paper, we describe some limitations detected in the original formulation of the HCT and propose modifications to both the index building and the search algorithm. First, the covering radius, defined as the distance from the representative to the furthest element in a node, may not cover all the elements belonging to the node’s subtree; we therefore propose to redefine the covering radius as the distance from the representative to the furthest element in the node’s subtree. This new definition is essential to guarantee a correct construction of the HCT. Second, the proposed Progressive Query retrieval scheme can be redesigned to perform the nearest-neighbor operation more efficiently; we propose a new retrieval scheme that exploits the search algorithm used during index building. Furthermore, whereas the evaluation of the HCT in the original work was only subjective, we propose an objective evaluation based on two aspects crucial to any approximate search algorithm: retrieval time and retrieval accuracy. Finally, we illustrate the usefulness of the proposal with some actual applications.
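A sketch of the corrected covering-radius definition, using hypothetical Node and dist structures (the HCT's actual data layout is not given in the abstract): the radius is taken over every element in the subtree, not only the node's own elements.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    representative: object
    elements: list = field(default_factory=list)  # objects stored at this node
    children: list = field(default_factory=list)  # child Nodes

def covering_radius(node: Node, dist) -> float:
    """Corrected covering radius: distance from the node's representative
    to the furthest element anywhere in its *subtree* (not just the node).
    `dist` is the metric of the space, e.g. a non-Lp image similarity."""
    furthest = 0.0
    stack = [node]
    while stack:
        n = stack.pop()
        for x in n.elements:
            furthest = max(furthest, dist(node.representative, x))
        stack.extend(n.children)
    return furthest
```

With the original node-local definition, a query ball could miss a subtree whose deeper elements lie outside the stored radius; taking the maximum over the subtree restores the invariant the search algorithm relies on.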
7.
Molecular screening for pathogenic mutations in sudden cardiac death (SCD)-related genes is common practice for SCD cases. However, test results may be uncertain because variants of unknown significance (VUS) account for up to 70% of all identified variants, owing to a lack of experimental studies. Variants affecting potential splice sites are among the most difficult to interpret. The aim of this study was to examine rare intronic variants identified in exon-flanking sequence, with two main objectives: first, to validate that canonical intronic variants produce aberrant splicing; second, to determine whether rare intronic variants classified as VUS may affect the splicing product. To this end, 28 heart samples from SCD cases carrying rare intronic variants were analyzed by custom panel sequencing of 85 SCD genes. In silico analysis predicted that rare intronic variants affecting the most canonical splice sites would affect the splicing product in 100% of cases, possibly causing aberrant isoforms; experimentally, however, 25% of these cases (1/4) showed normal splicing, contradicting the in silico results. Conversely, for deeper intronic variants, in silico analysis predicted an effect in 0% of cases, yet experiments revealed unpredicted aberrant splicing in >20% (3/14). Thus, deep intronic variants tend to be predicted as having no effect, which, based on our results, may underestimate their effect and hence their pathogenicity classification and the follow-up of family members.
8.
Acetobacter pasteurianus, a member of the Alphaproteobacteria, is an acetic acid-producing bacterium present on sugar-rich substrates such as fruits, flowers, and vegetables, and traditionally used in the production of fermented food. Its preferred acidic habitat makes the structure of the bacterial cell wall interesting to study, as uncommon features can be expected. We used a combination of chemical, analytical, and NMR spectroscopy approaches to define the complete structure of the core oligosaccharide from A. pasteurianus CIP103108 LPS. Interestingly, the core oligosaccharide displays a high concentration of negatively charged groups, a structural feature that might help reinforce the bacterial membrane.
9.
Universal Access in the Information Society - This study focuses on a case study developed at a higher education institution, which comprises developing a new virtual teaching unit (VTU) aimed at...
10.
Multipliers are routinely used for impact evaluation of private projects and public policies at the national and subnational levels. Oosterhaven and Stelder (J Reg Sci 42(3), 533–543, 2002) correctly pointed out the misuse of standard ‘gross’ multipliers and proposed the concept of the ‘net’ multiplier as a remedy for this bad practice. We prove that their proposal is not well founded, by showing that its supporting theorems are faulty in both statement and proof. The proofs fail due to an analytical error, and the theorems themselves cannot be salvaged, as generic (non-curiosum) counterexamples demonstrate. We also provide a general analytical framework for multipliers and use it to show that standard ‘gross’ multipliers are all that is needed within the interindustry model: they follow the causal logic of the economic model, are well defined and independent of exogenous shocks, and are interpretable as predictors of change.
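For reference, the standard ‘gross’ output multiplier the authors defend is the column sum of the Leontief inverse. A small numerical illustration with a hypothetical two-sector technical coefficients matrix (the values are not from the paper):

```python
import numpy as np

# Technical coefficients matrix A: A[i, j] is the input from sector i
# needed per unit of output of sector j. Illustrative numbers only.
A = np.array([[0.2, 0.3],
              [0.4, 0.1]])

# Leontief inverse: total (direct + indirect) output requirements.
L = np.linalg.inv(np.eye(2) - A)

# Gross output multipliers: column sums of the Leontief inverse, i.e.
# total economy-wide output generated per unit of final demand in a sector.
gross_multipliers = L.sum(axis=0)
print(gross_multipliers)  # approx. [2.167, 1.833]
```

These multipliers depend only on the coefficients matrix, which is the sense in which the paper calls them well defined and independent of exogenous shocks.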