Full-text access type
Paid full text | 661 articles |
Free | 13 articles |
Free within China | 1 article |
Subject classification
Electrical engineering | 6 articles |
Chemical industry | 114 articles |
Metalworking | 4 articles |
Machinery and instrumentation | 4 articles |
Building science | 10 articles |
Energy and power engineering | 34 articles |
Light industry | 29 articles |
Hydraulic engineering | 4 articles |
Radio electronics | 106 articles |
General industrial technology | 72 articles |
Metallurgical industry | 33 articles |
Nuclear technology | 1 article |
Automation technology | 258 articles |
Publication year
2024 | 1 article |
2023 | 3 articles |
2022 | 5 articles |
2021 | 11 articles |
2020 | 3 articles |
2019 | 7 articles |
2018 | 14 articles |
2017 | 10 articles |
2016 | 27 articles |
2015 | 16 articles |
2014 | 29 articles |
2013 | 41 articles |
2012 | 41 articles |
2011 | 54 articles |
2010 | 48 articles |
2009 | 47 articles |
2008 | 43 articles |
2007 | 36 articles |
2006 | 35 articles |
2005 | 23 articles |
2004 | 21 articles |
2003 | 20 articles |
2002 | 21 articles |
2001 | 12 articles |
2000 | 8 articles |
1999 | 10 articles |
1998 | 16 articles |
1997 | 12 articles |
1996 | 6 articles |
1995 | 2 articles |
1994 | 7 articles |
1993 | 4 articles |
1992 | 4 articles |
1990 | 2 articles |
1989 | 8 articles |
1987 | 2 articles |
1986 | 1 article |
1985 | 3 articles |
1984 | 5 articles |
1983 | 1 article |
1981 | 2 articles |
1979 | 3 articles |
1978 | 1 article |
1977 | 2 articles |
1976 | 1 article |
1975 | 2 articles |
1973 | 2 articles |
1972 | 1 article |
1971 | 1 article |
1970 | 1 article |
Sort order: a total of 675 results found (search time: 15 ms)
61.
Papadakis N, Bugeau A. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011, 33(1): 144-157
This work presents a new method for tracking and segmenting, over time, interacting objects within an image sequence. One major contribution of the paper is the formalization of the notion of visible and occluded parts. For each object, we aim at tracking these two parts. Assuming that the velocity of each object is driven by a dynamical law, predictions can be used to guide the successive estimations. Separating these predicted areas into good and bad parts with respect to the final segmentation, and representing the objects with their visible and occluded parts, permits handling partial and complete occlusions. To achieve this tracking, a label is assigned to each object and an energy function representing the multilabel problem is minimized via graph cuts optimization. This energy contains terms based on image intensities, which enable segmenting and regularizing the visible parts of the objects. It also includes terms dedicated to the management of the occluded and disappearing areas, which are defined on the areas of prediction of the objects. The results on several challenging sequences demonstrate the strength of the proposed approach.
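The multilabel energy mentioned in the abstract can be illustrated with a minimal sketch: a data term built from per-pixel label costs plus a Potts smoothness term over neighbouring pixels. This is only a generic segmentation energy, not the authors' full formulation (which also includes occlusion, prediction, and disappearance terms); the function names, weights, and toy data below are illustrative assumptions.

```python
import numpy as np

def multilabel_energy(labels, unary_cost, smoothness_weight=1.0):
    """Evaluate a generic multilabel segmentation energy for a candidate labeling.

    labels     : (H, W) int array, one label per pixel (object id or background).
    unary_cost : (H, W, K) array with the cost of giving each of K labels to each
                 pixel (e.g. the negative log-likelihood of the pixel intensity).
    Returns the data term plus a Potts smoothness term over 4-connected neighbours.
    """
    h, w = labels.shape
    # Data term: cost of the chosen label at every pixel.
    data = unary_cost[np.arange(h)[:, None], np.arange(w)[None, :], labels].sum()
    # Potts term: penalise label changes between horizontal and vertical neighbours.
    smooth = (labels[:, 1:] != labels[:, :-1]).sum() + (labels[1:, :] != labels[:-1, :]).sum()
    return data + smoothness_weight * smooth

# Toy usage: two labels on a 4x4 image, labelled greedily per pixel.
rng = np.random.default_rng(0)
costs = rng.random((4, 4, 2))
labeling = costs.argmin(axis=2)
print(multilabel_energy(labeling, costs))
```

In the paper, an energy of this kind (extended with occlusion and prediction terms) is minimized with graph-cuts optimization rather than merely evaluated for a fixed labeling as done here.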
62.
63.
Nikos D. Lagaros, Manolis Papadrakakis. Computer Methods in Applied Mechanics and Engineering, 2008, 198(1): 28-41
Performance-Based Design (PBD) methodologies are the contemporary trend in designing better and more economical earthquake-resistant structures, where the main objective is to achieve more predictable and reliable levels of safety and operability against natural hazards. On the other hand, reliability-based optimization (RBO) methods directly account for the variability of the design parameters in the formulation of the optimization problem. The objective of this work is to incorporate PBD methodologies under seismic loading into the framework of RBO, in conjunction with innovative tools for treating computationally intensive problems of real-world structural systems. Two types of random variables are considered: those that influence the level of seismic demand and those that affect the structural capacity. Reliability analysis is required for the assessment of the probabilistic constraints within the RBO formulation. The Monte Carlo Simulation (MCS) method is considered the most reliable method for estimating the probabilities of exceedance or other statistical quantities, albeit, in many cases, with excessive computational cost. First- or Second-Order Reliability Methods (FORM, SORM) constitute alternative approaches, but they require an explicit limit-state function, which is not available for complex problems. In this study, in order to find the most efficient methodology for performing reliability analysis in conjunction with performance-based optimum design under seismic loading, a neural network approximation of the limit-state function is proposed and combined with either MCS or FORM for handling the uncertainties. These two methodologies are applied to RBO problems with sizing and topology design variables, resulting in a reduction of the computational effort by two orders of magnitude.
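As a rough sketch of the surrogate-assisted reliability analysis described above, the snippet below trains a small neural network on a handful of "expensive" limit-state evaluations and then runs a cheap Monte Carlo simulation on the surrogate to estimate a probability of exceedance. The limit-state function, network size, and sample counts are invented for illustration; the actual study couples the surrogate with seismic structural analyses and also considers FORM as an alternative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def limit_state(x):
    """Stand-in for an expensive structural analysis: g(x) <= 0 denotes failure.
    Purely illustrative; a real PBD study would call a seismic response analysis here."""
    return 6.0 - x[:, 0] ** 2 - 1.5 * x[:, 1]

# Step 1: a few "expensive" evaluations to train the neural-network surrogate.
x_train = rng.normal(size=(200, 2))
surrogate = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
surrogate.fit(x_train, limit_state(x_train))

# Step 2: cheap Monte Carlo simulation on the surrogate to estimate P(g(X) <= 0).
x_mc = rng.normal(size=(200_000, 2))
p_failure = np.mean(surrogate.predict(x_mc) <= 0.0)
print(f"estimated probability of exceedance: {p_failure:.4f}")
```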
64.
Leong Hou U, Kyriakos Mouratidis, Nikos Mamoulis. The VLDB Journal: The International Journal on Very Large Data Bases, 2010, 19(2): 141-160
Consider a set of servers and a set of users, where each server has a coverage region (i.e., an area of service) and a capacity (i.e., a maximum number of users it can serve). Our task is to assign every user to one server subject to the coverage and capacity constraints. To offer the highest quality of service, we wish to minimize the average distance between users and their assigned servers. This is an instance of a well-studied problem in operations research, termed optimal assignment. Even though there exist several solutions for the static case (where user locations are fixed), there is currently no method for dynamic settings. In this paper, we consider the continuous assignment problem (CAP), where an optimal assignment must be constantly maintained between mobile users and a set of servers. The fact that the users are mobile necessitates real-time reassignment so that the quality of service remains high (i.e., their distance from their assigned servers is minimized). The large scale and the time-critical nature of the targeted applications require fast CAP solutions. We propose an algorithm that exploits the geometric characteristics of the problem and significantly accelerates the initial assignment computation and its subsequent maintenance. Our method applies to different cost functions (e.g., average squared distance) and to any Minkowski distance metric (e.g., Euclidean, L1 norm, etc.).
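For the static case referred to above, one textbook way to solve a capacitated optimal assignment is to replicate each server into as many slots as its capacity and run the Hungarian algorithm on the resulting rectangular cost matrix. The sketch below does exactly that, ignoring coverage regions and the continuous maintenance that is the paper's actual contribution; the function name and toy data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def capacitated_assignment(users, servers, capacities):
    """Assign each user to one server, respecting capacities, minimising total distance.

    users      : (U, 2) user coordinates
    servers    : (S, 2) server coordinates
    capacities : (S,)   maximum number of users per server (sum must be >= U)
    Returns an array of length U holding the chosen server index for each user.
    """
    # Expand every server into `capacity` identical slots, then solve a plain
    # rectangular assignment problem with the Hungarian algorithm.
    slot_owner = np.repeat(np.arange(len(servers)), capacities)
    cost = cdist(users, servers[slot_owner])          # (U, total_slots) distances
    rows, cols = linear_sum_assignment(cost)
    assignment = np.empty(len(users), dtype=int)
    assignment[rows] = slot_owner[cols]
    return assignment

users = np.random.default_rng(2).random((6, 2))
servers = np.array([[0.2, 0.2], [0.8, 0.8]])
print(capacitated_assignment(users, servers, capacities=np.array([3, 3])))
```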
65.
Felix Bießmann, Frank C. Meinecke, Arthur Gretton, Alexander Rauch, Gregor Rainer, Nikos K. Logothetis, Klaus-Robert Müller. Machine Learning, 2010, 79(1-2): 5-27
Data recorded from multiple sources sometimes exhibit non-instantaneous couplings. For simple data sets, cross-correlograms may reveal the coupling dynamics, but when dealing with high-dimensional multivariate data there is no such measure as the cross-correlogram. We propose a simple algorithm based on Kernel Canonical Correlation Analysis (kCCA), termed temporal kCCA (tkCCA), which computes a multivariate temporal filter that links one data modality to another. The filters can be used to compute a multivariate extension of the cross-correlogram, the canonical correlogram, between data sources that have different dimensionalities and temporal resolutions. The canonical correlogram reflects the coupling dynamics between the two sources. The temporal filter reveals which features in the data give rise to these couplings and when they do so. We present results from simulations and neuroscientific experiments showing that tkCCA yields easily interpretable temporal filters and correlograms. In the experiments, we simultaneously performed electrode recordings and functional magnetic resonance imaging (fMRI) in the primary visual cortex of a non-human primate. While electrode recordings reflect brain activity directly, fMRI provides only an indirect view of neural activity via the Blood Oxygen Level Dependent (BOLD) response. Thus it is crucial for our understanding and the interpretation of fMRI signals in general to relate them to direct measures of neural activity acquired with electrodes. The results computed by tkCCA confirm recent models of the hemodynamic response to neural activity and allow for a more detailed analysis of neurovascular coupling dynamics.
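The canonical-correlogram idea can be approximated with ordinary (non-kernel) CCA applied at a range of time lags, as sketched below with scikit-learn. This is a simplification of tkCCA, which learns a single multivariate temporal filter jointly over all lags; the function name and the toy data are assumptions made for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def canonical_correlogram(x, y, max_lag=10):
    """First canonical correlation between x and a lag-shifted y, for each lag.

    x : (T, Dx) array, e.g. band-limited electrode features
    y : (T, Dy) array, e.g. voxel time courses
    Returns a dict {lag: correlation}; a positive lag means y trails x.
    """
    result = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            xs, ys = x[: len(x) - lag], y[lag:]
        else:
            xs, ys = x[-lag:], y[: len(y) + lag]
        xc, yc = CCA(n_components=1).fit_transform(xs, ys)
        result[lag] = np.corrcoef(xc[:, 0], yc[:, 0])[0, 1]
    return result

# Toy usage: the second modality is a mixed, delayed, noisy copy of the first.
rng = np.random.default_rng(3)
neural = rng.normal(size=(500, 8))
bold = np.roll(neural @ rng.normal(size=(8, 4)), 3, axis=0) + 0.5 * rng.normal(size=(500, 4))
corr = canonical_correlogram(neural, bold, max_lag=6)
print(max(corr, key=corr.get))   # expected to peak near lag 3
```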
66.
67.
Nikos Koutsias, Magdalini Pleniou, Giorgos Mallinis, Foula Nioti, Nikolas I. Sifakis. International Journal of Remote Sensing, 2013, 34(20): 7049-7068
This study presents a new semi-automatic method to map burned areas using multi-temporal Landsat Thematic Mapper (TM) and Enhanced Thematic Mapper Plus (ETM+) images. The method consists of a set of rules that are valid especially when the post-fire satellite image has been captured shortly after the fire event. The overall accuracies of the method when applied to two case studies, Mt Parnitha and Samos Island in Greece, were 95.69% and 93.98%, respectively. The commission and omission errors for Mt Parnitha were 6.92% and 10.24%, while those for Samos Island were 3.97% and 8.80%, respectively. Of the two error types, it is preferable to minimize omission errors, since commission errors can be easily identified as part of product quality assessment and algorithm tuning procedures. The rule-based approach minimizes human intervention and makes it possible to run the mapping algorithm for a series of images that would otherwise require extensive time investment. When burned areas are not captured correctly, it is possible either to make adjustments by modifying the thresholding coefficients of the rules, or to discard some of the rules, since some editing is usually required to correct errors following automated extraction procedures. When this method was applied to a series of US Geological Survey (USGS) Landsat TM and ETM+ archived satellite images covering the periods 1984–1991 and 1999–2009, a total of 1773 fires were identified and mapped from six different scenes covering Attica and the Peloponnese in Greece. The majority of uncaptured burned areas corresponded to fires in the 0–1 ha and 1–5 ha size classes, where the loss in capturing fire scars is generally significant. This was expected, since small fires identified and recorded by forest authorities may not have been captured by the satellite data due to limitations arising either from the spatial resolution of the sensor or from the temporal series, which does not systematically cover the full period.
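The study's actual rule set and thresholding coefficients are not reproduced here. A related and widely used thresholding idea, sketched below, flags pixels whose differenced Normalized Burn Ratio (dNBR), computed from pre- and post-fire NIR and SWIR bands, exceeds a cut-off; the 0.27 threshold and the toy scene are illustrative assumptions, not values from the paper.

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio from NIR and SWIR reflectance arrays."""
    return (nir - swir) / (nir + swir + 1e-6)

def burned_mask(pre_nir, pre_swir, post_nir, post_swir, threshold=0.27):
    """Flag pixels whose NBR dropped sharply between the pre- and post-fire scenes.

    The 0.27 dNBR cut-off is an illustrative, commonly cited value; it is not the
    rule set or the thresholding coefficients used in the cited study.
    """
    dnbr = nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)
    return dnbr > threshold

# Toy 3x3 scene: the centre pixel loses NIR and gains SWIR reflectance after the fire.
pre_nir = np.full((3, 3), 0.45)
pre_swir = np.full((3, 3), 0.20)
post_nir = pre_nir.copy()
post_swir = pre_swir.copy()
post_nir[1, 1], post_swir[1, 1] = 0.15, 0.35
print(burned_mask(pre_nir, pre_swir, post_nir, post_swir))
```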
68.
Albert Angel, Nick Koudas, Nikos Sarkas, Divesh Srivastava, Michael Svendsen, Srikanta Tirthapura. The VLDB Journal: The International Journal on Very Large Data Bases, 2014, 23(2): 175-199
Recent years have witnessed an unprecedented proliferation of social media. People around the globe author, every day, millions of blog posts, social network status updates, etc. This rich stream of information can be used to identify, on an ongoing basis, emerging stories and events that capture popular attention. Stories can be identified via groups of tightly coupled real-world entities, namely the people, locations, products, etc., that are involved in the story. The sheer scale and rapid evolution of the data involved necessitate highly efficient techniques for identifying important stories at every point in time. The main challenge in real-time story identification is the maintenance of dense subgraphs (corresponding to groups of tightly coupled entities) under streaming edge weight updates (resulting from a stream of user-generated content). This is the first work to study the efficient maintenance of dense subgraphs under such streaming edge weight updates. For a wide range of definitions of density, we derive theoretical results regarding the magnitude of change that a single edge weight update can cause. Based on these, we propose a novel algorithm, DynDens, which outperforms adaptations of existing techniques to this setting and yields meaningful, intuitive results. Our approach is validated by a thorough experimental evaluation on large-scale real and synthetic datasets.
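DynDens itself is not specified in enough detail here to reproduce. The sketch below shows the naive baseline it is designed to beat: after every streamed edge-weight update, recompute an approximately densest subgraph from scratch with Charikar's greedy peeling, with density defined as total edge weight divided by the number of nodes. The entity names and weights are invented for illustration.

```python
from collections import defaultdict

def densest_subgraph(weights):
    """Charikar's greedy 2-approximation: repeatedly peel the node of minimum
    weighted degree, remembering the densest intermediate node set.

    weights : dict {(u, v): w} of undirected weighted edges.
    Returns (best_density, best_node_set), density = total edge weight / |nodes|.
    """
    adj = defaultdict(dict)
    for (u, v), w in weights.items():
        adj[u][v] = w
        adj[v][u] = w
    nodes, total = set(adj), sum(weights.values())
    best_density, best_set = 0.0, set(adj)
    while nodes:
        density = total / len(nodes)
        if density > best_density:
            best_density, best_set = density, set(nodes)
        victim = min(nodes, key=lambda n: sum(w for m, w in adj[n].items() if m in nodes))
        total -= sum(w for m, w in adj[victim].items() if m in nodes)
        nodes.remove(victim)
    return best_density, best_set

# Streaming edge-weight updates (e.g. entity co-mention counts); recompute each time.
stream = [(("obama", "nobel"), 3.0), (("obama", "oslo"), 2.0), (("nobel", "oslo"), 2.5)]
weights = {}
for edge, delta in stream:
    weights[edge] = weights.get(edge, 0.0) + delta
    print(densest_subgraph(weights))
```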
69.
Three-dimensional face recognition in the presence of facial expressions: an annotated deformable model approach
Kakadiaris IA, Passalis G, Toderici G, Murtuza MN, Lu Y, Karampatziakis N, Theoharis T. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 29(4): 640-649
In this paper, we present the computational tools and a hardware prototype for 3D face recognition. Full automation is provided through the use of advanced multistage alignment algorithms, resilience to facial expressions is achieved by employing a deformable model framework, and invariance to 3D capture devices is obtained through suitable preprocessing steps. In addition, scalability in both time and space is achieved by converting 3D facial scans into compact metadata. We present our results on the largest known, and now publicly available, Face Recognition Grand Challenge (FRGC) 3D facial database, consisting of several thousand scans. To the best of our knowledge, this is the highest performance reported on the FRGC v2 database for the 3D modality.
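The paper's alignment and deformable-model fitting steps are not shown here. Assuming each scan has already been reduced to a compact fixed-length descriptor (the "metadata" mentioned above), recognition reduces to nearest-neighbour matching in descriptor space, as in the hedged sketch below with made-up descriptors.

```python
import numpy as np

def identify(probe_descriptor, gallery):
    """Nearest-neighbour matching of compact scan descriptors.

    gallery : dict {subject_id: descriptor}. The descriptors stand in for the
    compact metadata derived from aligned, deformable-model-fitted scans; here
    they are simply fixed-length feature vectors.
    """
    distances = {sid: np.linalg.norm(probe_descriptor - d) for sid, d in gallery.items()}
    return min(distances, key=distances.get)

rng = np.random.default_rng(4)
gallery = {f"subject_{i}": rng.normal(size=64) for i in range(5)}
probe = gallery["subject_3"] + 0.05 * rng.normal(size=64)   # noisy re-scan of subject_3
print(identify(probe, gallery))
```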
70.
Manolis Maragoudakis, Aristomenis Thanopoulos, Nikos Fakotakis. IEEE Intelligent Systems, 2007, 22(1): 67-77
If you have ever called a voice portal for any kind of information retrieval, chances are that an automated system guided the entire interaction. It might have correctly identified your goal, but probably only after asking too many questions. MeteoBayes is a meteorological information dialogue system that lets you use natural language to direct the interaction. Based on Bayesian networks, MeteoBayes' inference engine attempts to identify user intentions by consulting its past-dialogue repository. For unfamiliar words, MeteoBayes has an unknown-term disambiguation module that learns word similarities from texts to avoid unnecessary system inquiries, thus speeding up the understanding process.
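MeteoBayes' actual Bayesian networks are learned from its past-dialogue repository and are not given in the abstract. As a toy illustration of the kind of inference involved, the snippet below computes a posterior over user intents from observed keywords with Bayes' rule under a naive conditional-independence assumption; all priors, likelihoods, and intent names are made up.

```python
# Toy Bayesian inference of user intent from observed keywords.
# All priors, likelihoods, and intent names are invented; MeteoBayes learns its
# networks from a past-dialogue repository rather than using hand-set tables.
priors = {"forecast": 0.5, "current_conditions": 0.3, "weather_warning": 0.2}
likelihood = {  # P(keyword | intent), keywords treated as conditionally independent
    "forecast":           {"tomorrow": 0.60, "rain": 0.40, "storm": 0.10},
    "current_conditions": {"tomorrow": 0.05, "rain": 0.30, "storm": 0.10},
    "weather_warning":    {"tomorrow": 0.10, "rain": 0.20, "storm": 0.70},
}

def intent_posterior(keywords):
    scores = {}
    for intent, prior in priors.items():
        p = prior
        for kw in keywords:
            p *= likelihood[intent].get(kw, 0.01)   # small floor for unseen terms
        scores[intent] = p
    z = sum(scores.values())
    return {intent: p / z for intent, p in scores.items()}

print(intent_posterior(["rain", "tomorrow"]))   # "forecast" should dominate
```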