Search results: 6,112 articles found (search time: 31 ms)
141.
Multi-spectral fusion for surveillance systems   (Total citations: 1; self-citations: 0; citations by others: 1)
Surveillance systems such as object tracking and abandoned object detection systems typically rely on a single modality of colour video for their input. These systems work well in controlled conditions but often fail when low lighting, shadowing, smoke, dust or unstable backgrounds are present, or when the objects of interest are a similar colour to the background. Thermal images are not affected by lighting changes or shadowing, and are not overtly affected by smoke, dust or unstable backgrounds. However, thermal images lack colour information, which makes distinguishing between different people or objects of interest within the same scene difficult. By using modalities from both the visible and thermal infrared spectra, we are able to obtain more information from a scene and overcome the problems associated with using either modality individually. We evaluate four approaches for fusing visual and thermal images for use in a person tracking system (two early fusion methods, one mid fusion method and one late fusion method), in order to determine the most appropriate method for fusing multiple modalities. We also evaluate two of these approaches for use in abandoned object detection, and propose an abandoned object detection routine that utilises multiple modalities. To aid in the tracking and fusion of the modalities, we propose a modified condensation filter that can dynamically change the particle count and features used according to the needs of the system. We compare tracking and abandoned object detection performance for the proposed fusion schemes and for the visual and thermal domains on their own. Testing is conducted using the OTCBVS database to evaluate object tracking, and data captured in-house to evaluate abandoned object detection. Our results show that significant improvement can be achieved, and that a middle fusion scheme is the most effective.
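The abstract's key mechanism, a condensation (particle) filter whose particle count adapts to the needs of the system, can be illustrated with a minimal sketch. This is not the authors' implementation: the function names and the effective-sample-size-based sizing rule are my own assumptions about one plausible way to vary the particle count.

```python
import random

def effective_sample_size(weights):
    """ESS = 1 / sum(w_i^2) for normalized weights; low ESS means
    only a few particles carry most of the probability mass."""
    return 1.0 / sum(w * w for w in weights)

def adaptive_resample(particles, weights, min_n=50, max_n=500):
    """Resample the particle set, growing the count when the ESS is
    low (uncertain tracking) and shrinking it when it is high."""
    total = sum(weights)
    norm = [w / total for w in weights]
    ess = effective_sample_size(norm)
    ratio = ess / len(particles)          # 1.0 = fully confident
    new_n = int(min_n + (max_n - min_n) * (1.0 - ratio))
    new_n = max(min_n, min(max_n, new_n))
    return random.choices(particles, weights=norm, k=new_n)
```

With uniform weights the ESS equals the particle count, so the filter shrinks to its minimum size; heavily skewed weights drive the count back up.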
142.
Recent advances in algorithms for the multidimensional multiple-choice knapsack problem have enabled us to solve rather large problem instances. However, these algorithms have been evaluated with very limited benchmark instances. In this study, we propose new methods to systematically generate comprehensive benchmark instances. Some instances with special correlation properties between parameters are found to be several orders of magnitude harder than those currently used for benchmarking the algorithms. Experiments on an existing exact algorithm and two generic solvers show that instances whose weights are uncorrelated with the profits are easier than weakly or strongly correlated cases. Instances with classes containing similar sets of profits for items, and with weights strongly correlated to the profits, are the hardest among all instance groups investigated. These hard instances deserve further study, and understanding their properties may shed light on better algorithms.
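The weight–profit correlation distinction described above can be made concrete with a toy instance generator. This is only an illustrative sketch, not the authors' generation method: the function name, the additive profit rule for the "strong" case, and the parameter ranges are my assumptions.

```python
import random

def generate_instance(n_classes, items_per_class, weight_range=1000,
                      correlation="strong", seed=0):
    """Generate a toy multiple-choice knapsack instance as a list of
    classes, each a list of (weight, profit) items.  'strong' ties
    profit directly to weight (the hard regime in the abstract);
    'none' draws profits independently (the easy regime)."""
    rng = random.Random(seed)
    classes = []
    for _ in range(n_classes):
        items = []
        for _ in range(items_per_class):
            w = rng.randint(1, weight_range)
            if correlation == "strong":
                p = w + weight_range // 10      # profit follows weight
            else:
                p = rng.randint(1, weight_range)  # uncorrelated profit
            items.append((w, p))
        classes.append(items)
    return classes
```

In the strongly correlated regime every item's profit-to-weight ratio is nearly identical, which is exactly what defeats greedy bounds and makes such instances hard for exact solvers.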
143.
Combined analysis of multiple data sources has increasing application interest, in particular for distinguishing shared and source-specific aspects. We extend this rationale to the generative and non-parametric clustering setting by introducing a novel non-parametric hierarchical mixture model. The lower level of the model describes each source with a flexible non-parametric mixture, and the top level combines these to describe commonalities of the sources. The lower-level clusters arise from hierarchical Dirichlet processes, inducing an infinite-dimensional contingency table between the sources. The commonalities between the sources are modeled by an infinite component model of the contingency table, interpretable as a non-negative factorization of infinite matrices, or as a prior for infinite contingency tables. With Gaussian mixture components plugged in for continuous measurements, the model is applied to two views of genes, mRNA expression and abundance of the produced proteins, to expose groups of genes that are co-regulated in either or both of the views. We discover complex relationships between the two views, multimodal in both marginals, that would remain undetected by simpler models. Cluster analysis of co-expression is a standard method of screening for co-regulation, and the two-view analysis extends the approach to distinguishing between pre- and post-translational regulation.
144.
The Internet Protocol (IP) has proven very flexible, being able to accommodate all kinds of link technologies and supporting a broad range of applications. The basic principles of the original Internet architecture include end-to-end addressing, global routeability and a single namespace of IP addresses that unintentionally serves both as locators and host identifiers. The commercial success and widespread use of the Internet have led to new requirements, which include internetworking across business boundaries, mobility and multi-homing in an untrusted environment. Our approach to satisfying these new requirements is to introduce a new internetworking layer, the node identity layer. Such a layer runs on top of the different versions of IP, but could also run directly on top of other kinds of network technologies, such as MPLS and 2G/3G PDP contexts. This approach enables connectivity across different communication technologies and supports mobility, multi-homing, and security from the ground up. This paper describes the Node Identity Architecture in detail and discusses the experiences from implementing and running a prototype.
145.
The advent of large-scale distributed systems poses unique engineering challenges. In open systems such as the internet it is not possible to prescribe the behaviour of all of the components of the system in advance. Rather, we attempt to design infrastructure, such as network protocols, in such a way that the overall system is robust despite the fact that numerous arbitrary, non-certified, third-party components can connect to our system. Economists have long understood this issue, since it is analogous to the design of the rules governing auctions and other marketplaces, in which we attempt to achieve socially desirable outcomes despite the impossibility of prescribing the exact behaviour of the market participants, who may attempt to subvert the market for their own personal gain. This field is known as "mechanism design": the science of designing the rules of a game to achieve a specific outcome, even though each participant may be self-interested. Although it originated in economics, mechanism design has become an important foundation of multi-agent systems (MAS) research. In a traditional mechanism design problem, analytical methods are used to prove that agents' game-theoretically optimal strategies lead to socially desirable outcomes. In many scenarios, traditional mechanism design and auction theory yield clear-cut results; however, there are many situations in which the underlying assumptions of the theory are violated due to the messiness of the real world. In this paper we review alternative approaches to mechanism design which treat it as an engineering problem and bring to bear engineering design principles, viz.: iterative step-wise refinement of solutions, and satisficing instead of optimization in the face of intractable complexity. We categorize these approaches under the banner of evolutionary mechanism design.
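The classical result that mechanism design builds on, and that the abstract contrasts with its evolutionary alternatives, is the sealed-bid second-price (Vickrey) auction, in which truthful bidding is a dominant strategy. A minimal sketch (the function name and bid representation are my own, for illustration only):

```python
def vickrey_auction(bids):
    """Sealed-bid second-price auction: the highest bidder wins but
    pays the second-highest bid, which makes reporting one's true
    valuation a dominant strategy for every self-interested bidder."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else 0
    return winner, price
```

The point of the payment rule is that a bidder's payment does not depend on their own bid, so shading the bid can only lose the item, never lower the price; it is exactly this kind of analytically provable incentive property that becomes unavailable when real-world assumptions break down.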
146.
This paper addresses the cooperative localization and visual mapping problem with multiple heterogeneous robots. The approach is designed to deal with the challenging large semi-structured outdoor environments in which aerial/ground ensembles are to evolve. We propose the use of heterogeneous visual landmarks, points and line segments, to achieve effective cooperation in such environments. A large-scale SLAM algorithm is generalized to handle multiple robots, in which a global graph maintains the relative relationships between a series of local sub-maps built by the different robots. The key issue when dealing with multiple robots is to find the links between them, and to integrate these relations to maintain the overall geometric consistency; the events that introduce these links on the global graph are described in detail. Monocular cameras are considered as the primary exteroceptive sensor. In order to achieve the undelayed initialization required by the bearing-only observations, the well-known inverse-depth parametrization is adopted to estimate 3D points. Similarly, to estimate 3D line segments, we present a novel parametrization based on anchored Plücker coordinates, to which extensible endpoints are added. Extensive simulations demonstrate the proposed developments, and the overall approach is illustrated using real data taken with a helicopter and a ground rover.
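The Plücker representation underlying the line-segment parametrization above can be sketched in a few lines. Note this shows only plain Plücker coordinates, not the paper's anchored variant with extensible endpoints; the helper names are mine.

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def plucker_line(p, q):
    """Plücker coordinates (d, m) of the 3D line through points p and q:
    direction d = q - p and moment m = p x d.  Every point x on the
    line satisfies x x d == m, which gives a point-independent line
    representation convenient for bearing-only SLAM updates."""
    d = sub(q, p)
    m = cross(p, d)
    return d, m
```

The invariant `x × d == m` is what makes the coordinates useful: any anchor point along the line yields the same (d, m) pair up to scale.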
147.
One approach to limiting disclosure risk in public-use microdata is to release multiply-imputed, partially synthetic data sets. These are data on actual respondents, but with confidential data replaced by multiply-imputed synthetic values. A mis-specified imputation model can invalidate inferences based on the partially synthetic data, because the imputation model determines the distribution of synthetic values. We present a practical method to generate synthetic values when the imputer has only limited information about the true data-generating process. We combine a simple imputation model (such as regression) with density-based transformations that preserve the distribution of the confidential data, up to sampling error, on specified subdomains. We demonstrate through simulations and a large-scale application that our approach preserves important statistical properties of the confidential data, including higher moments, with low disclosure risk.
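The idea of a transformation that preserves the confidential data's distribution while keeping the imputation model's structure can be illustrated with a rank-mapping sketch. This is a simplified stand-in for the density-based transformations in the abstract, not the authors' method; the function name and the empirical-quantile mapping are my assumptions.

```python
def rank_transform(synthetic, confidential):
    """Map model-generated values onto the empirical distribution of
    the confidential data: the k-th smallest synthetic value is
    replaced by the k-th smallest confidential value.  The output
    keeps the synthetic ordering but has exactly the confidential
    marginal distribution."""
    order = sorted(range(len(synthetic)), key=lambda i: synthetic[i])
    conf_sorted = sorted(confidential)
    out = [0.0] * len(synthetic)
    for rank, idx in enumerate(order):
        out[idx] = conf_sorted[rank]
    return out
```

Because only ranks of the synthetic values are used, a crude imputation model still yields released values whose marginal matches the confidential data, which is the flavour of robustness the abstract is after.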
148.
Simon S. W. Li, Ergonomics, 2016, 59(11): 1494-1504
Change in sagittal spinal curvature from the neutral upright stance is an important measure of the heaviness and correctness of backpack use. Because current recommendations of backpack load thresholds, with respect to spinal profile, were based on the significant curvature change in an individual spinal region only, this study investigated the most critical backpack load by simultaneously assessing the spinal curvature changes along the whole spine. A motion analysis system was used to measure the curvature changes in the cervical, upper thoracic, lower thoracic and lumbar regions with backpack loads at 0, 5, 10, 15 and 20% of body weight. A multi-objective goal programming model was adopted to determine the global critical load of maximum curvature change of the whole spine, in accordance with the maximum curvature changes of the four spinal regions. Results suggested that the most critical backpack load was 13% of body weight for healthy male college students.

Practitioner Summary: Because current recommendations of backpack load thresholds were based on the significant curvature change in an individual spinal region only, this study investigated the backpack load by simultaneously considering the spinal curvature changes along the whole spine. The recommendation, in terms of the global critical load, was 13% of body weight for healthy male college students.

149.
This article presents an automated Sentinel-1-based processing chain designed for flood detection and monitoring in near-real-time (NRT). Since no user intervention is required at any stage of the flood mapping procedure, the processing chain allows deriving time-critical disaster information less than 45 minutes after a new data set becomes available on the Sentinel Data Hub of the European Space Agency (ESA). Due to the systematic acquisition strategy and high repetition rate of Sentinel-1, the processing chain can be set up as a web-based service that regularly informs users about the current flood conditions in a given area of interest. The accuracy of the thematic processor has been assessed for two test sites of a flood situation at the border between Greece and Turkey, with encouraging overall accuracies between 94.0% and 96.1% and Cohen's kappa coefficients (κ) ranging from 0.879 to 0.910. The accuracy assessment, which was performed separately for the standard polarizations (VV/VH) of the interferometric wide swath (IW) mode of Sentinel-1, further indicates that under calm wind conditions, slightly higher thematic accuracies can be achieved by using VV instead of VH polarization data.
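The Cohen's kappa values reported above measure agreement beyond chance between the flood map and the reference data. For a binary flood/non-flood map the computation is standard and can be sketched directly (the function name and cell ordering are mine):

```python
def cohens_kappa(tp, fp, fn, tn):
    """Cohen's kappa from a binary confusion matrix:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is the agreement expected by chance given the marginals."""
    n = tp + fp + fn + tn
    p_o = (tp + tn) / n
    p_yes = ((tp + fp) / n) * ((tp + fn) / n)   # both say "flood"
    p_no = ((fn + tn) / n) * ((fp + tn) / n)    # both say "no flood"
    p_e = p_yes + p_no
    return (p_o - p_e) / (1 - p_e)
```

Kappa of 1 means perfect agreement; values around 0.88-0.91, as reported for the Greece/Turkey test sites, indicate agreement far above what balanced class marginals would produce by chance.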
150.
During a crisis, citizens reach for their smartphones to report, comment on and explore information surrounding the crisis. These actions often involve social media, and this data forms a large repository of real-time, crisis-related information. Law enforcement agencies (LEAs) and other first responders see this information as having untapped potential: it has the capacity to extend their situational awareness beyond the scope of a usual command and control centre. Despite this potential, the sheer volume, the speed at which it arrives, and the unstructured nature of social media mean that making sense of this data is not a trivial task, and one that is not yet satisfactorily solved, both in crisis management and beyond. Therefore, we propose a multi-stage process to extract meaning from this data that will provide relevant and near-real-time information to command and control to assist in decision support. This process begins with the capture of real-time social media data, followed by the development of specific LEA- and crisis-focused taxonomies for categorisation and entity extraction, the application of formal concept analysis for aggregation and corroboration, and the presentation of this data via map-based and other visualisations. We demonstrate that this novel use of formal concept analysis in combination with context-based entity extraction has the potential to inform law enforcement and/or humanitarian responders about ongoing crisis events using social media data, in the context of the 2015 Nepal earthquake.
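Formal concept analysis, the aggregation step named above, groups objects by the exact attribute sets they share. A brute-force sketch over a tiny social-media-style context (the function names, the context representation, and the example data are all my own illustrations, not the paper's pipeline):

```python
from itertools import combinations

def derive_objects(context, attrs):
    """Objects that possess every attribute in attrs."""
    return {o for o, a in context.items() if attrs <= a}

def derive_attrs(context, objects):
    """Attributes shared by every object in objects."""
    attr_sets = [context[o] for o in objects]
    return set.intersection(*attr_sets) if attr_sets else set()

def concepts(context):
    """Enumerate all formal concepts (extent, intent) by closing
    every attribute subset; exponential, but fine for the small
    taxonomies of a sketch like this."""
    all_attrs = sorted(set().union(*context.values()))
    found = set()
    for r in range(len(all_attrs) + 1):
        for subset in combinations(all_attrs, r):
            extent = derive_objects(context, set(subset))
            intent = derive_attrs(context, extent) if extent else set(all_attrs)
            found.add((frozenset(extent), frozenset(intent)))
    return found
```

Each concept is a maximal group of posts together with exactly the tags they all share, which is what lets corroborating reports be aggregated rather than treated as isolated messages.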

Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号