901.
A photoacoustic (PA) methodology, in the transmission configuration, for the simultaneous measurement of thermal effusivity and molar absorption coefficient (absorptivity) of pigments in liquid solution is introduced. The analytical treatment involves a self-normalization procedure for the PA signal, as a function of the modulation frequency, for a strongly absorbing material in the thermally thin regime, when the light travels across the sample under study. Two fitted parameters are obtained from the analysis of the self-normalized PA amplitude and phase: one is proportional to the sample's optical absorption coefficient and, when obtained for a series of samples at different concentrations, yields the pigment's absorptivity in liquid solution; the other yields the sample's thermal effusivity. The absorptivity of methylene blue in distilled water was measured with this methodology at 658 nm, showing good agreement with the value reported in the literature.
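As an illustration of the concentration-series step described above, the following minimal sketch fits the Beer-Lambert relation to absorption coefficients obtained at several concentrations and extracts the molar absorptivity. All numbers and variable names are hypothetical; this is not the authors' analysis code, and the decadic Beer-Lambert convention is assumed.

```python
import numpy as np

# Hypothetical fitted optical absorption coefficients obtained from the
# self-normalized PA amplitude/phase analysis at several molar concentrations
# (all numbers below are made up for illustration).
concentration = np.array([5e-6, 10e-6, 20e-6, 40e-6])   # mol / L
alpha = np.array([0.85, 1.72, 3.41, 6.90])              # optical absorption coefficient, 1/cm

# Beer-Lambert law (decadic convention): alpha = ln(10) * epsilon * c,
# so a straight-line fit through the origin gives ln(10) * epsilon as the slope.
slope = np.sum(concentration * alpha) / np.sum(concentration ** 2)
epsilon = slope / np.log(10)                             # molar absorptivity, L / (mol cm)

print(f"estimated molar absorptivity: {epsilon:.3e} L mol^-1 cm^-1")
```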
902.
The introduction of learning into the search mechanisms of optimization algorithms has been nominated as one of the viable approaches for dealing with complex optimization problems, in particular multi-objective ones. One way of carrying out this hybridization is through multi-objective optimization estimation of distribution algorithms (MOEDAs). However, it has been pointed out that current MOEDAs have an intrinsic shortcoming in their model-building algorithms that hampers their performance. In this work, we put forward the argument that error-based learning, the class of learning most commonly used in MOEDAs, is responsible for current MOEDA underachievement. We present adaptive resonance theory (ART) as a suitable alternative learning paradigm and introduce a novel algorithm called multi-objective ART-based EDA (MARTEDA), which uses a Gaussian ART neural network for model-building and a hypervolume-based selector as described for the HypE algorithm. To assess the improvement obtained by combining these two cutting-edge approaches to optimization, an extensive set of experiments is carried out. These experiments also test the scalability of MARTEDA as the number of objective functions increases.
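To make the selection side concrete, here is a simplified two-objective hypervolume-contribution selector in the spirit of hypervolume-based selection. It is not the HypE estimator itself (which approximates contributions by Monte Carlo sampling for many objectives) and is not MARTEDA code; function names and the iterative-removal scheme are illustrative assumptions.

```python
import numpy as np

def hypervolume_2d(points, ref):
    """Exact hypervolume of a mutually non-dominated set of 2-objective points
    (minimization), all assumed to dominate the reference point `ref`."""
    pts = sorted(points, key=lambda p: p[0])          # f1 ascending -> f2 descending
    hv = 0.0
    for i, (f1, f2) in enumerate(pts):
        next_f1 = pts[i + 1][0] if i + 1 < len(pts) else ref[0]
        hv += (next_f1 - f1) * (ref[1] - f2)
    return hv

def hypervolume_select(front, ref, k):
    """Keep k points by repeatedly discarding the smallest hypervolume contributor."""
    front = list(front)
    while len(front) > k:
        total = hypervolume_2d(front, ref)
        contributions = [total - hypervolume_2d(front[:i] + front[i + 1:], ref)
                         for i in range(len(front))]
        front.pop(int(np.argmin(contributions)))
    return front

# Example: trim a 4-point non-dominated front down to 3 points.
front = [(1.0, 5.0), (2.0, 3.0), (4.0, 2.0), (5.0, 1.0)]
print(hypervolume_select(front, ref=(10.0, 10.0), k=3))
```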
903.
In this work, we analyze the variable-space diversity of Pareto optimal solutions (POS) and study the effectiveness of crossover and mutation operators in evolutionary many-objective optimization. First, we examine the diversity of variables in the true POS on many-objective 0/1 knapsack problems with up to 20 items (bits), showing that variables in POS become noticeably diverse as the number of objectives increases. We also verify the effectiveness of conventional two-point and uniform crossover, Local Recombination, which selects mating parents based on proximity in objective space, and two-point and uniform crossover operators that Control the maximum number of Crossed Genes (CCG). We use NSGA-II, SPEA2, IBEAε+ and MSOPS, which adopt different selection methods, on many-objective 0/1 knapsack problems with n = {100, 250, 500, 750, 1000} items (bits) and m = {2, 4, 6, 8, 10} objectives to verify the search performance of each crossover operator. Simulation results reveal that Local Recombination and CCG operators significantly improve search performance, especially for NSGA-II and MSOPS, which have high diversity of genes in the population. The results also show that CCG operators achieve higher search performance than Local Recombination for m ≥ 4 objectives and that their effectiveness grows as the number of objectives m increases. In addition, the contribution of the CCG and mutation operators to the solution search is analyzed and discussed.
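The sketch below shows one plausible reading of a crossover that controls the maximum number of crossed genes, as a restricted uniform crossover on bit strings. The function and parameter names are illustrative and the exact CCG operator definition in the paper may differ.

```python
import random

def ccg_uniform_crossover(parent_a, parent_b, max_crossed_genes):
    """Uniform crossover restricted to at most `max_crossed_genes` gene positions.

    parent_a, parent_b: equal-length lists of 0/1 genes (knapsack bit strings).
    Returns two offspring.
    """
    assert len(parent_a) == len(parent_b)
    child_a, child_b = list(parent_a), list(parent_b)
    # Only these positions are allowed to be exchanged between the parents.
    positions = random.sample(range(len(parent_a)), k=max_crossed_genes)
    for i in positions:
        if random.random() < 0.5:        # uniform crossover on the permitted genes
            child_a[i], child_b[i] = child_b[i], child_a[i]
    return child_a, child_b

# Example: 100-bit parents, at most 10 genes may be crossed.
a = [random.randint(0, 1) for _ in range(100)]
b = [random.randint(0, 1) for _ in range(100)]
offspring = ccg_uniform_crossover(a, b, max_crossed_genes=10)
```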
904.
This paper describes a novel model for fetal heart rate (FHR) monitoring from single-lead maternal abdominal ECG (AECG) measurements. The method is divided into two stages: the first is a one-step wavelet-based preprocessing for simultaneous baseline and high-frequency noise suppression, while the second efficiently detects fetal QRS complexes, allowing FHR monitoring. The presented structure has been kept as simple as possible in order to reduce computational cost and thus enable custom hardware implementations. Moreover, the proposed scheme and its fixed-point model have been tested on real abdominal ECG signals, which validates the presented approach and shows its high accuracy.
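A minimal sketch of the kind of one-pass wavelet preprocessing described (simultaneous baseline and high-frequency noise suppression) is given below, assuming the PyWavelets package. The wavelet family, decomposition depth and which bands are discarded are assumptions for illustration, not the paper's actual choices, and no fixed-point modeling is included.

```python
import numpy as np
import pywt

def wavelet_preprocess_aecg(signal, wavelet="db4", level=8):
    """One-pass wavelet preprocessing of an abdominal ECG record.

    Discards the coarsest approximation band (baseline wander) and the finest
    detail band (high-frequency noise), then reconstructs the signal.
    Assumes the record is long enough for the requested decomposition level.
    """
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    coeffs[0] = np.zeros_like(coeffs[0])     # approximation -> baseline suppression
    coeffs[-1] = np.zeros_like(coeffs[-1])   # finest details -> high-frequency noise
    cleaned = pywt.waverec(coeffs, wavelet)
    return cleaned[: len(signal)]            # waverec may pad by one sample
```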
905.
Vehicular Ad Hoc Networks (VANETs) require mechanisms to authenticate messages, identify valid vehicles, and remove misbehaving vehicles. A public key infrastructure (PKI) can provide these functionalities using digital certificates. However, if a vehicle is no longer trusted, its certificates have to be revoked, and this status information has to be made available to other vehicles as soon as possible. In this paper, we propose a collaborative certificate status checking mechanism called COACH to efficiently distribute certificate revocation information in VANETs. In COACH, we embed a hash tree in each standard Certificate Revocation List (CRL); this dual structure is called an extended-CRL. A node possessing an extended-CRL can respond to certificate status requests without having to send the complete CRL: instead, it can send a short response (less than 1 kB) that fits in a single UDP message. The substructures included in the short responses are authenticated, so any node possessing an extended-CRL, including Road Side Units or intermediate vehicles, can produce verifiable short responses. We also propose an extension of the COACH mechanism, called EvCOACH, that is more efficient than COACH in scenarios with relatively low revocation rates per CRL validity period. To build EvCOACH, we embed an additional hash chain in the extended-CRL. Finally, through a detailed performance evaluation, COACH and EvCOACH are shown to be reliable, efficient, and scalable.
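The toy sketch below illustrates the general hash-tree idea behind such an extended-CRL: a Merkle tree is built over the revoked certificate serial numbers, and a short authenticated path proves that a given serial is in the revoked set. It is not the COACH message format or protocol; serial numbers and structure are invented for illustration.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_merkle_tree(serials):
    """Return the tree as a list of levels, leaves first (toy Merkle tree)."""
    level = [h(s.encode()) for s in sorted(serials)]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                       # duplicate last node on odd levels
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def membership_proof(levels, index):
    """Short response: sibling hashes on the path from leaf `index` to the root."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append((index % 2, level[index ^ 1]))   # (am I the right child?, sibling)
        index //= 2
    return proof

def verify(serial, proof, root):
    node = h(serial.encode())
    for is_right, sibling in proof:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

# Example: prove that serial "5E66" is revoked using only a few hashes.
revoked = ["3F2A", "91BC", "07D4", "5E66", "AA01"]
levels = build_merkle_tree(revoked)
root = levels[-1][0]
idx = sorted(revoked).index("5E66")
assert verify("5E66", membership_proof(levels, idx), root)
```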
906.
Forwarding data in scenarios where devices have only sporadic connectivity is a challenge. An example is a disaster area, where forwarding information generated at the incident location, such as victims' medical data, to a coordination point is critical for quick, accurate and coordinated intervention. New applications based on mobile devices and wireless opportunistic networks are being developed as a solution to destroyed or overloaded communication networks. However, the performance of opportunistic routing methods applied to emergency scenarios is largely unknown today. In this paper, we compare and contrast the efficiency of the most significant opportunistic routing protocols through simulations in realistic disaster scenarios, in order to show how the different characteristics of an emergency scenario affect the behaviour of each of them.
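For readers unfamiliar with opportunistic (store-carry-forward) routing, the toy sketch below shows the simplest member of this protocol family, epidemic-style flooding on contact. Class and message names are invented for illustration; it does not reproduce any of the protocols or simulation scenarios evaluated in the paper.

```python
class Node:
    """Store-carry-forward node for a toy epidemic-routing illustration."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.buffer = set()          # message ids currently carried

    def on_contact(self, other):
        """When two nodes meet, exchange summary vectors and copy missing messages."""
        missing_here = other.buffer - self.buffer
        missing_there = self.buffer - other.buffer
        self.buffer |= missing_here
        other.buffer |= missing_there

# Toy usage: a relay carries a victim's report from the incident area to the
# coordination point through two opportunistic contacts.
incident, relay, coordination = Node("incident"), Node("relay"), Node("coord")
incident.buffer.add("victim-report-17")
incident.on_contact(relay)          # first contact: the relay picks up a copy
relay.on_contact(coordination)      # later contact: the copy reaches the destination
assert "victim-report-17" in coordination.buffer
```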
907.
Both image compression based on color quantization and image segmentation are typical tasks in the field of image processing. Several techniques based on splitting algorithms or cluster analysis have been proposed in the literature. Self-organizing maps have also been applied to these problems, although with some limitations due to their fixed network architecture and their lack of representation of hierarchical relations among data. In this paper, both problems are addressed using growing hierarchical self-organizing models. An advantage of these models is their hierarchical architecture, which is more flexible in the adaptation process to input data and reflects the inherent hierarchical relations among the data. Comparative results are provided for image compression and image segmentation. Experimental results show that the proposed approach is promising for image processing and demonstrate the power of the hierarchical information provided by the proposed model.
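To illustrate SOM-based color quantization, the sketch below trains a small flat, fixed-size self-organizing map on RGB pixels and maps each pixel to its nearest prototype. This is only the baseline idea: the paper's growing hierarchical model, whose architecture adapts to the data, is not implemented here, and all parameter values are illustrative.

```python
import numpy as np

def train_color_som(pixels, n_units=16, epochs=10, lr=0.5, seed=0):
    """Train a tiny 1-D self-organizing map on RGB pixels scaled to [0, 1]."""
    rng = np.random.default_rng(seed)
    weights = rng.random((n_units, 3))
    for epoch in range(epochs):
        sigma = max((n_units / 2) * (1 - epoch / epochs), 1.0)    # shrinking neighborhood
        alpha = lr * (1 - epoch / epochs) + 0.01                  # decaying learning rate
        for p in rng.permutation(pixels):
            bmu = np.argmin(np.linalg.norm(weights - p, axis=1))  # best-matching unit
            dist = np.abs(np.arange(n_units) - bmu)
            kernel = np.exp(-(dist ** 2) / (2 * sigma ** 2))      # neighborhood kernel
            weights += alpha * kernel[:, None] * (p - weights)
    return weights

def quantize(pixels, weights):
    """Map each pixel to its nearest SOM prototype (the quantized palette color)."""
    d = np.linalg.norm(pixels[:, None, :] - weights[None, :, :], axis=2)
    return weights[np.argmin(d, axis=1)]
```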
908.
Edge matching puzzles have been with us for a long time and have traditionally been considered both a children's game and an interesting mathematical divertimento. Their main characteristics have already been studied, and their worst-case complexity has been properly classified: the problem is NP-complete. It is only recently, especially after being used as the problem behind a money-prized contest with a US$2 million prize for the first solver, that edge matching puzzles have attracted mainstream attention from wider audiences, including, of course, computer science people working on solving hard problems. We consider such competitions an interesting opportunity to showcase SAT/CSP solving techniques on a real-world problem to a broad audience, apart from the intrinsic, i.e. monetary, interest of such a contest. This article studies the NP-complete edge matching puzzle problem using SAT and CSP approaches. We first provide a theoretical framework, including a generalized definition of the problem. We then design and present algorithms for easy and fast generation of problem instances, with easily tunable hardness. Afterwards we provide SAT and CSP models for the problem and study its complexity, both typical-case and worst-case. We also provide some specially crafted heuristics that yield a boost in solving time, and we study the effect of these heuristics.
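A minimal sketch of an instance generator in the spirit described above: build a solved n x n grid whose adjacent edges share randomly drawn colors (the number of colors loosely tunes hardness), then shuffle and rotate the pieces. It is an assumption of this sketch, not the paper's generator; piece encoding and parameter names are illustrative.

```python
import random

def generate_edge_matching_instance(n, n_colors, border_color=0, seed=None):
    """Generate a solvable n x n edge matching instance (Eternity-II style).

    Each piece is a tuple (top, right, bottom, left) of edge colors; the outer
    frame uses a dedicated border color. Returns the pieces shuffled and rotated.
    """
    rng = random.Random(seed)
    # Colors of horizontal edges (between vertically adjacent cells) and vertical edges.
    horiz = [[rng.randint(1, n_colors) for _ in range(n)] for _ in range(n + 1)]
    vert = [[rng.randint(1, n_colors) for _ in range(n + 1)] for _ in range(n)]
    # Force the outer frame to the border color.
    for c in range(n):
        horiz[0][c] = horiz[n][c] = border_color
    for r in range(n):
        vert[r][0] = vert[r][n] = border_color
    pieces = []
    for r in range(n):
        for c in range(n):
            piece = (horiz[r][c], vert[r][c + 1], horiz[r + 1][c], vert[r][c])
            rot = rng.randrange(4)                    # random rotation (cyclic shift)
            pieces.append(piece[rot:] + piece[:rot])
    rng.shuffle(pieces)
    return pieces

# Example: a 4 x 4 instance with 3 interior colors.
print(generate_edge_matching_instance(4, n_colors=3, seed=7))
```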
909.
We introduce WSimply, a new framework for modelling and solving Weighted Constraint Satisfaction Problems (WCSP) using Satisfiability Modulo Theories (SMT) technology. In contrast to other well-known approaches, which are designed for extensional representation of goods or no-goods and offer few declarative facilities, our approach follows an intensional and declarative syntax style. In addition, our language has built-in support for some meta-constraints, such as priority and homogeneity, which allow the user to easily specify rich requirements on the desired solutions, such as preferences and fairness. We propose two alternative strategies for solving these WCSP instances using SMT. The first is reformulation into Weighted SMT (WSMT) and the application of satisfiability-test-based algorithms from recent contributions in the Weighted Maximum Satisfiability field. The second is reformulation into an operations-research-like style, which involves an optimisation variable or objective function, and the application of optimisation SMT solvers. We present experimental results on two well-known problems, the Nurse Rostering Problem (NRP) and a variant of the Balanced Academic Curriculum Problem (BACP), and provide some insights into the impact of adding meta-constraints on the quality of the solutions and the solving time.
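The tiny example below illustrates the flavour of the second strategy, solving a weighted problem with an optimisation SMT solver, using the z3 Python bindings (assuming the z3-solver package is installed). It is not WSimply, and the toy rostering constraints and weights are invented for illustration.

```python
# pip install z3-solver   (assumed; this is only the general idea, not WSimply)
from z3 import Int, Optimize, Sum, sat

# Two nurses, three days: s_n_d = 1 if nurse n works on day d, else 0.
shifts = {(n, d): Int(f"s_{n}_{d}") for n in range(2) for d in range(3)}

opt = Optimize()
for v in shifts.values():
    opt.add(v >= 0, v <= 1)

# Hard constraint: every day at least one nurse is on duty.
for d in range(3):
    opt.add(Sum([shifts[(n, d)] for n in range(2)]) >= 1)

# Soft (weighted) constraints: nurse 0 prefers day 1 off (weight 3);
# nurse 1 prefers to work at most one day in total (weight 5).
opt.add_soft(shifts[(0, 1)] == 0, weight=3)
opt.add_soft(Sum([shifts[(1, d)] for d in range(3)]) <= 1, weight=5)

if opt.check() == sat:
    m = opt.model()
    print({k: m[v] for k, v in shifts.items()})
```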
910.
Inductive Logic Programming (ILP) deals with the problem of finding a hypothesis that covers the positive examples and excludes the negative examples, where both hypotheses and examples are expressed in first-order logic. In this paper we employ constraint satisfaction techniques to model and solve a problem known as template ILP consistency, which assumes that the structure of a hypothesis is known and the task is to find a unification of the contained variables. In particular, we present a constraint model with index variables, accompanied by a Boolean model to strengthen inference and hence improve efficiency. The efficiency of the models is demonstrated experimentally.
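The toy sketch below conveys what the template-consistency task asks for: the clause structure is fixed and we search for an assignment of the template's body variables such that the grounded clause covers every positive example and no negative example. It is a brute-force illustration only; the paper's index-variable constraint model and Boolean model are not reproduced, and the facts and examples are hypothetical.

```python
from itertools import product

# Hypothetical background knowledge and examples for the target daughter(A, B).
parent = {("ann", "mary"), ("ann", "tom"), ("tom", "eve")}
female = {"mary", "ann", "eve"}
positives = [("mary", "ann"), ("eve", "tom")]
negatives = [("tom", "ann"), ("ann", "mary")]

# Fixed template: daughter(A, B) :- parent(V1, V2), female(V3).
# Template consistency = find a unification of V1, V2, V3 with the head variables {A, B}.
def covers(assignment, example):
    a, b = example
    val = {"A": a, "B": b}
    v1, v2, v3 = (val[assignment[v]] for v in ("V1", "V2", "V3"))
    return (v1, v2) in parent and v3 in female

solutions = []
for v1, v2, v3 in product(("A", "B"), repeat=3):
    assignment = {"V1": v1, "V2": v2, "V3": v3}
    if all(covers(assignment, e) for e in positives) and \
       not any(covers(assignment, e) for e in negatives):
        solutions.append(assignment)

# Expected: V1 = B, V2 = A, V3 = A, i.e. daughter(A, B) :- parent(B, A), female(A).
print(solutions)
```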