By access type (number of articles):
Paid full text: 1,494
Free: 80
Free (domestic): 2
By subject area (number of articles):
Electrical engineering: 23
Chemical industry: 489
Metalworking: 37
Machinery and instrumentation: 43
Building science: 59
Mining engineering: 1
Energy and power engineering: 31
Light industry: 96
Hydraulic engineering: 2
Petroleum and natural gas: 6
Weapons industry: 1
Radio electronics: 77
General industrial technology: 354
Metallurgical industry: 75
Atomic energy technology: 11
Automation technology: 271
By publication year (number of articles):
2023: 11
2022: 71
2021: 87
2020: 35
2019: 45
2018: 65
2017: 52
2016: 43
2015: 64
2014: 68
2013: 111
2012: 87
2011: 96
2010: 75
2009: 75
2008: 59
2007: 72
2006: 50
2005: 28
2004: 29
2003: 22
2002: 36
2001: 10
2000: 17
1999: 6
1998: 14
1997: 14
1996: 15
1995: 10
1994: 11
1993: 18
1992: 11
1991: 6
1989: 6
1988: 8
1987: 9
1986: 8
1985: 5
1984: 7
1983: 7
1982: 4
1981: 16
1980: 11
1979: 11
1978: 6
1977: 7
1976: 10
1975: 10
1974: 4
1973: 9
A total of 1,576 results were found.
31.
The paper presents a novel machine learning algorithm for training a compound classifier system that consists of a set of area classifiers. Each area classifier recognizes objects drawn from its respective competence area. Splitting the feature space into areas and selecting the area classifiers are the two key processes of the algorithm; both take place simultaneously in the course of an optimization process aimed at maximizing system performance. An evolutionary algorithm is used to find the optimal solution. A number of experiments were carried out to evaluate system performance. The results show that the proposed method outperforms each elementary classifier as well as simple voting.
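The structure described above (a feature-space split plus one classifier per competence area, with test samples routed to the classifier of their area) can be illustrated with a minimal sketch. Note the assumptions: the split here is a plain k-means clustering rather than the paper's joint evolutionary optimization of areas and classifier selection, and the data, classifier type, and number of areas are arbitrary choices for illustration only.

```python
# Illustrative sketch only: the paper optimizes area boundaries and classifier
# selection jointly with an evolutionary algorithm; here the split is a plain
# k-means clustering, used just to show the "area classifier" structure.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

n_areas = 4
areas = KMeans(n_clusters=n_areas, n_init=10, random_state=0).fit(X_tr)

# One "area classifier" per competence area, trained only on its own region.
area_clfs = []
for a in range(n_areas):
    mask = areas.labels_ == a
    clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr[mask], y_tr[mask])
    area_clfs.append(clf)

# At test time each sample is routed to the classifier of its competence area.
test_area = areas.predict(X_te)
pred = np.array([area_clfs[a].predict(x.reshape(1, -1))[0]
                 for a, x in zip(test_area, X_te)])
print("compound accuracy:", (pred == y_te).mean())
```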
32.
We present a method to speed up the dynamic programming algorithms used for solving the HMM decoding and training problems for discrete time-independent HMMs. We discuss the application of our method to Viterbi's decoding and training algorithms (IEEE Trans. Inform. Theory IT-13:260–269, 1967), as well as to the forward-backward and Baum-Welch (Inequalities 3:1–8, 1972) algorithms. Our approach is based on identifying repeated substrings in the observed input sequence. Initially, we show how to exploit repetitions of all sufficiently small substrings (this is similar to the Four Russians method). Then, we describe four algorithms based alternatively on run-length encoding (RLE), Lempel-Ziv (LZ78) parsing, grammar-based compression (SLP), and byte pair encoding (BPE). Compared to Viterbi's algorithm, we achieve speedups of Θ(log n) using the Four Russians method, Ω(r/log r) using RLE, Ω(log n/k) using LZ78, Ω(r/k) using SLP, and Ω(r) using BPE, where k is the number of hidden states, n is the length of the observed sequence and r is its compression ratio (under each compression scheme). Our experimental results demonstrate that our new algorithms are indeed faster in practice. We also discuss a parallel implementation of our algorithms. A preliminary version of this paper appeared in Proc. 18th Annual Symposium on Combinatorial Pattern Matching (CPM), pp. 4–15, 2007. Y. Lifshits' research was supported by the Center for the Mathematics of Information and the Lee Center for Advanced Networking. S. Mozes' work was conducted while visiting MIT.
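For reference, the baseline that these speedups are measured against is the standard O(nk²) Viterbi recursion. The sketch below implements only that baseline (in log space); the compression-based accelerations from the paper are not reproduced, and the array layout and function name are our own.

```python
# Baseline Viterbi decoder (O(n k^2)); the paper's contribution is to speed this
# recursion up by exploiting repeated substrings, which this sketch does not do.
import numpy as np

def viterbi(obs, log_pi, log_A, log_B):
    """obs: observation indices; log_pi: (k,); log_A: (k, k); log_B: (k, m)."""
    k, n = log_A.shape[0], len(obs)
    dp = np.empty((n, k))                 # dp[t, j] = best log-prob ending in state j
    back = np.empty((n, k), dtype=int)    # back[t, j] = best predecessor of j at time t
    dp[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, n):
        scores = dp[t - 1][:, None] + log_A          # scores[i, j]: from i to j
        back[t] = scores.argmax(axis=0)
        dp[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = [int(dp[-1].argmax())]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1], float(dp[-1].max())
```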
33.
Online bin stretching is a semi-online variant of bin packing in which the algorithm must use the same number of bins as an optimal packing but is allowed to slightly overpack them. The goal is to minimize the amount of overpacking, i.e., the maximum size packed into any bin. We give an algorithm for online bin stretching with a stretching factor of \(11/8 = 1.375\) for three bins. Additionally, we present a lower bound of \(45/33 = 1.\overline{36}\) for online bin stretching on three bins and a lower bound of 19/14 for four and five bins; both lower bounds were discovered using a computer search.
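To make the model concrete, here is a small toy sketch of the semi-online setting: items that an optimal offline packing fits into three unit bins arrive one by one, and we measure how much a naive online rule overpacks. This is emphatically not the 11/8 algorithm from the paper; the greedy rule and example instances are ours, chosen only to show what "stretching factor" means.

```python
# Toy illustration of the bin-stretching model (NOT the 11/8 algorithm from the
# paper): items that an optimal offline packing fits into m bins of size 1
# arrive online, and we measure how much a naive greedy rule overpacks.
def greedy_stretch(items, m=3):
    loads = [0.0] * m
    for x in items:
        i = min(range(m), key=lambda j: loads[j])   # least-loaded bin
        loads[i] += x
    return max(loads)   # stretching factor achieved on this instance (OPT = 1)

print(greedy_stretch([0.5] * 6, m=3))         # -> 1.0 (no overpacking needed)
print(greedy_stretch([0.3] * 6 + [0.7], m=3)) # -> about 1.3, although an optimal
                                              #    packing fits into three unit bins
```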
34.
The present paper deals with solving the \((n^2 - 1)\)-puzzle and cooperative path-finding (CPF) problems sub-optimally with rule-based algorithms. To solve the puzzle, we need to rearrange \(n^2 - 1\) pebbles in the \(n \times n\) square grid using one vacant position to achieve the goal configuration. An improvement to the existing polynomial-time algorithm is proposed and experimentally analyzed. The improved algorithm attempts to move pebbles more efficiently than the original algorithm by grouping them into so-called snakes and moving them together as part of a snake formation. An experimental evaluation has shown that the snake-enhanced algorithm produces solutions that are 8–9 % shorter than those generated by the original algorithm. Snake-like movement has also been integrated into rule-based algorithms for solving CPF problems sub-optimally, which is a closely related task. The task in CPF is to move a group of abstract robots on an undirected graph to specific vertices. The robots can move to unoccupied neighboring vertices, and at most one robot may occupy each vertex. The \((n^2 - 1)\)-puzzle is a special case of CPF in which the underlying graph is a 4-connected grid and only one vertex is vacant. Two major rule-based algorithms for solving CPF problems were included in our study: BIBOX and PUSH-and-SWAP (PUSH-and-ROTATE). The use of snakes in the BIBOX algorithm led to consistent efficiency gains of around 30 % for the \((n^2 - 1)\)-puzzle and up to 50 % for CPF problems on biconnected graphs with various ear decompositions and multiple vacant vertices. For the PUSH-and-SWAP algorithm, the efficiency gain achieved from the use of snakes was around 5–8 %; however, this gain was unstable and hard to predict.
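A short sketch of the underlying puzzle may help: the state is a permutation of pebbles with one vacant cell, and a move swaps the vacant cell with an adjacent pebble. The sketch below models the \(n = 3\) case (the 8-puzzle) and solves a tiny instance by plain breadth-first search; the rule-based BIBOX / snake-formation algorithms of the paper are not reproduced, and the function names are ours.

```python
# Sketch of the (n^2 - 1)-puzzle itself (here n = 3, the 8-puzzle) solved by
# plain BFS; the paper's rule-based BIBOX / snake-formation algorithms are not shown.
from collections import deque

def neighbours(state, n=3):
    s = list(state)
    z = s.index(0)                         # position of the vacant cell
    r, c = divmod(z, n)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < n and 0 <= nc < n:
            t = s[:]
            t[z], t[nr * n + nc] = t[nr * n + nc], t[z]
            yield tuple(t)

def bfs_solve(start, goal=(1, 2, 3, 4, 5, 6, 7, 8, 0)):
    seen, queue = {start: None}, deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = seen[cur]
            return path[::-1]              # list of states from start to goal
        for nxt in neighbours(cur):
            if nxt not in seen:
                seen[nxt] = cur
                queue.append(nxt)
    return None

print(len(bfs_solve((1, 2, 3, 4, 5, 6, 0, 7, 8))) - 1)   # number of moves -> 2
```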
35.
We propose a novel algorithm, called REGGAE, for generating the momenta of a given sample of particle masses, evenly distributed in Lorentz-invariant phase space and obeying energy and momentum conservation. In comparison to other existing algorithms, REGGAE is designed for use in multiparticle production in hadronic and nuclear collisions, where many hadrons are produced and a large part of the available energy is stored in the form of their masses. The algorithm uses a loop simulating multiple collisions, which leads to the production of configurations with reasonably large weights. A minimal kinematics sketch of the two-body decay step underlying GENBOD-style generation is given after the program summary below.

Program summary

Program title: REGGAE (REscattering-after-Genbod GenerAtor of Events)
Catalogue identifier: AEJR_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJR_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 1523
No. of bytes in distributed program, including test data, etc.: 9608
Distribution format: tar.gz
Programming language: C++
Computer: PC Pentium 4, though no particular tuning for this machine was performed.
Operating system: Originally designed on a Linux PC with g++, but it has also been compiled and run successfully on OS X with g++ and on MS Windows with Microsoft Visual C++ 2008 Express Edition.
RAM: Depends on the number of particles being generated; for 10 particles, as in the attached example, about 120 kB is required.
Classification: 11.2
Nature of problem: Generate the momenta of a sample of particles with given masses which obey energy and momentum conservation. Generated samples should be evenly distributed in the available Lorentz-invariant phase space.
Solution method: The algorithm works in two steps. First, all momenta are generated with the GENBOD algorithm, in which particle production is modeled as a sequence of two-body decays of heavy resonances. After all momenta are generated this way, they are reshuffled: each particle undergoes a collision with some other partner such that in the pair's center-of-mass system the new momentum directions are distributed isotropically. After each particle collides only a few times, the momenta are distributed evenly across the whole available phase space. Starting with GENBOD is not essential for the procedure, but it improves the performance.
Running time: Depends on the number of particles and the number of events to be generated. On a Linux PC with a 2 GHz processor, generating 1000 events with 10 particles each takes about 3 s.
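As promised above, here is a minimal kinematics sketch of the elementary building block of GENBOD-style generation: an isotropic two-body decay in the parent rest frame. It is not REGGAE itself (in particular, the rescattering/reshuffling loop is omitted), and the example masses are ordinary particle-data values chosen for illustration.

```python
# Minimal kinematics sketch: isotropic two-body decay in the parent rest frame,
# the elementary step that GENBOD-style generators (and REGGAE's initial stage)
# are built from.  The rescattering loop that REGGAE adds is not reproduced here.
import math, random

def two_body_decay(M, m1, m2, rng=random):
    """Return (E1, p1_vec), (E2, p2_vec) for M -> m1 + m2 in the rest frame of M."""
    assert M >= m1 + m2, "decay is kinematically forbidden"
    # magnitude of the daughters' momentum (standard two-body phase-space formula)
    p = math.sqrt((M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)) / (2.0 * M)
    # isotropic direction
    cos_t = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    sin_t = math.sqrt(1.0 - cos_t**2)
    direction = (sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t)
    p1 = tuple(p * d for d in direction)
    p2 = tuple(-x for x in p1)              # momentum conservation
    E1 = math.sqrt(p**2 + m1**2)
    E2 = math.sqrt(p**2 + m2**2)
    return (E1, p1), (E2, p2)

# Example: rho(770) -> pi+ pi-  (masses in GeV)
print(two_body_decay(0.775, 0.1396, 0.1396))
```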
36.
This paper shows how to improve holistic face analysis by assigning importance factors to different facial regions (termed face relevance maps). We propose a novel supervised learning algorithm for generating face relevance maps to improve the discriminating capability of existing methods. We have successfully applied the developed technique to face identification based on the Eigenfaces and Fisherfaces methods, and to gender classification based on principal geodesic analysis (PGA). We demonstrate how to iteratively learn the face relevance map from labelled data. Experimental results confirm the effectiveness of the developed approach.
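The sketch below shows where a relevance map would enter an Eigenfaces-style pipeline: each pixel is weighted by its importance factor before PCA projection and nearest-neighbour matching. The paper's contribution is the supervised algorithm that learns this map; here the map is simply supplied (a uniform placeholder in the demo), and all function names and data are our own.

```python
# Sketch of where a face relevance map enters an Eigenfaces-style pipeline:
# pixels are weighted by relevance before PCA projection and matching.
# The paper learns this map from labelled data; here it is simply supplied.
import numpy as np

def project_with_relevance(faces, relevance, n_components=20):
    """faces: (n_samples, n_pixels); relevance: (n_pixels,) importance factors."""
    weighted = faces * relevance                    # emphasise relevant regions
    mean = weighted.mean(axis=0)
    centered = weighted - mean
    # PCA via SVD; rows of vt are the "eigenfaces" of the weighted images
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]
    return (centered @ basis.T), mean, basis

def identify(probe, gallery_codes, mean, basis, relevance):
    code = ((probe * relevance) - mean) @ basis.T
    return int(np.argmin(np.linalg.norm(gallery_codes - code, axis=1)))

# Demo on random "faces"; a learned relevance map would replace the uniform one.
rng = np.random.default_rng(0)
faces = rng.random((50, 32 * 32))
relevance = np.ones(32 * 32)
codes, mean, basis = project_with_relevance(faces, relevance)
print(identify(faces[7], codes, mean, basis, relevance))   # -> 7
```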
37.
The visual appearance of natural materials depends significantly on the acquisition circumstances, particularly the illumination conditions and viewpoint position, whose variations cause difficulties in the analysis of real scenes. We address this issue with novel texture features, based on fast estimates of Markovian statistics, that are simultaneously rotation and illumination invariant. The proposed features are invariant to in-plane material rotation and to the illumination spectrum (colour invariance), and they are robust to local intensity changes (cast shadows) and to the illumination direction. No knowledge of the illumination conditions is required, and recognition is possible from a single training image per material. Material recognition is tested on the currently most realistic visual representation, the Bidirectional Texture Function (BTF), using the CUReT and ALOT texture datasets with more than 250 natural materials. Our proposed features significantly outperform leading alternatives, including Local Binary Patterns (LBP, LBP-HF) and the texton MR8 method.
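For context, the sketch below implements only the plain 8-neighbour LBP histogram, i.e. one of the baseline descriptors the abstract compares against; the proposed Markovian-statistics features are not reproduced, and the function name and distance suggestion are ours.

```python
# Sketch of the plain 8-neighbour LBP descriptor, used here only as the baseline
# the abstract compares against; the proposed Markovian features are not shown.
import numpy as np

def lbp_histogram(gray):
    """gray: 2-D grayscale array (at least 3x3). Returns a 256-bin normalised histogram."""
    c = gray[1:-1, 1:-1]
    # the eight neighbours, clockwise from the top-left pixel
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dr, dc) in enumerate(shifts):
        neigh = gray[1 + dr:gray.shape[0] - 1 + dr, 1 + dc:gray.shape[1] - 1 + dc]
        code |= ((neigh >= c).astype(np.uint8) << bit)
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

# Two textures would then be compared by, e.g., a chi-squared distance between histograms.
```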
38.
Our goal is an automated 2D image-pair registration algorithm capable of aligning images taken of a wide variety of natural and man-made scenes as well as many medical images. The algorithm should handle low overlap, substantial orientation and scale differences, large illumination variations, and physical changes in the scene. An important component of this is the ability to automatically reject pairs that have no overlap or have too many differences to be aligned well. We propose a complete algorithm, including techniques for initialization, for estimating transformation parameters, and for automatically deciding whether an estimate is correct. Keypoints extracted and matched between images are used to generate initial similarity-transform estimates, each accurate over a small region. These initial estimates are rank-ordered and tested individually in succession. Each estimate is refined using the Dual-Bootstrap ICP algorithm, driven by matching of multiscale features. A three-part decision criterion, combining measurements of alignment accuracy, stability in the estimate, and consistency in the constraints, determines whether the refined transformation estimate is accepted as correct. Experimental results on a data set of 22 challenging image pairs show that the algorithm effectively aligns 19 of the 22 pairs and rejects 99.8% of the misalignments that occur when all possible pairs are tried. The algorithm substantially outperforms algorithms based on keypoint matching alone.
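The initialization step described above (a similarity transform estimated from matched keypoints) can be written down compactly as a linear least-squares problem. The sketch below covers only that step under our own naming and synthetic data; the Dual-Bootstrap ICP refinement and the three-part decision criterion from the paper are not reproduced.

```python
# Sketch of the initialization step only: estimating a 2-D similarity transform
# (scale + rotation + translation) from matched keypoints by linear least squares.
import numpy as np

def fit_similarity(src, dst):
    """src, dst: (n, 2) matched keypoints. Returns A (2x2) and t (2,) with dst ~= src @ A.T + t."""
    n = src.shape[0]
    # unknowns x = (a, b, tx, ty) with A = [[a, -b], [b, a]]
    M = np.zeros((2 * n, 4))
    M[0::2, 0], M[0::2, 1], M[0::2, 2] = src[:, 0], -src[:, 1], 1.0
    M[1::2, 0], M[1::2, 1], M[1::2, 3] = src[:, 1], src[:, 0], 1.0
    rhs = dst.reshape(-1)
    a, b, tx, ty = np.linalg.lstsq(M, rhs, rcond=None)[0]
    return np.array([[a, -b], [b, a]]), np.array([tx, ty])

# Example: recover a 30-degree rotation, scale 1.5 and a shift from 4 matches.
rng = np.random.default_rng(1)
src = rng.random((4, 2))
theta, s, t = np.deg2rad(30), 1.5, np.array([2.0, -1.0])
R = s * np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
dst = src @ R.T + t
A_est, t_est = fit_similarity(src, dst)
print(np.allclose(A_est, R), np.allclose(t_est, t))   # -> True True
```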
39.
40.