991.
In this paper we give efficient distributed algorithms computing approximate solutions to general scheduling and matching problems. All approximation guarantees are within a constant factor of the optimum. By “efficient”, we mean that the number of communication rounds is poly-logarithmic in the size of the input. In the scheduling problem, we have a bipartite graph with computing agents on one side and resources on the other. Agents that share a resource can communicate in one time step. Each agent has a list of jobs, each with its own length and profit, to be executed on a neighbouring resource within a given time-window. Each job is also associated with a rational number in the range between zero and one (its width), specifying the amount of resource required by the job. Resources can execute non-preemptively multiple jobs whose total width at any given time is at most one. The goal is to maximize the profit of the jobs that are scheduled. We then adapt our scheduling algorithm to solve the weighted b-matching problem, which is the generalization of the weighted matching problem where, for each vertex v, at most b(v) edges incident to v can be included in the matching. For this problem we obtain a randomized distributed algorithm with an approximation guarantee of $\frac{1}{6+\epsilon}$, for any $\epsilon > 0$. For weighted matching, we devise a deterministic distributed algorithm with the same approximation ratio. To our knowledge, we give the first distributed algorithm for the aforementioned scheduling problem as well as the first deterministic distributed algorithm for weighted matching with poly-logarithmic running time. A very interesting feature of our algorithms is that they are all derived in a systematic manner from primal-dual algorithms.
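To make the b-matching constraint concrete, here is a minimal sketch of a simple sequential greedy heuristic that respects per-vertex capacities b(v); it is only an illustration of the constraint, not the paper's distributed primal-dual algorithm, and the graph representation and names are assumptions.

```python
# Illustrative weighted b-matching under per-vertex capacities b(v):
# a sequential greedy heuristic (NOT the paper's distributed algorithm).
def greedy_b_matching(edges, b):
    """edges: list of (weight, u, v); b: dict vertex -> capacity b(v).
    Selects edges, heaviest first, so that every vertex v is incident
    to at most b(v) selected edges."""
    load = {v: 0 for v in b}                      # edges already chosen at each vertex
    chosen = []
    for w, u, v in sorted(edges, reverse=True):   # heaviest edge first
        if load[u] < b[u] and load[v] < b[v]:     # respect both capacities
            chosen.append((u, v, w))
            load[u] += 1
            load[v] += 1
    return chosen

# Example: with b(v) = 1 for every vertex this reduces to ordinary matching.
edges = [(5, "a", "b"), (4, "b", "c"), (3, "a", "c")]
print(greedy_b_matching(edges, {"a": 1, "b": 2, "c": 1}))
# -> [('a', 'b', 5), ('b', 'c', 4)]
```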
992.
This paper presents a handwriting recognition system that deals with unconstrained handwriting and large vocabularies. The system is based on the segmentation-recognition paradigm, where words are first loosely segmented into characters or pseudocharacters and the final segmentation is obtained during the recognition process, which is carried out with a lexicon. Characters are modeled by multiple hidden Markov models (HMMs), which are concatenated to build up word models. The lexicon is organized as a tree structure, and during decoding, words with similar prefixes share the same computation steps. To avoid an explosion of the search space due to the presence of multiple character models, a lexicon-driven level building algorithm (LDLBA) is used to decode the lexical tree and to choose at each level the most likely models. Bigram probabilities related to the variation of writing styles within the words are inserted between the levels of the LDLBA to improve the recognition accuracy. To further speed up the recognition process, some constraints are added to limit the search effort to the more likely parts of the search space. Experimental results on a dataset of 4674 unconstrained words show that the proposed recognition system achieves recognition rates from 98% for a 10-word vocabulary to 71% for a 30,000-word vocabulary, with recognition times from 9 ms to 18.4 s, respectively.
Received: 8 July 2002, Accepted: 1 July 2003, Published online: 12 September 2003. Correspondence to: Alessandro L. Koerich
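The tree-structured lexicon mentioned above can be sketched as a simple trie in which words sharing a prefix share nodes, and hence share decoding steps; this is only an illustration of the data structure, not of the LDLBA decoder itself, and all names are hypothetical.

```python
# Illustrative trie-structured lexicon: words with a common prefix share
# nodes, so prefix computations in a lexicon-driven decoder are done once.
class LexiconNode:
    def __init__(self):
        self.children = {}      # character -> LexiconNode
        self.is_word = False    # True if a lexicon word ends at this node

def build_lexicon_tree(words):
    root = LexiconNode()
    for word in words:
        node = root
        for ch in word:
            node = node.children.setdefault(ch, LexiconNode())
        node.is_word = True
    return root

def count_nodes(node):
    return 1 + sum(count_nodes(c) for c in node.children.values())

lexicon = ["car", "card", "care", "cat"]
root = build_lexicon_tree(lexicon)
# Four words with 13 characters in total, but shared prefixes ("ca", "car")
# reduce the tree to only 7 nodes (including the root).
print(count_nodes(root))   # -> 7
```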
993.
The length of the longest common subsequence (LCS) between two strings of M and N characters can be computed by an O(M × N) dynamic programming algorithm, which can be executed in O(M + N) steps by a linear systolic array. It has recently been shown that the LCS between run-length-encoded (RLE) strings of m and n runs can be computed by an O(nM + Nm − nm) algorithm that could be executed in O(m + n) steps by parallel hardware. However, that algorithm cannot be directly mapped onto a linear systolic array because of its irregular structure. In this paper, we propose a modified algorithm that exhibits a more regular structure at the cost of a marginal reduction in the efficiency of RLE. We outline the algorithm and discuss its mapping onto a linear systolic array.
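For reference, the classical O(M × N) dynamic program mentioned above can be sketched as follows; this is the standard textbook recurrence on plain strings, not the run-length-encoded variant proposed in the paper.

```python
# Standard O(M*N) dynamic program for the length of the longest common
# subsequence of two plain (not run-length-encoded) strings.
def lcs_length(a: str, b: str) -> int:
    M, N = len(a), len(b)
    # dp[i][j] = LCS length of the prefixes a[:i] and b[:j]
    dp = [[0] * (N + 1) for _ in range(M + 1)]
    for i in range(1, M + 1):
        for j in range(1, N + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[M][N]

print(lcs_length("aaabbb", "ababab"))  # -> 4 (e.g. "aabb")
```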
994.
We present the design of a BPEL orchestration engine based on ReSpecT tuple centres, a coordination model extending Linda with the ability to declaratively program the reactive behaviour of tuple spaces. Architectural and linguistic aspects of our solution are discussed, focussing on how the syntax and semantics of BPEL have been mapped to tuple centres. This is achieved by translating BPEL specifications into sets of logic tuples, and by conceiving the execution cycle of the orchestration engine in terms of ReSpecT reactions.
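As background for the coordination model named above, here is a hedged, minimal sketch of Linda-style tuple-space primitives (out/rd/in); it shows only the Linda core that ReSpecT extends with programmable reactions, it is not the ReSpecT language or the orchestration engine, and all names and the BPEL-flavoured example tuples are assumptions.

```python
# Minimal, illustrative Linda-style tuple space: out() inserts a tuple,
# rd() reads a matching tuple, in_() reads and removes it.
# None acts as a wildcard in templates.
class TupleSpace:
    def __init__(self):
        self.tuples = []

    def out(self, tup):
        self.tuples.append(tup)

    def _match(self, template, tup):
        return len(template) == len(tup) and all(
            t is None or t == x for t, x in zip(template, tup))

    def rd(self, template):
        for tup in self.tuples:
            if self._match(template, tup):
                return tup
        return None

    def in_(self, template):           # 'in' is a Python keyword
        tup = self.rd(template)
        if tup is not None:
            self.tuples.remove(tup)
        return tup

ts = TupleSpace()
ts.out(("invoke", "orderService", "confirmOrder"))
print(ts.rd(("invoke", None, None)))   # read without removing
print(ts.in_(("invoke", None, None)))  # read and consume
print(ts.rd(("invoke", None, None)))   # -> None, tuple already consumed
```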
995.
PHANTOM is a tree-level Monte Carlo for six-parton final states at proton-proton, proton-antiproton and electron-positron colliders at $\mathcal{O}(\alpha_{\mathrm{em}}^{6})$ and $\mathcal{O}(\alpha_{\mathrm{em}}^{4}\alpha_{s}^{2})$, including possible interferences between the two sets of diagrams. This comprises all purely electroweak contributions as well as all contributions with one virtual or two external gluons. It can generate unweighted events for any set of processes and is interfaced to parton shower and hadronization packages via the latest Les Houches Accord protocol. It can be used to analyze the physics of boson-boson scattering, Higgs boson production in boson-boson fusion, and three-boson production.

Program summary

Program title: PHANTOM (V. 1.0)
Catalogue identifier: AECE_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECE_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 175 787
No. of bytes in distributed program, including test data, etc.: 965 898
Distribution format: tar.gz
Programming language: Fortran 77
Computer: Any with a UNIX/LINUX-compatible Fortran compiler
Operating system: UNIX, LINUX
RAM: 500 MB
Classification: 11.1
External routines: LHAPDF (Les Houches Accord PDF Interface, http://projects.hepforge.org/lhapdf/), CIRCE (beamstrahlung for the e+e− ILC collider).
Nature of problem: Six-fermion final-state processes have become important with the increase of collider energies and are essential for the study of top, Higgs and electroweak symmetry-breaking physics at high-energy colliders. Since thousands of Feynman diagrams contribute to a single process and events corresponding to hundreds of different final states need to be generated, a fast and stable calculation is needed.
Solution method: PHANTOM is a tree-level Monte Carlo for six-parton final states at proton-proton, proton-antiproton and electron-positron colliders. It computes all amplitudes at $\mathcal{O}(\alpha_{\mathrm{em}}^{6})$ and $\mathcal{O}(\alpha_{\mathrm{em}}^{4}\alpha_{s}^{2})$, including possible interferences between the two sets of diagrams. The matrix elements are computed with the helicity formalism implemented in the program PHACT [1]. The integration uses an iterative-adaptive multichannel method which, relying on adaptivity, allows the use of only a few channels per process. Unweighted event generation can be performed for any set of processes, and the program is interfaced to parton shower and hadronization packages via the latest Les Houches Accord protocol.
Restrictions: All Feynman diagrams are computed at LO.
Unusual features: PHANTOM is written in Fortran 77 but makes use of structures. The g77 compiler cannot compile it, as it does not recognize structures. The Intel, Portland Group, and Tru64 HP Fortran 77 or Fortran 90 compilers have been tested and can be used.
Running time: A few hours for a cross-section integration of one process at per-mille accuracy. One hour for one thousand unweighted events.
References:
[1] A. Ballestrero, E. Maina, Phys. Lett. B 350 (1995) 225, hep-ph/9403244; A. Ballestrero, PHACT 1.0, Program for helicity amplitude calculations with Tau matrices, hep-ph/9911318, in: B.B. Levchenko, V.I. Savrin (Eds.), Proceedings of the 14th International Workshop on High Energy Physics and Quantum Field Theory (QFTHEP 99), SINP MSU, Moscow, p. 303.
996.
We consider the problem of trajectory planning and control for an XYnR planar robot with the first two joints (rotational or prismatic) actuated and n rotational passive joints, moving both in the presence and in the absence of gravity. Under the assumption that each passive link is attached at the center of percussion of the previous passive link, the dynamics of the system can be expressed through the behavior of n special points of the plane. These points are called link-related acceleration points (LRAP), since their instantaneous acceleration is oriented along the axis of the related passive link. Moreover, the LRAP dynamics have a backward recursive form which can be exploited to recursively design a dynamic feedback that completely linearizes the system equations. We use this approach to solve trajectory planning and tracking problems and report simulation results obtained for an RR2R robot having the first two rotational joints actuated. © 2003 Wiley Periodicals, Inc.
997.
998.
The effect of different roasting conditions on the antioxidant properties and the phenolic content of quinoa seeds was studied. Advanced and final products of the Maillard reaction were also quantified in order to evaluate their contribution to the DPPH radical scavenging capacity and reducing power of the samples. In general, response surface analysis showed significant increases in the phenolic content, the antioxidant activity and the level of Maillard reaction products (MRPs), mainly as the processing temperature increased, while roasting time had a minor impact on these response variables. The highest antioxidant activity was achieved in extracts of quinoa seeds roasted at 190 °C for 30 min. Principal component analysis applied to the data suggested that MRPs made a greater contribution to the antioxidant properties than phenolic compounds in the processed samples. These results demonstrate that roasted quinoa seeds/flour may be considered a nontraditional ingredient with enhanced antioxidant capacity for the production of functional foods.
999.
Allergy to cow's milk proteins is a challenging condition in early infancy. Allergic infants may be predisposed to impairments of growth, from either the disease itself or the nutritional constraints of the exclusion diet they must follow. Formulae based on extensively hydrolyzed cow's milk proteins are widely used: they represent the therapy, constitute 100% of the nutrient intake in the first four to six months of life, and provide half the daily nutrient intake in the second half of the first year. In some cases, these products are also used for preventive purposes. Some impairments in growth have been reported for infants using these products, although mostly limited to the first year of life, with no apparent consequences in either the medium or the long term. The macronutrient content of infant formulae based on protein hydrolysates, whatever the source, should be carefully tested not only for the optimal utilization of nitrogenous sources but also for the nature and metabolic fate of the non-nitrogen caloric sources, represented by carbohydrates and fats, and of micronutrients, particularly iron. It is recommended that studies aimed at the allergologic effects of these products also include an appropriate nutritional evaluation to determine their efficiency.
1000.
High-resolution, detailed 3D reconstructions of biological specimens obtained from scanning electron microscopy (SEM) stereo-micrographs and proprietary software were compared with Tapping-Mode AFM datasets of the same fields. The reconstruction software implements several original solutions, including a neural adaptive point-matching technique, the ability to build an irregular triangulated mesh rather than a regular orthogonal grid, and the ability to re-map one of the original images exactly onto the reconstructed surface. The technique was applied to human nerve tissue to obtain 1,424 × 968-pixel, texture-mapped datasets, which were subsequently compared against 512 × 512-pixel AFM datasets from the same viewfields. Accounting for the inherent differences between the two techniques, direct comparison revealed an excellent visual match. The correspondence was also quantified by calculating the cross-correlation coefficient between corresponding altimetric profiles in the SEM and AFM data, which consistently exceeded 0.9, with a rate of point mismatch on the order of 0.01%. Research is still underway to improve the robustness of the technique when applied to arbitrary images.
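The profile comparison described above amounts to a normalized (zero-lag) cross-correlation between corresponding height profiles; a minimal sketch follows, assuming simple 1-D NumPy arrays for the SEM- and AFM-derived profiles (the array names and synthetic data are illustrative only, not the authors' datasets).

```python
# Illustrative normalized cross-correlation coefficient between two
# corresponding altimetric profiles, as used to quantify SEM-vs-AFM agreement.
import numpy as np

def cross_correlation(profile_a, profile_b):
    """Pearson correlation coefficient between two 1-D height profiles."""
    a = profile_a - profile_a.mean()
    b = profile_b - profile_b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

x = np.linspace(0.0, 10.0, 512)
sem_profile = np.sin(x) + 0.5 * x                          # synthetic SEM-derived heights
afm_profile = sem_profile + 0.05 * np.random.randn(512)    # same profile with small noise

print(cross_correlation(sem_profile, afm_profile))  # close to 1 for well-matched profiles
```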