961.
In this paper we address several issues arising from a singularly perturbed fourth-order problem with small parameter ε. First, we introduce a new family of non-conforming elements. We then prove that the corresponding finite element method is robust with respect to the parameter ε and uniformly convergent of order h^{1/2}. In addition, we analyze the effect of treating the Neumann boundary condition weakly by Nitsche's method. We show that such treatment is superior when the parameter ε is smaller than the mesh size h, and we obtain sharper error estimates. This error analysis is not restricted to the proposed elements and can easily be carried over to other elements, as long as the Neumann boundary condition is imposed weakly. Finally, we discuss local error estimates and the pollution effect of the boundary layers in the interior of the domain.
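A common prototype of such a singularly perturbed fourth-order problem, shown here only as an illustration (the paper's exact formulation may differ), is

\[
\varepsilon^{2}\,\Delta^{2}u_{\varepsilon}-\Delta u_{\varepsilon}=f \quad\text{in }\Omega,
\qquad
u_{\varepsilon}=\partial_{n}u_{\varepsilon}=0 \quad\text{on }\partial\Omega,
\]

and the uniform convergence claim of the abstract then takes the schematic form

\[
\|u_{\varepsilon}-u_{h}\|_{\varepsilon,h}\;\le\;C\,h^{1/2},
\]

where \(C\) is independent of both \(\varepsilon\) and the mesh size \(h\), and \(\|\cdot\|_{\varepsilon,h}\) denotes a suitable (here unspecified) ε-weighted, mesh-dependent energy norm.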
962.
Free-Form Deformation (FFD) techniques are commonly used to generate animations, where a polygonal approximation of the final object suffices for visualization purposes. For some CAD/CAM applications, however, we need an explicit expression of the object rather than a collection of sampled points. If both the object and the deformation are polynomial, their composition is also polynomial, albeit of very high degree, which is undesirable in real applications. To solve this problem, we transform each curve or surface composing the object, usually expressed in the Bernstein basis, to a modified Newton form, the two-point analogue of the Taylor expansion. In this representation the composition admits a simple expression in terms of discrete convolutions, and degree reduction corresponding to Hermite approximation is trivial: one simply drops the high-degree coefficients. Furthermore, degree reduction can be incorporated into the composition itself. Finally, the deformed curve or surface is converted back to the Bernstein form. The method extends to general non-polynomial deformations, such as bending and twisting, by computing a polynomial approximant of the deformation.
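To make the degree-growth issue concrete, here is a minimal Python/SymPy sketch (not the paper's Newton-form algorithm; the cubic Bézier curve and the quadratic deformation are illustrative assumptions) that composes a polynomial deformation with a Bézier curve exactly and reports the resulting degree:

```python
import sympy as sp

t = sp.symbols('t')

def bezier(control_points, t):
    """Evaluate a Bezier curve from its Bernstein-basis control points."""
    n = len(control_points) - 1
    dim = len(control_points[0])
    return [sp.expand(sum(sp.binomial(n, i) * t**i * (1 - t)**(n - i) * p[k]
                          for i, p in enumerate(control_points)))
            for k in range(dim)]

# A planar cubic Bezier curve (degree 3 in t); control points are illustrative.
curve = bezier([(0, 0), (1, 2), (2, -1), (3, 0)], t)

# A toy polynomial deformation D(x, y) of total degree 2.
x, y = sp.symbols('x y')
deformation = (x + sp.Rational(1, 10) * y**2,
               y + sp.Rational(1, 10) * x * y)

# Exact composition D(C(t)): still polynomial, but of much higher degree (here 6).
composed = [sp.expand(d.subs({x: curve[0], y: curve[1]})) for d in deformation]
print([sp.degree(c, t) for c in composed])
```

An implementation following the paper would instead convert the Bernstein coefficients to the modified Newton form, carry out the composition via discrete convolutions, and truncate high-degree coefficients for Hermite-type degree reduction.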
963.
964.
We provide a Mathematica code for decomposing strongly correlated quantum states, described by a first-quantized analytical wave function, into many-body Fock states. Within them, the single-particle occupations refer to the subset of Fock–Darwin functions with no nodes. Such states, commonly appearing in two-dimensional systems subjected to gauge fields, were first discussed in the context of quantum Hall physics and are nowadays very relevant in the field of ultracold quantum gases. As important examples, we explicitly apply our decomposition scheme to the prominent Laughlin and Pfaffian states. This allows for easily calculating the overlap between arbitrary states and these highly correlated test states, and thus provides a useful tool to classify correlated quantum systems. Furthermore, we can directly read off the angular momentum distribution of a state from its decomposition. Finally we make use of our code to calculate the normalization factors for Laughlin's famous quasi-particle/quasi-hole excitations, from which we gain insight into the intriguing fractional behavior of these excitations.

Program summary
Program title: Strongdeco
Catalogue identifier: AELA_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELA_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 5475
No. of bytes in distributed program, including test data, etc.: 31 071
Distribution format: tar.gz
Programming language: Mathematica
Computer: Any computer on which Mathematica can be installed
Operating system: Linux, Windows, Mac
Classification: 2.9
Nature of problem: Analysis of strongly correlated quantum states.
Solution method: The program makes use of the tools developed in Mathematica to deal with multivariate polynomials to decompose analytical strongly correlated states of bosons and fermions into a standard many-body basis. Operations with polynomials, determinants and permanents are the basic tools.
Running time: The distributed notebook takes a couple of minutes to run.
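For readers without Mathematica, the core idea of such a decomposition can be sketched in Python/SymPy. This is not the authors' Strongdeco code; the particle number, the Laughlin exponent, and the omission of Fock-state normalization factors are simplifying assumptions.

```python
from collections import defaultdict
from itertools import combinations
import sympy as sp

N = 3                                    # particle number (illustrative)
z = sp.symbols(f'z0:{N}')

# Unnormalized bosonic Laughlin state at filling 1/2: prod_{i<j} (z_i - z_j)^2.
laughlin = sp.expand(sp.Mul(*[(z[i] - z[j])**2
                              for i, j in combinations(range(N), 2)]))

# Group monomial coefficients by the multiset of single-particle angular momenta,
# i.e. by which node-free lowest-Landau-level orbitals z^n are occupied; up to
# normalization these are the Fock-state amplitudes.
amplitudes = defaultdict(int)
for exponents, coeff in laughlin.as_poly(*z).terms():
    occupation = tuple(sorted(exponents))        # e.g. (0, 2, 4)
    amplitudes[occupation] += coeff

for occupation, amplitude in sorted(amplitudes.items()):
    print(occupation, amplitude)                 # total angular momentum is always 6
```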
965.
Privacy preserving algorithms allow several participants to compute a global function collaboratively without revealing local information to each other. Examples of applications include trust management, collaborative filtering, and ranking algorithms such as PageRank. Most solutions that can be proven to be privacy preserving theoretically are not appropriate for highly unreliable, large-scale, distributed environments such as peer-to-peer (P2P) networks, because they either require centralized components or a high degree of synchronism among the participants. At the same time, in P2P networks privacy preservation is becoming a key requirement. Here, we propose an asynchronous privacy preserving communication layer for an important class of iterative computations in P2P networks, in which each peer periodically computes a linear combination of data stored at its neighbors. Our algorithm tolerates realistic rates of message drop and delay, and node churn, and has a low communication overhead. We perform simulation experiments to compare our algorithm to related work. The problem we use as an example is power iteration (a method used to calculate the dominant eigenvector of a matrix), since eigenvector computation is at the core of several practical applications. We demonstrate that our novel algorithm converges in the presence of realistic node churn, message drop rates, and message delays, even when previous synchronized solutions are able to make almost no progress.
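For reference, the example computation named above, power iteration, looks as follows in a plain centralized form (a NumPy sketch; the paper's contribution, the asynchronous privacy-preserving P2P layer wrapped around such linear iterations, is not reproduced here):

```python
import numpy as np

def power_iteration(A, num_iters=1000, tol=1e-10, rng=None):
    """Return a dominant eigenvector of A and the corresponding Rayleigh quotient."""
    rng = np.random.default_rng(0) if rng is None else rng
    v = rng.random(A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(num_iters):
        w = A @ v                       # each entry is a linear combination of
        w /= np.linalg.norm(w)          # neighbor values, as in the paper's setting
        if np.linalg.norm(w - v) < tol:
            break
        v = w
    return v, v @ A @ v                 # Rayleigh quotient as eigenvalue estimate

A = np.array([[2.0, 1.0], [1.0, 3.0]])
vec, val = power_iteration(A)
print(val, vec)                         # dominant eigenvalue is (5 + sqrt(5)) / 2
```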
966.
In this paper we present TangiWheel, a collection manipulation widget for tabletop displays. Our implementation is flexible, allowing multi-touch or tangible interaction, or even a hybrid scheme to better suit user choice and convenience. Different TangiWheel aspects and features are compared with other existing widgets for collection manipulation. The study reveals that TangiWheel is the first proposal to support a hybrid input modality with a large degree of resemblance between the touch and tangible interaction styles. Several experiments were conducted to evaluate the techniques used in each input scheme, for a better understanding of tangible surface interfaces in complex tasks performed by a single user (e.g., involving a typical master-slave exploration pattern). The results show that tangibles perform significantly better than fingers, despite dealing with a greater number of interactions, in situations that require a large number of acquisitions and basic manipulation tasks such as establishing location and orientation. However, when users have to perform multiple exploration and selection operations that do not require previous basic manipulation tasks, for instance when collections are fixed in the interface layout, touch input is significantly better in terms of required time and number of actions. Finally, when a more elastic collection layout or more complex additional insertion or displacement operations are needed, the hybrid and tangible approaches clearly outperform finger-based interactions.
967.
Several areas of knowledge are benefiting from reduced computing times achieved with graphics processing unit (GPU) technology and the compute unified device architecture (CUDA) platform. In the case of evolutionary algorithms, which are inherently parallel, this technology may be advantageous for running experiments that demand high computing time. In this paper, we provide an implementation of a co-evolutionary differential evolution (DE) algorithm in C-CUDA for solving min–max problems. The algorithm was tested on a suite of well-known benchmark optimization problems, and the computing time has been compared with that of the same algorithm implemented in C. The results demonstrate that the computing time can be significantly reduced and scalability improved using C-CUDA. As far as we know, this is the first implementation of a co-evolutionary DE algorithm in C-CUDA.
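The following is a minimal single-threaded sketch of a co-evolutionary DE for min–max problems in Python/NumPy, not the paper's C-CUDA implementation; the population sizes, DE parameters, and toy saddle-point objective are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x, y: np.sum(x**2) - np.sum(y**2)      # toy saddle problem, optimum at (0, 0)

def de_step(pop, fitness, F=0.5, CR=0.9, lo=-5.0, hi=5.0):
    """One DE/rand/1/bin generation; a trial vector replaces its parent if it is fitter."""
    n, d = pop.shape
    new_pop = pop.copy()
    for i in range(n):
        others = [j for j in range(n) if j != i]
        a, b, c = pop[rng.choice(others, 3, replace=False)]
        mutant = np.clip(a + F * (b - c), lo, hi)
        cross = rng.random(d) < CR
        cross[rng.integers(d)] = True             # guarantee at least one mutated gene
        trial = np.where(cross, mutant, pop[i])
        if fitness(trial) < fitness(pop[i]):      # DE minimizes the given fitness
            new_pop[i] = trial
    return new_pop

dim, pop_size, generations = 2, 20, 100
X = rng.uniform(-5, 5, (pop_size, dim))           # minimizer population
Y = rng.uniform(-5, 5, (pop_size, dim))           # maximizer population

for _ in range(generations):
    # Co-evolution: each population is scored against the current opponent population.
    fit_x = lambda x: max(f(x, y) for y in Y)     # x minimizes its worst case over Y
    fit_y = lambda y: -min(f(x, y) for x in X)    # y maximizes the best response of X
    X = de_step(X, fit_x)
    Y = de_step(Y, fit_y)

best_x = X[np.argmin([fit_x(x) for x in X])]
print(best_x, fit_x(best_x))                      # should approach the saddle point x = 0
```

The per-individual mutation, crossover, and opponent-population fitness evaluations in the loops above are the parts that a CUDA implementation would naturally parallelize across threads.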
968.
969.
Nowadays, especially after the recent financial downturn, companies are looking for much more efficient and creative business processes. They need to place better solutions in the market in less time and at lower cost. There is a general intuition that communication and collaboration, especially when mixed with a Web 2.0 approach within companies and ecosystems, can boost the innovation process, with positive impacts on business indicators. Open Innovation within an Enterprise 2.0 context is one of the most widely adopted paradigms for improving the innovation processes of enterprises, based on the collaborative creation and development of ideas and products. The key feature of this new paradigm is that knowledge is exploited collaboratively, flowing not only among internal sources, i.e. R&D departments, but also among external ones such as other employees, customers, and partners. In this paper we show how an ontology-based analysis of plain text can provide a semantic contextualization of content, support tasks such as finding the semantic distance between contents, and help in creating relations between people with shared knowledge and interests. We also present the results obtained by adopting this technology in large corporate environments: Bankinter, a financial institution; Telefonica I+D, an international telecommunications firm; and Repsol, a major oil company in Spain.
970.
Estimation of the semantic likeness between words is of great importance in many applications dealing with textual data, such as natural language processing, knowledge acquisition, and information retrieval. Semantic similarity measures exploit knowledge sources as the basis for their estimations. In recent years, ontologies have grown in interest thanks to global initiatives such as the Semantic Web, offering a structured knowledge representation. Thanks to the possibilities that ontologies enable regarding the semantic interpretation of terms, many ontology-based similarity measures have been developed. According to the principle on which those measures base the similarity assessment, and the way in which ontologies are exploited or complemented with other sources, several families of measures can be identified. In this paper, we survey and classify most of the ontology-based approaches developed in order to evaluate their advantages and limitations and to compare their expected performance from both theoretical and practical points of view. We also present a new ontology-based measure relying on the exploitation of taxonomical features. The evaluation and comparison of our approach's results against those reported by related works under a common framework suggest that our measure provides high accuracy without some of the limitations observed in other works.
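As a point of reference for taxonomy-based measures, here is a sketch of a classic edge-counting approach (Wu & Palmer). It is a well-known baseline, not the new measure proposed in the paper, and the toy taxonomy is an illustrative assumption.

```python
# sim(c1, c2) = 2 * depth(LCS) / (depth(c1) + depth(c2)), with LCS the least common subsumer.

TOY_TAXONOMY = {                  # child -> parent (single-inheritance toy ontology)
    'entity': None,
    'animal': 'entity',
    'plant': 'entity',
    'dog': 'animal',
    'cat': 'animal',
    'oak': 'plant',
}

def path_to_root(concept, taxonomy):
    """Return the list [concept, parent, ..., root]."""
    path = [concept]
    while taxonomy[concept] is not None:
        concept = taxonomy[concept]
        path.append(concept)
    return path

def wu_palmer(c1, c2, taxonomy):
    p1, p2 = path_to_root(c1, taxonomy), path_to_root(c2, taxonomy)
    lcs = next(c for c in p1 if c in p2)               # least common subsumer
    depth = lambda c: len(path_to_root(c, taxonomy))   # the root has depth 1
    return 2 * depth(lcs) / (depth(c1) + depth(c2))

print(wu_palmer('dog', 'cat', TOY_TAXONOMY))   # 0.666..., they share 'animal'
print(wu_palmer('dog', 'oak', TOY_TAXONOMY))   # 0.333..., they share only 'entity'
```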