Full-text access type
Paid full text | 1155 articles |
Free | 77 articles |
Free (domestic) | 3 articles |
Subject classification
Electrical engineering | 20 articles |
Chemical industry | 324 articles |
Metalworking | 12 articles |
Machinery & instrumentation | 29 articles |
Building science | 48 articles |
Mining engineering | 2 articles |
Energy & power | 41 articles |
Light industry | 244 articles |
Hydraulic engineering | 14 articles |
Oil & natural gas | 14 articles |
Radio | 59 articles |
General industrial technology | 165 articles |
Metallurgical industry | 96 articles |
Atomic energy technology | 18 articles |
Automation technology | 149 articles |
Publication year
2024 | 4 articles |
2023 | 16 articles |
2022 | 40 articles |
2021 | 78 articles |
2020 | 36 articles |
2019 | 55 articles |
2018 | 48 articles |
2017 | 45 articles |
2016 | 50 articles |
2015 | 27 articles |
2014 | 55 articles |
2013 | 102 articles |
2012 | 78 articles |
2011 | 104 articles |
2010 | 60 articles |
2009 | 61 articles |
2008 | 51 articles |
2007 | 45 articles |
2006 | 47 articles |
2005 | 27 articles |
2004 | 17 articles |
2003 | 16 articles |
2002 | 14 articles |
2001 | 5 articles |
2000 | 11 articles |
1999 | 10 articles |
1998 | 28 articles |
1997 | 16 articles |
1996 | 16 articles |
1995 | 9 articles |
1994 | 9 articles |
1993 | 9 articles |
1992 | 6 articles |
1990 | 3 articles |
1989 | 4 articles |
1988 | 4 articles |
1987 | 4 articles |
1986 | 1 article |
1985 | 4 articles |
1984 | 1 article |
1983 | 1 article |
1982 | 1 article |
1980 | 1 article |
1979 | 2 articles |
1977 | 1 article |
1976 | 4 articles |
1973 | 3 articles |
1972 | 2 articles |
1968 | 2 articles |
1967 | 1 article |
Sort by: 1,235 results found, search time 15 ms
21.
Paulo Anselmo da Mota Silveira Neto Ivan do Carmo Machado John D. McGregor Eduardo Santana de Almeida Silvio Romero de Lemos Meira 《Information and Software Technology》2011,53(5):407-423
Context
In software development, testing is an important mechanism both to identify defects and to assure that completed products work as specified. This is common practice in single-system development, and it continues to hold in Software Product Lines (SPL). Even though extensive research has been done in the field of SPL testing, it is necessary to assess the current state of research and practice, in order to provide practitioners with evidence that fosters its further development.
Objective
This paper focuses on testing in SPL, with the following goals: investigate state-of-the-art testing practices, synthesize available evidence, and identify gaps between required techniques and the approaches available in the literature.
Method
A systematic mapping study was conducted with a set of nine research questions, in which 120 studies, dated from 1993 to 2009, were evaluated.
Results
Although several aspects of testing are covered by single-system development approaches, many cannot be applied directly in the SPL context due to SPL-specific issues. In addition, particular SPL aspects are not covered by the existing SPL approaches, and where they are covered, the literature gives only brief overviews. This scenario indicates that additional empirical and practical investigation should be performed.
Conclusion
The results can help in understanding the needs in SPL testing by identifying points that still require additional investigation, since important aspects of software product lines have not yet been addressed.
22.
Jesús P. Mena-Chalco Ives Macêdo Luiz Velho Roberto M. Cesar Jr. 《The Visual computer》2009,25(10):899-909
In this paper, we present a 3D face photography system based on a facial expression training dataset composed of both facial range images (3D geometry) and facial texture (2D photography). The proposed system obtains a 3D geometry representation of a face provided as a 2D photograph, which undergoes a series of transformations through the estimated texture and geometry spaces. In the training phase, facial landmarks are obtained by an active shape model (ASM) extracted from the 2D gray-level photograph. Principal component analysis (PCA) is then used to represent the face dataset, defining an orthonormal basis of texture and another of geometry. In the reconstruction phase, the input is a face image to which the ASM is matched. The extracted facial landmarks and the face image are fed through the PCA basis transform, and a 3D version of the 2D input image is built. Experimental tests using a new dataset of 70 facial expressions from ten subjects as the training set show rapidly reconstructed 3D faces that maintain spatial coherence similar to human perception, corroborating the efficiency and applicability of the proposed system.
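The reconstruction phase described above can be illustrated with a toy sketch: the input's coefficients in an orthonormal texture basis are reused against a paired geometry basis to produce a 3D estimate. All bases and vectors below are hypothetical stand-ins, and carrying the coefficients over directly is a simplification of the paper's full chain of texture-to-geometry transformations.

```python
def project(vec, basis, mean):
    """Coefficients of (vec - mean) in an orthonormal basis."""
    centered = [v - m for v, m in zip(vec, mean)]
    return [sum(c * b for c, b in zip(centered, bvec)) for bvec in basis]

def reconstruct(coeffs, basis, mean):
    """Linear combination of basis vectors added back onto the mean."""
    out = list(mean)
    for c, bvec in zip(coeffs, basis):
        for i, b in enumerate(bvec):
            out[i] += c * b
    return out

# Toy 4-dimensional "texture" space with a single unit-norm basis vector,
# paired with a toy 3-dimensional "geometry" space.
tex_mean = [0.5, 0.5, 0.5, 0.5]
tex_basis = [[0.5, 0.5, 0.5, 0.5]]
geo_mean = [0.0, 0.0, 1.0]
geo_basis = [[0.0, 0.0, 1.0]]

face2d = [1.0, 1.0, 1.0, 1.0]                 # stand-in for a 2D input face
coeffs = project(face2d, tex_basis, tex_mean)  # texture coefficients
face3d = reconstruct(coeffs, geo_basis, geo_mean)
print(coeffs, face3d)
```

In the actual system the basis vectors come from PCA on the training set and the input is first aligned by the matched ASM landmarks; the sketch only shows the project-then-reconstruct step.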
23.
The bilateral filter is a nonlinear filter that smoothes a signal while preserving strong edges. It has demonstrated great effectiveness for a variety of problems in computer vision and computer graphics, and fast versions have been proposed. Unfortunately, little is known about the accuracy of such accelerations. In this paper, we propose a new signal-processing analysis of the bilateral filter which complements the recent studies that analyzed it as a PDE or as a robust statistical estimator. The key to our analysis is to express the filter in a higher-dimensional space where the signal intensity is added to the original domain dimensions. Importantly, this signal-processing perspective allows us to develop a novel bilateral filtering acceleration using downsampling in space and intensity. This affords a principled expression of accuracy in terms of bandwidth and sampling. The bilateral filter can be expressed as linear convolutions in this augmented space followed by two simple nonlinearities. This allows us to derive criteria for downsampling the key operations and achieving important acceleration of the bilateral filter. We show that, for the same running time, our method is more accurate than previous acceleration techniques. Typically, we are able to process a 2 megapixel image using our acceleration technique in less than a second, and have the result be visually similar to the exact computation that takes several tens of minutes. The acceleration is most effective with large spatial kernels. Furthermore, this approach extends naturally to color images and cross bilateral filtering.
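For reference, the exact (unaccelerated) computation that the paper speeds up can be sketched as a brute-force bilateral filter; here on a 1D signal, with Gaussian spatial and range kernels and hypothetical parameter values.

```python
import math

def bilateral_1d(signal, sigma_s, sigma_r):
    """Brute-force 1D bilateral filter: each output sample is a normalized
    average of neighbors, weighted by both spatial distance (sigma_s) and
    intensity difference (sigma_r), so strong edges are preserved."""
    out = []
    radius = int(3 * sigma_s)
    for i, center in enumerate(signal):
        num, den = 0.0, 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2))
                 * math.exp(-((signal[j] - center) ** 2) / (2 * sigma_r ** 2)))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

# A noisy step: smoothing flattens each plateau but keeps the jump sharp,
# which a plain Gaussian blur would wash out.
step = [0.0, 0.1, 0.0, 0.1, 1.0, 0.9, 1.0, 0.9]
print(bilateral_1d(step, sigma_s=2.0, sigma_r=0.3))
```

The paper's acceleration replaces this quadratic-cost loop with splatting into a downsampled space-intensity grid, a linear convolution there, and a slicing/normalization step; the sketch above is only the exact baseline.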
24.
25.
New, simple proofs of soundness (every representable function lies in a given complexity class) for Elementary Affine Logic, LFPL and Soft Affine Logic are presented. The proofs are obtained by instantiating a semantic framework previously introduced by the authors, based on an innovative modification of realizability. They are a notable simplification of the original, already semantic, proofs of soundness for the above-mentioned logical systems and programming languages. A new result made possible by the semantic framework is the addition of polymorphism and a modality to LFPL, allowing an internal definition of inductive datatypes. The methodology proceeds by assigning both abstract resource bounds, in the form of elements of a resource monoid, and resource-bounded computations to proofs (respectively, programs).
26.
Luiz Marcio Cysneiros Julio Cesar Sampaio do Prado Leite Jaime de Melo Sabat Neto 《Requirements Engineering》2001,6(2):97-115
The development of complex information systems calls for conceptual models that describe aspects beyond entities and activities. In particular, recent research has pointed out that conceptual models need to model goals, in order to capture the intentions which underlie complex situations within an organisational context. This paper focuses on one class of goals, namely non-functional requirements (NFR), which need to be captured and analysed from the very early phases of the software development process. The paper presents a framework for integrating NFRs into the ER and OO models. This framework has been validated by two case studies, one of which is very large. The results of the case studies suggest that goal modelling during early phases can lead to a more productive and complete modelling activity.
27.
BCI Meeting 2005--workshop on signals and recording methods. Cited by 4 (self-citations: 0, citations by others: 4)
Jonathan R Wolpaw Gerald E Loeb Brendan Z Allison Emanuel Donchin Omar Feix do Nascimento William J Heetderks Femke Nijboer William G Shain James N Turner 《IEEE transactions on neural systems and rehabilitation engineering》2006,14(2):138-141
This paper describes the highlights of presentations and discussions during the Third International BCI Meeting in a workshop that evaluated potential brain-computer interface (BCI) signals and currently available recording methods. It defined the main potential user populations and their needs, addressed the relative advantages and disadvantages of noninvasive and implanted (i.e., invasive) methodologies, considered ethical issues, and focused on the challenges involved in translating BCI systems from the laboratory to widespread clinical use. The workshop stressed the critical importance of developing useful applications that establish the practical value of BCI technology.
28.
Nair do Amaral Sampaio Neta José Cleiton Sousa dos Santos Soraya de Oliveira Sancho Sueli Rodrigues Luciana Rocha Barros Gonçalves Ligia R. Rodrigues José A. Teixeira 《Food Hydrocolloids》2012
Sugar esters are compounds with surfactant properties (biosurfactants), i.e., capable of reducing surface tension and promoting the emulsification of immiscible liquids. On the other hand, as with all emulsions, coconut milk is not physically stable and is prone to phase separation. Therefore, the aim of this work was to evaluate the synthesis of fructose, sucrose and lactose esters from the corresponding sugars using Candida antarctica type B lipase immobilized on two different supports, namely acrylic resin and chitosan, and to evaluate their application in the stabilization of coconut milk emulsions. The enzyme immobilized on chitosan showed the highest yield of lactose ester production (84.1%). Additionally, the production of fructose ester was higher for the enzyme immobilized on the acrylic resin support (74.3%) than for the one immobilized on chitosan (70.1%). The same trend was observed for the sucrose ester, although with lower yields. The sugar esters were then added to samples of fresh coconut milk, which were characterized according to their surface tension, emulsification index and particle size distribution. Although microscopic analysis showed similar results for all sugar esters, the results indicated lactose ester as the best biosurfactant, with a surface tension of 38.0 mN/m and an emulsification index of 54.1%, when used in a ratio of 1:10 (biosurfactant:coconut milk, v/v) in 48 h experiments.
29.
30.
João Paulo Gois Diogo Fernando Trevisan Harlen Costa Batagelo Ives Macêdo 《The Visual computer》2013,29(6-8):651-661
In this work we investigate a generalized interpolation approach using radial basis functions to reconstruct implicit surfaces from polygonal meshes. With this method, the user can define, with great flexibility, three sets of interpolation constraints: points, normals, and tangents, allowing computational complexity, precision, and feature modeling to be balanced. Furthermore, this flexibility makes it possible to avoid untrustworthy information, such as normals estimated on triangles with bad aspect ratios. We present results of the method for applications related to modeling 2D curves from polygons and 3D surfaces from polygonal meshes. We also apply the method to problems involving subdivision surfaces and front-tracking of moving boundaries. Finally, as our technique generalizes the recently proposed HRBF Implicits technique, comparisons with this approach are also conducted.
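The core interpolation machinery can be sketched in its simplest scalar form: fitting RBF weights so the interpolant matches prescribed values at constraint locations. This toy uses a Gaussian kernel in 1D and omits the normal and tangent (Hermite) constraints that the paper's generalized approach adds; all locations and values are illustrative.

```python
import math

def gauss_rbf(r, eps=1.0):
    """Gaussian radial basis function of the distance r."""
    return math.exp(-(eps * r) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting, for small dense systems."""
    n = len(A)
    M = [row[:] + [b_i] for row, b_i in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rbf_fit(centers, values):
    """Solve the interpolation system: kernel matrix times weights = values."""
    A = [[gauss_rbf(abs(ci - cj)) for cj in centers] for ci in centers]
    return solve(A, values)

def rbf_eval(x, centers, weights):
    """Evaluate the fitted interpolant at x."""
    return sum(w * gauss_rbf(abs(x - c)) for w, c in zip(weights, centers))

centers = [0.0, 1.0, 2.0]   # constraint locations
values = [0.0, 1.0, 0.0]    # prescribed implicit-function values
w = rbf_fit(centers, values)
print([round(rbf_eval(c, centers, w), 6) for c in centers])
```

In the surface setting the centers are mesh points, the prescribed data include first-order (normal/tangent) constraints, and the zero level set of the fitted function is the reconstructed surface; the Gaussian kernel matrix used here is positive definite, so the system is always solvable.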