Full-text access
Paid full text | 1945 articles |
Free | 182 articles |
Free (domestic) | 1 article |
Subject classification
Electrical engineering | 14 articles |
General | 1 article |
Chemical industry | 426 articles |
Metalworking | 21 articles |
Machinery & instruments | 32 articles |
Building science | 70 articles |
Mining engineering | 2 articles |
Energy & power engineering | 87 articles |
Light industry | 345 articles |
Hydraulic engineering | 15 articles |
Petroleum & natural gas | 8 articles |
Radio & electronics | 171 articles |
General industrial technology | 373 articles |
Metallurgy | 67 articles |
Nuclear technology | 10 articles |
Automation | 486 articles |
Publication year
2024 | 6 articles |
2023 | 24 articles |
2022 | 75 articles |
2021 | 102 articles |
2020 | 56 articles |
2019 | 92 articles |
2018 | 112 articles |
2017 | 85 articles |
2016 | 95 articles |
2015 | 72 articles |
2014 | 117 articles |
2013 | 159 articles |
2012 | 149 articles |
2011 | 184 articles |
2010 | 119 articles |
2009 | 98 articles |
2008 | 99 articles |
2007 | 85 articles |
2006 | 82 articles |
2005 | 55 articles |
2004 | 56 articles |
2003 | 39 articles |
2002 | 32 articles |
2001 | 13 articles |
2000 | 13 articles |
1999 | 16 articles |
1998 | 21 articles |
1997 | 17 articles |
1996 | 6 articles |
1995 | 10 articles |
1994 | 11 articles |
1993 | 5 articles |
1992 | 2 articles |
1991 | 2 articles |
1990 | 1 article |
1989 | 2 articles |
1988 | 1 article |
1987 | 2 articles |
1985 | 1 article |
1984 | 1 article |
1983 | 1 article |
1982 | 1 article |
1981 | 2 articles |
1980 | 1 article |
1979 | 1 article |
1975 | 1 article |
1974 | 1 article |
1971 | 1 article |
1970 | 1 article |
1969 | 1 article |
2,128 results found.
51.
Pablo Irarrazaval, Carlos Lizama, Vicente Parot, Carlos Sing-Long, Cristian Tejos. Computers & Mathematics with Applications, 2011, 62(3): 1576-1590
The fractional Fourier transform (FrFT) is revisited in the framework of strongly continuous periodic semigroups, both to restate known results and to explore new properties of the FrFT. We then show how the FrFT can be used to reconstruct magnetic resonance (MR) images acquired in the presence of quadratic field inhomogeneity. In particular, we prove that the order of the FrFT is a measure of the distortion in the reconstructed signal. Moreover, we give a dynamic interpretation of the order as the time evolution of a function. We also introduce the notion of ρ-α space as an extension of the Fourier space (k-space) in MR, and we use it to study the distortions introduced by two common MR acquisition strategies. We formulate the reconstruction problem in the context of the FrFT and show how semigroup theory allows us to find new reconstruction formulas for discretely sampled signals. Finally, the results are supplemented with numerical examples that show how the method performs in a standard 1D MR signal reconstruction.
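The semigroup view described above hinges on order additivity: applying FrFTs of orders α and β in sequence is equivalent to a single FrFT of order α+β. This can be sketched with a generic discrete FrFT built as a fractional power of the unitary DFT matrix — a minimal illustration of the general idea via eigendecomposition, not the paper's semigroup-based reconstruction formulas:

```python
import numpy as np

def dfrft(x, alpha):
    """Discrete fractional Fourier transform of order alpha, defined as
    a fractional matrix power of the unitary DFT matrix: alpha = 0
    gives the identity and alpha = 1 the ordinary (unitary) DFT."""
    n = len(x)
    F = np.fft.fft(np.eye(n), axis=0) / np.sqrt(n)   # unitary DFT matrix
    w, V = np.linalg.eig(F)                          # eigenvalues lie on the unit circle
    Fa = V @ np.diag(w ** alpha) @ np.linalg.inv(V)  # fractional power F^alpha
    return Fa @ np.asarray(x, dtype=complex)
```

Because all fractional powers share one eigenbasis, `dfrft(dfrft(x, 0.3), 0.7)` matches `dfrft(x, 1.0)` up to numerical error, mirroring the additivity of the FrFT order.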
52.
Pablo Cascón, Andrés Ortiz, Julio Ortega, Antonio F. Díaz, Ignacio Rojas. The Journal of Supercomputing, 2011, 58(3): 302-313
Hosts with several, possibly heterogeneous and/or multicore, processors present new challenges and opportunities for accelerating applications with high communication bandwidth requirements. Many opportunities to scale these network applications as link bandwidths increase lie in exploiting the parallelism provided by the multiple processing cores in the servers, not only for computing the workload of the user application but also for reducing the overhead associated with the network interface and the system software.
53.
Ney Sodré, Pablo Guillermo Gonzales-Ormeño, Helena Maria Petrilli, Cláudio Geraldo Schön. Calphad, 2009, 33(3): 576-583
The metastable phase diagram of the BCC-based ordering equilibria in the Fe–Al–Mo system has been calculated via a truncated cluster expansion, combining full-potential linearized augmented plane wave (FP-LAPW) electronic structure calculations with cluster variation method (CVM) thermodynamic calculations in the irregular tetrahedron approximation. Four isothermal sections at 1750 K, 2000 K, 2250 K and 2500 K are calculated and correlated with recently published experimental data on the system. The results confirm that the critical temperature for the order–disorder equilibrium between Fe3Al–D03 and FeAl–B2 is increased by Mo additions, while the critical temperature for the FeAl–B2/A2 equilibrium remains approximately invariant with increasing Mo content. The stabilization of the Al-rich A2 phase in equilibrium with overstoichiometric B2–(Fe,Mo)Al is also consistent with the attribution of the A2 structure to the τ2 phase, stable at high temperatures in overstoichiometric B2–FeAl.
54.
BAIS: A Bayesian Artificial Immune System for the effective handling of building blocks
Significant progress has been made in the theory and design of Artificial Immune Systems (AISs) for solving hard problems accurately. However, an aspect not yet widely addressed in the literature is the inability of AISs to deal effectively with building blocks (partial high-quality solutions coded in the antibody). The available AISs present mechanisms for evolving the population that do not take into account the relationships among the variables of the problem, potentially causing the disruption of high-quality partial solutions. This paper proposes a novel AIS able to identify and properly manipulate building blocks in optimization problems. Instead of using cloning and mutation to generate new individuals, our algorithm builds a probabilistic model representing the joint probability distribution of the promising solutions and, subsequently, uses this model to sample new solutions. The probabilistic model used is a Bayesian network, due to its capability of properly capturing the most relevant interactions among the variables. Therefore, our algorithm, called Bayesian Artificial Immune System (BAIS), represents a significant attempt to improve the performance of immune-inspired algorithms when dealing with building blocks, and hence to efficiently solve hard optimization problems with complex interactions among the variables. The performance of BAIS compares favorably with that of contenders such as state-of-the-art Estimation of Distribution Algorithms.
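The model-then-sample loop described above can be sketched as a toy estimation-of-distribution algorithm on the OneMax problem. Note the hedge: BAIS learns a full Bayesian network over the variables, whereas this sketch uses independent bit marginals (a UMDA-style model), so it illustrates the loop structure but not the linkage learning:

```python
import numpy as np

def umda_onemax(n_bits=20, pop=60, elite=20, gens=40, seed=0):
    """Toy estimation-of-distribution loop in the spirit of BAIS:
    model the promising solutions probabilistically, then sample the
    next population from that model instead of cloning and mutating."""
    rng = np.random.default_rng(seed)
    P = rng.random((pop, n_bits)) < 0.5            # random initial population
    for _ in range(gens):
        fit = P.sum(axis=1)                        # OneMax fitness: count of ones
        best = P[np.argsort(fit)[-elite:]]         # promising solutions
        p = best.mean(axis=0).clip(0.05, 0.95)     # per-bit marginal model
        P = rng.random((pop, n_bits)) < p          # sample new population
    return P[np.argmax(P.sum(axis=1))]             # best individual found
```

Replacing the independent marginals with a learned Bayesian network is exactly where BAIS gains its ability to preserve building blocks.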
55.
Javier Plaza, Antonio Plaza, Pablo Martinez. Pattern Recognition, 2009, 42(11): 3032-3809
In this work, neural network-based models for hyperspectral image spectra separation are considered. The focus is on how to select the most informative samples for effectively training the neural architecture. This issue is addressed by several new algorithms for intelligent selection of training samples: (1) a border-training algorithm (BTA), which selects training samples located in the vicinity of the hyperplanes that optimally separate the classes; (2) a mixed-signature algorithm (MSA), which selects the most spectrally mixed pixels in the hyperspectral data as training samples; and (3) a morphological-erosion algorithm (MEA), which incorporates spatial information (via mathematical morphology) to select spectrally mixed training samples located in spatially homogeneous regions. These algorithms, along with other standard techniques based on orthogonal projections and a simple Maximin-distance algorithm, are used to train a multi-layer perceptron (MLP), selected here as a representative neural architecture for spectral mixture analysis. Experimental results are provided using both a database of nonlinear mixed spectra with absolute ground truth and a set of real hyperspectral images, collected at different altitudes by the digital airborne imaging spectrometer (DAIS 7915) and the reflective optics system imaging spectrometer (ROSIS) operating simultaneously at multiple spatial resolutions.
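Of the baselines mentioned above, the Maximin-distance algorithm is the easiest to sketch: greedily pick the sample farthest (in minimum distance) from the already-selected set. This is a generic sketch of that classic selection rule, not tied to the paper's hyperspectral feature space:

```python
def maximin_select(points, k, dist):
    """Greedy Maximin-distance selection: starting from a seed point,
    repeatedly add the point whose minimum distance to the selected
    set is largest, yielding a well-spread subset of k samples."""
    sel = [0]  # seed with the first point
    while len(sel) < k:
        best = max((i for i in range(len(points)) if i not in sel),
                   key=lambda i: min(dist(points[i], points[j]) for j in sel))
        sel.append(best)
    return sel
```

On points 0..10 on a line, seeding with 0, the rule first picks 10 (farthest from 0) and then 5 (farthest from both ends), which shows the spreading behavior that makes it useful for picking diverse training samples.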
56.
Pablo Sendín-Raña, Francisco J. González-Castaño, Enrique Pérez-Barros, Pedro S. Rodríguez-Hernández, Felipe Gil-Castiñeira, José M. Pousada-Carballo. Software, 2009, 39(3): 279-298
For a long time, the design of relational databases has focused on the optimization of atomic transactions (insert, select, update or delete). Currently, relational databases store tactical information of data warehouses, mainly for select-like operations. However, the database paradigm has evolved, and nowadays on-line analytical processing (OLAP) systems handle strategic information for further analysis. These systems enable fast, interactive and consistent information analysis of data warehouses, including shared calculations and allocations. OLAP and data warehouses jointly allow multidimensional data views, turning raw data into knowledge. OLAP allows 'slice and dice' navigation and a top-down perspective of data hierarchies. In this paper, we describe our experience in the migration from a large relational database management system to an OLAP system on top of a relational layer (the data warehouse), and the resulting contributions in open-source ROLAP optimization. Existing open-source ROLAP technologies rely on summarized tables with materialized aggregate views to improve system performance (in terms of response time). The design and maintenance of those tables are cumbersome. Instead, we intensively exploit cache memory, where key data reside, yielding low response times. A cold start process brings summarized data from the relational database to cache memory, subsequently reducing the response time. We ensure concurrent access to the summarized data, as well as consistency when the relational database updates data. We also improve the OLAP functionality by providing new features for automating the creation of calculated members. This makes it possible to define new measures on the fly using virtual dimensions, without re-designing the multidimensional cube. We have chosen the XML/A de facto standard for service provision. Copyright © 2008 John Wiley & Sons, Ltd.
57.
Integrating different information sources is a growing research area across application domains. This is particularly true in the geographic information domain, which faces new challenges as newer and better technologies capture large amounts of information about the Earth. This trend, combined with the increasing distribution of GIS (Geographic Information Systems) on the Web, is leading to the proliferation of different geospatial information repositories and the subsequent need to integrate information across repositories to obtain consistent information. To address this situation, many proposals use ontologies in the integration process. In this paper we analyze and compare the most widely cited proposals for geographic information integration, focusing on those that use ontologies as semantic tools to represent the sources and to facilitate the integration process.
58.
Jorge Navarro-Ortiz, Pablo Ameigeiras, Juan J. Ramos-Munoz, Juan M. Lopez-Soler. Computer Communications, 2009, 32(11): 1281-1297
In this paper we present a solution for the IEEE 802.11e HCCA (Hybrid coordination function Controlled Channel Access) mechanism which aims both at supporting strict real-time traffic requirements and, simultaneously, at handling TCP applications efficiently. Our proposal combines a packet scheduler and a dynamic resource allocation algorithm. The scheduling discipline is based on the Monolithic Shaper-Scheduler (MSS), a modification of a General Processor Sharing (GPS) related scheduler. It supports minimum-bandwidth and delay guarantees and, at the same time, achieves the optimal latency of all the GPS-related schedulers. In addition, our innovative resource allocation procedure, called the territory method, aims at prioritizing real-time services and at improving the performance of TCP applications. For this purpose, it splits the wireless channel capacity (in terms of transmission opportunities) into different territories for the different types of traffic, taking into account the end-to-end network dynamics. In order to support the desired applications, we consider the following traffic classes: conversational, streaming, interactive and best-effort. The so-called territories shrink or expand depending on the current quality experienced by the corresponding traffic class. We evaluated the performance of our solution through extensive simulations in a heterogeneous wired-cum-wireless scenario under different traffic conditions. Additionally, we compare our proposal with two other HCCA scheduling algorithms: the HCCA reference scheduler and the Fair Hybrid Coordination Function (FHCF). The results show that the combination of the MSS and the territory method obtains higher system capacity for VoIP traffic (up to 32 users) in the simulated scenario, compared to FHCF and the HCCA reference scheduler (13 users). In addition, the MSS with the territory method also improves the throughput of TCP sources (one FTP application achieves between 6.1 Mbps without VoIP traffic and 2.1 Mbps with 20 VoIP users) compared to the reference scheduler (at most 388 kbps) and FHCF (with a maximum FTP throughput of 4.8 Mbps).
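The GPS-related finish-time idea underlying schedulers like the one above can be sketched for the simplified case where every flow is backlogged from time zero: each packet is stamped with a virtual finish time (the flow's previous finish time plus length/weight) and packets are served in finish-time order. This is generic weighted fair queueing under that backlogged assumption, not the Monolithic Shaper-Scheduler itself:

```python
def wfq_order(packets, weights):
    """Order packets by weighted-fair-queueing virtual finish times,
    assuming all flows are backlogged at time zero. packets is a list
    of (flow, length); weights maps each flow to its GPS weight."""
    finish = {}  # last virtual finish time per flow
    stamped = []
    for i, (flow, length) in enumerate(packets):
        f = finish.get(flow, 0.0) + length / weights[flow]
        finish[flow] = f
        stamped.append((f, i, flow))  # arrival index i breaks ties
    stamped.sort()
    return [flow for _, _, flow in stamped]
```

With weights {'a': 2, 'b': 1} and equal-length packets, flow a is served roughly twice per service of flow b, which is the bandwidth-sharing behavior GPS-based schedulers approximate.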
59.
The quality of shadow mapping is traditionally limited by texture resolution. We present a novel lossless compression scheme for high-resolution shadow maps based on precomputed multiresolution hierarchies. Traditional multiresolution trees can compactly represent homogeneous regions of shadow maps at coarser levels, but require many nodes for fine details. By conservatively adapting the depth map, we can significantly reduce the tree complexity. Our proposed method offers high compression rates, avoids quantization errors, exploits coherency along all data dimensions, and is well-suited for GPU architectures. Our approach can be applied to coherent shadow maps as well, enabling several applications, including high-quality soft shadows and dynamic lights moving on fixed trajectories.
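The core multiresolution idea — coarse nodes for homogeneous depth regions, subdivision only where detail remains — can be sketched with a tiny quadtree builder. This shows only the basic hierarchy with an eps-tolerant homogeneity test; the paper's conservative depth-map adaptation and GPU-friendly layout are not modeled here:

```python
def compress(depth, x0, y0, size, eps=0.0):
    """Collapse a size-by-size square of a depth map into a single leaf
    when all its values agree within eps; otherwise recurse into the
    four quadrants. depth is a 2D list; size must be a power of two."""
    vals = [depth[y][x]
            for y in range(y0, y0 + size)
            for x in range(x0, x0 + size)]
    if size == 1 or max(vals) - min(vals) <= eps:
        return ('leaf', vals[0])
    h = size // 2
    return ('node', [compress(depth, x0 + dx, y0 + dy, h, eps)
                     for dy in (0, h) for dx in (0, h)])
```

A fully homogeneous map compresses to a single leaf, while a single differing texel forces subdivision only along the path that contains it — the source of the compression rates the abstract refers to.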
60.
Amador Durán, David Benavides, Sergio Segura, Pablo Trinidad, Antonio Ruiz-Cortés. Software and Systems Modeling, 2017, 16(4): 1049-1082
In a literature review on the last 20 years of automated analysis of feature models, the formalization of analysis operations was identified as the most relevant challenge in the field. This formalization could provide very valuable assets for tool developers, such as a precise definition of the analysis operations and, what is more, a reference implementation, i.e., a trustworthy, not necessarily efficient implementation against which to compare the outputs of different tools. In this article, we present the FLAME framework as the result of facing this challenge. FLAME is a formal framework that can be used to formally specify not only feature models, but other variability modeling languages (VMLs) as well. This reusability is achieved by its two-layered architecture. The abstract foundation layer is the bottom layer, in which all VML-independent analysis operations and concepts are specified. On top of the foundation layer, a family of characteristic model layers — one for each VML to be formally specified — can be developed by redefining some abstract types and relations. The verification and validation of FLAME followed a process in which formal verification was performed traditionally, by manual theorem proving, while validation integrated our experience in metamorphic testing of variability analysis tools, which has proven much more effective than manually designed test cases. To follow this automated, test-based validation approach, the specification of FLAME, written in Z, was translated into Prolog and 20,000 random tests were automatically generated and executed. Test results helped to discover inconsistencies not only in the formal specification, but also in the previous informal definitions of the analysis operations and in current analysis tools. After this process, the Prolog implementation of FLAME is being used as a reference implementation by some tool developers, some analysis operations have been formally specified for the first time with more generic semantics, and more VMLs are being formally specified using FLAME.
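The metamorphic-testing idea used to validate FLAME can be illustrated with a brute-force product counter over a hypothetical mini-encoding of feature models (feature names plus constraint predicates — not FLAME's Z specification or its Prolog translation). A typical metamorphic relation for this domain: adding an optional feature to a model doubles its number of products, whatever the model, so any analyzer violating that relation on a random model is buggy:

```python
from itertools import product

def count_products(features, constraints):
    """Count the products of a toy feature model by brute force: a
    product is a True/False assignment to the features that satisfies
    every constraint predicate. Exponential, but trustworthy - the
    'reference implementation' role described in the abstract."""
    total = 0
    for bits in product([False, True], repeat=len(features)):
        cfg = dict(zip(features, bits))
        if all(c(cfg) for c in constraints):
            total += 1
    return total
```

Checking the relation on a one-feature model: the base model {root mandatory} has 1 product; adding an optional feature `opt` (constrained only by `opt` implies `root`) yields exactly 2, i.e., double the base count.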