Full-text access type
Paid full text | 2877 articles |
Free | 213 articles |
Free (domestic) | 2 articles |
Subject classification
Electrical engineering | 13 articles |
General | 1 article |
Chemical industry | 724 articles |
Metalworking | 53 articles |
Machinery and instruments | 77 articles |
Building science | 92 articles |
Mining engineering | 6 articles |
Energy and power | 117 articles |
Light industry | 535 articles |
Hydraulic engineering | 31 articles |
Petroleum and natural gas | 13 articles |
Radio and electronics | 242 articles |
General industrial technology | 469 articles |
Metallurgy | 196 articles |
Nuclear technology | 17 articles |
Automation | 506 articles |
Publication year
2024 | 8 articles |
2023 | 30 articles |
2022 | 105 articles |
2021 | 153 articles |
2020 | 93 articles |
2019 | 98 articles |
2018 | 128 articles |
2017 | 110 articles |
2016 | 116 articles |
2015 | 94 articles |
2014 | 123 articles |
2013 | 220 articles |
2012 | 198 articles |
2011 | 239 articles |
2010 | 166 articles |
2009 | 141 articles |
2008 | 167 articles |
2007 | 137 articles |
2006 | 77 articles |
2005 | 86 articles |
2004 | 66 articles |
2003 | 55 articles |
2002 | 53 articles |
2001 | 28 articles |
2000 | 30 articles |
1999 | 32 articles |
1998 | 84 articles |
1997 | 50 articles |
1996 | 37 articles |
1995 | 25 articles |
1994 | 23 articles |
1993 | 19 articles |
1992 | 9 articles |
1991 | 7 articles |
1990 | 10 articles |
1989 | 7 articles |
1988 | 11 articles |
1987 | 7 articles |
1986 | 7 articles |
1985 | 6 articles |
1983 | 2 articles |
1982 | 5 articles |
1981 | 2 articles |
1980 | 2 articles |
1979 | 6 articles |
1978 | 5 articles |
1977 | 5 articles |
1976 | 2 articles |
1975 | 2 articles |
1964 | 1 article |
Sort order: 3092 results in total, search time 31 ms
71.
Giacomo Bucci, Laura Carnevali, Lorenzo Ridi, Enrico Vicario 《International Journal on Software Tools for Technology Transfer (STTT)》2010,12(5):391-403
Oris is a tool for the qualitative verification and quantitative evaluation of reactive timed systems, supporting the modeling and analysis of various classes of timed extensions of Petri Nets. Its most characteristic features are symbolic state-space analysis of preemptive Time Petri Nets, which enables schedulability analysis of real-time systems running under priority preemptive scheduling, and stochastic Time Petri Nets, which enable an integrated approach to qualitative verification and quantitative evaluation. In this paper, we present the current version of the tool and illustrate its application to two case studies, one in qualitative verification and one in quantitative evaluation.
72.
Hub-and-spoke networks are widely studied in location theory. They arise in several contexts, including passenger airlines, postal and parcel delivery, and computer and telecommunication networks. Hub location problems usually involve three simultaneous decisions: the optimal number of hub nodes, their locations, and the allocation of the non-hub nodes to the hubs. In the uncapacitated single allocation hub location problem (USAHLP), hub nodes have no capacity constraints and each non-hub node must be assigned to exactly one hub. In this paper, we propose three variants of a simple and efficient multi-start tabu search heuristic, as well as a two-stage integrated tabu search heuristic, to solve this problem. With the multi-start heuristics, several different initial solutions are constructed and then improved by tabu search, while in the two-stage integrated heuristic tabu search is applied to improve both the location and allocation parts of the problem. Computational experiments using typical benchmark problems (the Civil Aeronautics Board (CAB) and Australian Post (AP) data sets), as well as new and modified instances, show that our approaches consistently return the optimal or best-known results in very short CPU times, making it possible to efficiently solve larger instances of the USAHLP than those found in the literature. We also report the integer optimal solutions for all 80 CAB instances and the 12 AP instances of up to 100 nodes, as well as for the corresponding newly generated AP instances with reduced fixed costs.
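As a rough illustration of the multi-start idea, the sketch below builds several random hub sets and improves each with a first-improvement add/drop/swap descent. The descent is a stand-in for the authors' tabu search, and the toy objective (hub fixed costs plus each node's distance to its nearest hub) simplifies the real USAHLP objective, which routes origin-destination flows through hub pairs with an inter-hub discount factor.

```python
import random

def evaluate(hubs, dist, fixed_cost):
    """Toy objective: hub fixed costs plus each node's distance to its
    allocated (nearest) hub. Real USAHLP objectives route O-D flows
    through hub pairs with a discount factor on inter-hub links."""
    if not hubs:
        return float("inf")
    return fixed_cost * len(hubs) + sum(
        min(dist[i][h] for h in hubs) for i in range(len(dist)))

def multi_start_search(dist, fixed_cost, starts=20, seed=0):
    """Multi-start skeleton: several random hub sets, each improved by a
    simple local descent (a tabu search would additionally keep a tabu
    list to escape local optima)."""
    rng = random.Random(seed)
    n = len(dist)
    best_hubs, best_cost = None, float("inf")
    for _ in range(starts):
        hubs = set(rng.sample(range(n), rng.randint(1, n)))
        improved = True
        while improved:
            improved = False
            cost = evaluate(hubs, dist, fixed_cost)
            # Neighborhood: open one hub, close one hub, or swap a pair.
            moves = [hubs | {j} for j in range(n) if j not in hubs]
            moves += [hubs - {h} for h in hubs]
            moves += [(hubs - {h}) | {j} for h in hubs
                      for j in range(n) if j not in hubs]
            for cand in moves:
                c = evaluate(cand, dist, fixed_cost)
                if c < cost:
                    hubs, cost, improved = cand, c, True
                    break
        if cost < best_cost:
            best_hubs, best_cost = set(hubs), cost
    return best_hubs, best_cost
```

On a small symmetric distance matrix with two natural clusters, the search recovers one hub per cluster; the two-stage variant in the paper would instead alternate tabu search between the location and allocation decisions.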
73.
Carina F. Dorneles, Marcos Freitas Nunes, Carlos A. Heuser, Viviane P. Moreira, Altigran S. da Silva, Edleno S. de Moura 《Information Systems》2009,34(8):673
Approximate data matching aims at assessing whether two distinct data instances represent the same real-world object. The comparison between data values is usually done by applying a similarity function that returns a similarity score. If this score surpasses a given threshold, both data instances are considered to represent the same real-world object. These score values depend on the algorithm that implements the function and have no meaning to the user. In addition, score values generated by different functions are not comparable, which potentially leads to problems when the scores returned by different similarity functions need to be combined to compute the similarity between records. In this article, we propose that thresholds should be defined in terms of the precision that is expected from the matching process, rather than in terms of the raw scores returned by the similarity function. Precision is a widely known metric with a clear interpretation from the user's point of view. Our approach defines mappings from score values to precision values, which we call adjusted scores. To obtain such mappings, our approach requires training over a small dataset. Experiments show that training can be reused for different datasets in the same domain. Our results also demonstrate that existing methods for combining scores to compute the similarity between records can be enhanced if adjusted scores are used.
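The core idea, replacing raw similarity thresholds with precision-based "adjusted scores", can be sketched as follows: over a small labeled training set, record the precision obtained when each observed score is used as the acceptance threshold. The function names and the nearest-score lookup below are our simplifications, not the article's exact procedure.

```python
def precision_mapping(scores, labels):
    """For each raw score in a labeled training set, compute the precision
    achieved when that score is used as the acceptance threshold."""
    ranked = sorted(zip(scores, labels), key=lambda pair: -pair[0])
    mapping, true_pos, accepted = {}, 0, 0
    for score, is_match in ranked:
        accepted += 1
        true_pos += 1 if is_match else 0
        mapping[score] = true_pos / accepted  # precision at this threshold
    return mapping

def adjusted_score(raw, mapping):
    """Adjusted score of a new raw score: the precision recorded for the
    nearest score seen during training (a crude interpolation)."""
    nearest = min(mapping, key=lambda s: abs(s - raw))
    return mapping[nearest]
```

Because adjusted scores live on a common precision scale, scores from different similarity functions become directly comparable and can be combined meaningfully.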
74.
The role of spectral resolution and classifier complexity in the analysis of hyperspectral images of forest areas (total citations: 1; self-citations: 0; by others: 1)
Michele Dalponte, Lorenzo Bruzzone, Loris Vescovo 《Remote sensing of environment》2009,113(11):2345-2355
Remote sensing hyperspectral sensors are important and powerful instruments for addressing classification problems in complex forest scenarios, as they allow a detailed characterization of the spectral behavior of the considered information classes. However, the processing of hyperspectral data is particularly complex, both from a theoretical viewpoint [e.g., problems related to the Hughes phenomenon (Hughes, 1968)] and from a computational perspective. Despite many previous investigations on feature reduction and feature extraction in hyperspectral data, only a few studies have analyzed the role of spectral resolution on classification accuracy in different application domains. In this paper, we present an empirical study aimed at understanding the relationship among spectral resolution, classifier complexity, and classification accuracy obtained with hyperspectral sensors for the classification of forest areas. We considered two test sets of images acquired by an AISA Eagle sensor over 126 bands with a spectral resolution of 4.6 nm, and we subsequently degraded the spectral resolution to 9.2, 13.8, 18.4, 23, 27.6, 32.2 and 36.8 nm. A series of classification experiments was carried out with bands at each of the degraded spectral resolutions, and with bands selected by a feature selection algorithm at the highest spectral resolution (4.6 nm). The experiments used three different classifiers: Support Vector Machine, Gaussian Maximum Likelihood with Leave-One-Out-Covariance estimator, and Linear Discriminant Analysis. From the experimental results, important conclusions can be drawn about the choice of the spectral resolution of hyperspectral sensors as applied to forest areas, also in relation to the complexity of the adopted classification methodology. The outcomes of these experiments also point the user towards a more efficient use of current instruments (e.g., programming of the spectral channels to be acquired) and classification techniques in forest applications, as well as towards the design of future hyperspectral sensors.
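A minimal way to reproduce the degradation step is to average groups of adjacent bands: with the AISA Eagle's 126 bands at 4.6 nm, factors 2 through 8 yield the 9.2 to 36.8 nm resolutions studied. The simple block averaging below is our stand-in; a sensor-accurate simulation would weight bands by the coarser channels' spectral response functions.

```python
def degrade_spectrum(spectrum, factor):
    """Average each group of `factor` adjacent bands to simulate a coarser
    spectral resolution; leftover bands at the end are dropped."""
    groups = len(spectrum) // factor
    return [sum(spectrum[i * factor:(i + 1) * factor]) / factor
            for i in range(groups)]

# 126 bands at 4.6 nm -> 63 bands at 9.2 nm (factor 2),
# down to 15 bands at ~36.8 nm (factor 8).
fine = list(range(126))
```

Applying this per pixel turns a 126-band cube into the progressively coarser cubes on which the three classifiers were compared.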
75.
Dynamic SLA management in service-oriented environments (total citations: 1; self-citations: 0; by others: 1)
Giuseppe Di Modica, Lorenzo Vita 《Journal of Systems and Software》2009,82(5):759-771
The increasing adoption of service-oriented architectures across different administrative domains forces service providers to use effective mechanisms and strategies of resource management so that they can guarantee the quality levels their customers demand during service provisioning. Service level agreements (SLAs) are the most common mechanism used to establish agreements on the quality of a service (QoS) between a service provider and a service consumer. The WS-Agreement specification, developed by the Open Grid Forum, is a Web Service protocol for establishing agreements on the QoS level to be guaranteed in the provision of a service. A committed agreement cannot be modified during service provision and remains effective until all activities pertaining to it are finished or until one of the signing parties decides to terminate it. In B2B scenarios where several service providers are involved in the composition of a service, and each of them plays both the provider and customer roles, several one-to-one SLAs need to be signed. In such a rigid context, the global QoS of the final service can be strongly affected by a violation of any single SLA. In order to prevent such violations, SLAs need to adapt to needs that might arise during service provision. In this work we focus on the WS-Agreement specification and propose to enhance the flexibility of its approach. We integrate new functionality into the protocol that enables the parties of a WS-Agreement to re-negotiate and modify its terms during service provision, and we show how a typical service composition scenario can benefit from our proposal.
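The renegotiation behaviour added to WS-Agreement can be pictured with a toy handshake: either party may propose new terms during provision, and the change takes effect only when the counterpart accepts. This is a sketch of the behaviour only, with class and method names of our own invention, not the actual WS-Agreement protocol messages.

```python
class SLA:
    """Toy SLA supporting renegotiation of its terms during provision."""

    def __init__(self, terms):
        self.terms = dict(terms)
        self.pending = None      # (proposing party, proposed terms) or None
        self.active = True

    def propose(self, party, new_terms):
        """Either signing party may propose modified terms at any time."""
        if not self.active:
            raise RuntimeError("agreement already terminated")
        self.pending = (party, dict(new_terms))

    def respond(self, party, accept):
        """The counterpart accepts or rejects; terms change only on accept."""
        if self.pending is None or self.pending[0] == party:
            raise RuntimeError("no counterpart proposal to answer")
        if accept:
            self.terms.update(self.pending[1])
        self.pending = None

    def terminate(self, party):
        """Either signing party may terminate the agreement."""
        self.active = False
```

In a service composition, each provider in a chain of one-to-one SLAs could propagate such proposals upstream instead of letting a single violation degrade the global QoS.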
76.
We propose FMJ (Featherweight Multi Java), an extension of Featherweight Java with encapsulated multi-methods, thus providing dynamic overloading. Multi-methods (collections of overloaded methods associated with the same message, whose selection takes place dynamically instead of statically as in standard overloading) are a useful and flexible mechanism that enhances reusability and separation of responsibilities. However, many mainstream languages, such as Java, do not provide it, resorting to static overloading only. The proposed extension is conservative and type safe: both "message-not-understood" and "message-ambiguous" errors are statically ruled out. Possible ambiguities are checked during type checking only on method invocation expressions, without requiring inspection of all the classes of a program. A static annotation with type information guarantees that in a well-typed program no ambiguity can arise at run time. This annotation mechanism also permits modeling static overloading in a smooth way. Our core language can be used as the formal basis for an actual implementation of dynamic (and static) overloading in Java-like languages.
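Python offers a limited cousin of multi-methods in `functools.singledispatch`, which selects an implementation on the run-time type of its first argument (FMJ's multi-methods dispatch on all arguments). The class names here are purely illustrative:

```python
from functools import singledispatch

class Shape: pass
class Circle(Shape): pass
class Square(Shape): pass

@singledispatch
def describe(s):
    # Fallback, analogous to the most general overloaded method.
    return "shape"

@describe.register
def _(s: Circle):
    return "circle"

@describe.register
def _(s: Square):
    return "square"

# With Java's static overloading, a variable of declared type Shape would
# always select the general version; here selection uses the run-time type.
picked = [describe(s) for s in [Circle(), Square(), Shape()]]
```

This is exactly the behavioural difference FMJ formalizes: the overload chosen depends on dynamic types, while the type system statically rules out missing or ambiguous cases.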
77.
Several Grids have been established and used for various science applications in recent years. Most of these Grids, however, work in isolation and at different utilisation levels. Previous work has introduced an architecture and a mechanism to enable resource sharing amongst Grids, and demonstrated that a Grid can benefit from offloading requests to, or providing spare resources to, another Grid. In this work, we address the problem of resource provisioning for Grid applications in multiple-Grid environments. Provisioning is carried out based on availability information obtained from queueing-based resource management systems deployed at the provider sites that participate in the Grids. We evaluate the performance of different allocation policies. In contrast to existing work on load sharing across Grids, the policies described here take into account the local load of resource providers, imprecise availability information, and the compensation of providers for the resources offered to the Grid. In addition, we evaluate these policies along with a mechanism that allows resource sharing amongst Grids. Experimental results obtained through simulation show that the mechanism and policies are effective in redirecting requests, thus improving the applications' average weighted response time.
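A toy version of a load- and imprecision-aware redirection policy: each provider advertises free capacity together with a confidence factor modelling how imprecise its availability information is, and the request is redirected to the provider with the most expected spare capacity. The structure and weighting are our own, not the paper's exact policies, which additionally account for provider compensation.

```python
def redirect(request_size, providers):
    """Pick the provider most likely to host the request.

    `providers` maps a provider name to (advertised_free, confidence),
    where confidence in (0, 1] discounts possibly stale availability
    information. Returns None if no provider is expected to fit it."""
    best, best_score = None, None
    for name, (advertised_free, confidence) in providers.items():
        expected_free = advertised_free * confidence
        if expected_free < request_size:
            continue  # likely cannot host the request
        # Prefer providers with the most expected spare capacity left.
        score = expected_free - request_size
        if best_score is None or score > best_score:
            best, best_score = name, score
    return best
```

A Grid-level broker applying such a rule per incoming request is one simple instance of the redirection mechanisms evaluated in the simulations.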
78.
Marcos Sandim, Douglas Cedrim, Luis Gustavo Nonato, Paulo Pagliosa, Afonso Paiva 《Computer Graphics Forum》2016,35(2):215-224
This paper presents a novel method to detect free surfaces in particle-based volume representations. In contrast to most particle-based free-surface detection methods, which identify the surface from physical and geometrical properties derived from the underlying fluid flow simulation, the proposed approach demands only the spatial locations of the particles to recognize surface particles, avoiding even the use of kernels. Boundary particles are identified through a Hidden Point Removal (HPR) operator used as a visibility test. Our method is simple, fast, easy to implement, and robust to changes in the distribution of particles, even when facing large deformations of the free surface. A set of comparisons against state-of-the-art boundary detection methods shows the effectiveness of our approach. The good performance of our method is also attested in the context of free-surface fluid flow simulation, particularly when using level sets for rendering purposes.
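The HPR operator (due to Katz et al.) indeed needs only point positions: spherically flip the cloud about the viewpoint, take the convex hull of the flipped points plus the viewpoint, and the hull members are the visible points. A 2-D sketch with a stdlib monotone-chain hull follows (the 3-D case used for particle fluids is identical but with a 3-D hull; the radius scale is a free parameter):

```python
import math

def convex_hull(pts):
    """Andrew's monotone chain, 2-D; returns strict hull vertices."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def hpr_visible(points, viewpoint, radius_scale=100.0):
    """Indices of points visible from `viewpoint` via Hidden Point Removal:
    flip each point p (relative to the viewpoint) to distance 2r - |p|,
    then keep the points whose flipped images lie on the convex hull."""
    shifted = [(x - viewpoint[0], y - viewpoint[1]) for x, y in points]
    r = radius_scale * max(math.hypot(x, y) for x, y in shifted)
    flipped = []
    for x, y in shifted:
        d = math.hypot(x, y)
        if d == 0:
            flipped.append((0.0, 0.0))
            continue
        f = 1 + 2 * (r - d) / d   # scale so the image sits at 2r - d
        flipped.append((x * f, y * f))
    hull = set(convex_hull(flipped + [(0.0, 0.0)]))
    return [i for i, p in enumerate(flipped) if p in hull]
```

In the test below, the point at (0, 5) lies directly behind the point at the origin as seen from (0, -10), and HPR correctly reports it as hidden.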
79.
Alejandro Rago, Claudia Marcos, J. Andres Diaz-Pace 《Automated Software Engineering》2016,23(2):219-252
Textual requirements are very common in software projects. However, this format often hides relevant concerns (e.g., performance, synchronization, data access) from the analyst's view because their semantics are implicit in the text. Thus, analysts must carefully review requirements documents in order to identify key concerns and their effects. Concern mining tools based on NLP techniques can help in this activity. Nonetheless, existing tools cannot always detect all the crosscutting effects of a given concern on different requirements sections, as this detection requires a semantic analysis of the text. In this work, we describe an automated tool called REAssistant that supports the extraction of semantic information from textual use cases in order to reveal latent crosscutting concerns. To enable the analysis of use cases, we apply a tandem of advanced NLP techniques (e.g., dependency parsing, semantic role labeling, and domain actions) built on the UIMA framework, which generates different annotations for the use cases. REAssistant then allows analysts to query these annotations via concern-specific rules in order to identify all the effects of a given concern. The tool has been evaluated with several case studies, showing good results when compared to a manual identification of concerns and a third-party tool. In particular, it achieved remarkable recall in detecting crosscutting concern effects.
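To make the idea of concern-specific rules concrete, here is a deliberately crude keyword-based tagger over use case steps. The rule set and names are ours; REAssistant's actual rules query richer annotations (dependency parses, semantic roles, domain actions) rather than raw text.

```python
import re

# Toy concern rules: each concern is a pattern over the step text.
CONCERN_RULES = {
    "performance": re.compile(
        r"\b(within \d+ (ms|seconds?)|response time|fast)\b", re.I),
    "persistence": re.compile(
        r"\b(saves?|stores?|database|records?)\b", re.I),
    "security": re.compile(
        r"\b(logs? in|password|authenticat\w*)\b", re.I),
}

def tag_concerns(use_case_steps):
    """Return, for each concern, the indices of the steps it cross-cuts."""
    hits = {concern: [] for concern in CONCERN_RULES}
    for i, step in enumerate(use_case_steps):
        for concern, pattern in CONCERN_RULES.items():
            if pattern.search(step):
                hits[concern].append(i)
    return {c: idx for c, idx in hits.items() if idx}
```

Even this shallow matcher shows why crosscutting detection matters: a single concern such as performance typically leaves traces scattered across many unrelated steps.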
80.
In this paper we present a new thermographic image database suitable for the analysis of automatic focusing measures. The database contains images of 10 scenes, each captured at 96 different focus positions. Using this database, we evaluate the usefulness of five focus measures with the goal of determining the optimal focus position. Experimental results reveal that accurate automatic detection of the optimal focus position can be achieved with a low computational burden. We also present an acquisition tool for obtaining thermal images. To the best of our knowledge, this is the first study on the automatic focusing of thermal images.
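For intuition, one classic sharpness measure, the variance of a 4-neighbour Laplacian, peaks at the best-focused frame of a focus stack. The paper evaluates five measures on thermal images; this particular measure and the synthetic stack below are our illustration only.

```python
def focus_measure(img):
    """Variance of the 4-neighbour Laplacian over interior pixels: sharp
    edges yield large Laplacian responses, so in-focus frames score high."""
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            responses.append(img[y-1][x] + img[y+1][x] + img[y][x-1]
                             + img[y][x+1] - 4 * img[y][x])
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

def best_focus_position(stack):
    """Index of the frame with the highest sharpness score."""
    return max(range(len(stack)), key=lambda i: focus_measure(stack[i]))
```

Sweeping this score over the 96 focus positions of one scene and taking the argmax is the basic autofocus loop such measures enable.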