991.
In this paper we propose a new circularity measure which quantifies the degree to which a shape differs from a perfect circle. The new measure is easy to compute and, being area based, is robust with respect to, e.g., noise or narrow intrusions. It also satisfies the following desirable properties:
- it ranges over (0,1] and equals 1 if and only if the measured shape is a circle;
- it is invariant with respect to translation, rotation, and scaling.
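The abstract does not reproduce the measure's definition, but one well-known area-based circularity with exactly the listed properties is built from second-order central moments. The sketch below illustrates that moment-based family (an assumption for illustration, not necessarily the paper's exact formula):

```python
import numpy as np

def circularity(mask: np.ndarray) -> float:
    """Moment-based circularity of a binary shape mask.

    Computes mu00^2 / (2*pi*(mu20 + mu02)) from central moments, which
    lies in (0, 1], equals 1 only for a perfect disc, and is invariant
    to translation, rotation, and scaling.
    """
    ys, xs = np.nonzero(mask)           # pixel coordinates of the shape
    mu00 = xs.size                      # area (zeroth-order moment)
    cx, cy = xs.mean(), ys.mean()       # centroid
    mu20 = ((xs - cx) ** 2).sum()       # second-order central moments
    mu02 = ((ys - cy) ** 2).sum()
    return mu00 ** 2 / (2.0 * np.pi * (mu20 + mu02))

# A filled disc should score close to 1; elongated shapes score lower.
yy, xx = np.mgrid[:200, :200]
disc = (xx - 100) ** 2 + (yy - 100) ** 2 <= 60 ** 2
print(round(circularity(disc), 3))      # ~1.0
```

For a disc of radius r, the identity mu00^2 = 2*pi*(mu20 + mu02) holds exactly (both sides equal pi^2 * r^4), so the score is 1; elongation or intrusions inflate the second-order moments and drive the score down.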
992.
Dimitra Giannakopoulou, David H. Bushnell, Johann Schumann, Heinz Erzberger, Karen Heere 《Annals of Mathematics and Artificial Intelligence》2011,63(1):5-30
In order to address the rapidly increasing load of air traffic operations, innovative algorithms and software systems must be developed for the next generation of air traffic control. Extensive verification of such novel algorithms is key for their adoption by industry. Separation assurance algorithms aim at predicting whether two aircraft will get closer to each other than a minimum safe distance; if loss of separation is predicted, they also propose a change of course for the aircraft to resolve this potential conflict. In this paper, we report on our work towards developing an advanced testing framework for separation assurance. Our framework supports automated test case generation and testing, and defines test oracles that capture algorithm requirements. We discuss three different approaches to test-case generation, their application to a separation assurance prototype, and their respective strengths and weaknesses. We also present an approach for statistical analysis of the large numbers of test results obtained from our framework.
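As a concrete illustration of what a separation assurance algorithm computes, here is a toy straight-line conflict probe of the kind such a testing framework would exercise. The 5 nm threshold, the look-ahead horizon, and the constant-velocity geometry are illustrative assumptions, not the prototype evaluated in the paper:

```python
import numpy as np

MIN_SEPARATION_NM = 5.0   # hypothetical horizontal separation minimum

def loss_of_separation(p1, v1, p2, v2, horizon=1200.0):
    """Predict loss of separation for two aircraft on straight-line tracks.

    p1, p2: positions (nm); v1, v2: velocities (nm/s); horizon in seconds.
    Returns (conflict?, time of closest approach, minimum distance).
    """
    p = np.asarray(p2, float) - np.asarray(p1, float)  # relative position
    v = np.asarray(v2, float) - np.asarray(v1, float)  # relative velocity
    vv = float(v @ v)
    # Time of closest approach, clamped to the look-ahead horizon.
    t_star = 0.0 if vv == 0.0 else min(max(-float(p @ v) / vv, 0.0), horizon)
    d_min = float(np.linalg.norm(p + t_star * v))
    return d_min < MIN_SEPARATION_NM, t_star, d_min

# Head-on encounter: a conflict is predicted at t = 500 s.
print(loss_of_separation([0, 0], [0.1, 0], [100, 0], [-0.1, 0]))
```

A test oracle for such a probe would assert, for instance, that whenever the true minimum distance over the horizon falls below the threshold, the algorithm reports a conflict.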
993.
This paper proposes a novel signal transformation and interpolation approach based on a modification of the DCT (Discrete Cosine Transform). The proposed algorithm can be applied to any periodic or quasi-periodic waveform for time-scale and/or pitch modification purposes, in addition to signal reconstruction, compression, coding, and packet loss concealment. The proposed algorithm has two advantages: (i) since the DCT does not carry explicit phase information, the cubic spline interpolation of the phase component required by the sinusoidal model is unnecessary; (ii) the number of parameters to be interpolated can be reduced thanks to the energy-packing efficiency of the DCT, which is particularly important if signal synthesis is carried out at a location remote from the transmitted parameters.
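To make advantage (i) concrete, the sketch below blends the DCT coefficients of two adjacent frames to synthesize an intermediate frame for time-scale modification; because the DCT is real-valued, there is no phase track to interpolate. This is a minimal illustration of the general idea, not the paper's specific DCT modification:

```python
import numpy as np
from scipy.fft import dct, idct

def interpolate_frames(frame_a, frame_b, alpha):
    """Synthesize an intermediate frame by interpolating DCT coefficients.

    No separate phase component exists in the DCT domain, so blending
    the coefficients directly yields a valid intermediate frame.
    """
    ca = dct(frame_a, norm="ortho")
    cb = dct(frame_b, norm="ortho")
    c = (1.0 - alpha) * ca + alpha * cb   # linear coefficient interpolation
    return idct(c, norm="ortho")

# Time-stretch a quasi-periodic signal by inserting an interpolated frame.
n = 256
t = np.arange(2 * n) / 8000.0
x = np.sin(2 * np.pi * 200 * t)
a, b = x[:n], x[n:]
mid = interpolate_frames(a, b, 0.5)       # one extra frame => ~1.5x stretch
stretched = np.concatenate([a, mid, b])
print(stretched.shape)                     # (768,)
```

Advantage (ii) would show up here as truncating the coefficient vectors before interpolation: most of the frame energy is packed into the low-order DCT coefficients, so fewer parameters need to be transmitted and interpolated.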
994.
This paper proposes a novel computer vision approach that processes video sequences of people walking and then recognises those people by their gait. Human motion carries different kinds of information that can be analysed in various ways. The skeleton carries motion information about human joints, and the silhouette carries information about the boundary motion of the human body. Moreover, binary and grey-level images contain different information about human movements. This work proposes to recover these different kinds of information to interpret the global motion of the human body based on four different segmented image models, using a fusion model to improve classification. Our proposed method considers the set of segmented frames of each individual as a distinct class and each frame as an object of this class. The methodology applies background extraction using the Gaussian Mixture Model (GMM), scale reduction based on the Wavelet Transform (WT), and feature extraction by Principal Component Analysis (PCA). We propose four new schemas for motion information capture: the Silhouette-Gray-Wavelet model (SGW) captures motion based on grey-level variations; the Silhouette-Binary-Wavelet model (SBW) captures motion based on binary information; the Silhouette-Edge-Binary model (SEW) captures motion based on edge information; and the Silhouette-Skeleton-Wavelet model (SSW) captures motion based on skeleton movement. The classification rates obtained separately from these four different models are then merged using a new proposed fusion technique. The results suggest excellent performance in terms of recognising people by their gait.
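A heavily simplified sketch of one path through such a pipeline (GMM background subtraction, one level of Haar wavelet scale reduction, PCA) might look as follows. The random frames are hypothetical stand-ins marking where real video frames would enter; the four SGW/SBW/SEW/SSW variants and the fusion step are omitted:

```python
import numpy as np
import cv2                      # pip install opencv-python
import pywt                     # pip install PyWavelets
from sklearn.decomposition import PCA

# GMM background subtractor (OpenCV's MOG2 implementation).
subtractor = cv2.createBackgroundSubtractorMOG2()

def frame_features(frame_gray):
    """Silhouette -> wavelet approximation -> flattened feature vector."""
    silhouette = subtractor.apply(frame_gray)                 # GMM foreground mask
    approx, _ = pywt.dwt2(silhouette.astype(float), "haar")   # scale reduction
    return approx.ravel()

# Hypothetical stand-ins for grayscale video frames of one walking person.
rng = np.random.default_rng(0)
frames = rng.integers(0, 255, size=(30, 64, 48), dtype=np.uint8)
X = np.stack([frame_features(f) for f in frames])

pca = PCA(n_components=8)        # compact per-frame gait descriptor
gait_descriptors = pca.fit_transform(X)
print(gait_descriptors.shape)    # (30, 8)
```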
995.
One of the main goals of an applied research field such as software engineering is the transfer and widespread use of research results in industry. To impact industry, researchers developing technologies in academia need to provide tangible evidence of the advantages of using them. This can be done through step-wise validation, enabling researchers to gradually test and evaluate technologies and finally try them in real settings with real users and applications. The evidence obtained, together with detailed information on how the validation was conducted, offers rich decision-support material for industry practitioners seeking to adopt new technologies and for researchers looking for an empirical basis on which to build new or refined technologies. This paper presents a model for evaluating the rigor and industrial relevance of technology evaluations in software engineering. The model is applied and validated in a comprehensive systematic literature review of evaluations of requirements engineering technologies published in software engineering journals. The aim is to show the applicability of the model and to characterize how evaluations are carried out and reported, in order to assess the state of research. The review shows that the model can be applied to characterize evaluations in requirements engineering. The findings from applying the model also show that the majority of technology evaluations in requirements engineering lack both industrial relevance and rigor. In addition, the research field does not show any improvement in industrial relevance over time.
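One plausible encoding of such a rigor/relevance rubric as a scoring aid is sketched below; the aspect names and scales are hypothetical illustrations of the kind of model described, not the paper's exact instrument:

```python
from dataclasses import dataclass

@dataclass
class Evaluation:
    """Rubric scores for one published technology evaluation.

    Hypothetical scales: rigor aspects scored 0 / 0.5 / 1
    (absent / partial / strong); relevance aspects scored 0 / 1.
    """
    context_described: float
    design_described: float
    validity_discussed: float
    realistic_subjects: int
    industrial_context: int
    realistic_scale: int

    def rigor(self) -> float:
        return self.context_described + self.design_described + self.validity_discussed

    def relevance(self) -> int:
        return self.realistic_subjects + self.industrial_context + self.realistic_scale

e = Evaluation(1, 0.5, 0, 0, 1, 0)
print(e.rigor(), e.relevance())   # 1.5 1 -> low rigor, low relevance
```

Aggregating such scores across a systematic literature review makes trends (or, as the review found, the absence of improvement over time) directly measurable.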
996.
Although the deterministic flow shop model is one of the most widely studied problems in scheduling theory, its stochastic analog has remained a challenge. No computationally efficient optimization procedure exists even for the general two-machine version. In this paper, we describe three heuristic procedures for the stochastic, two-machine flow shop problem and report on computational experiments that compare their effectiveness. We focus on heuristic procedures that can be adapted for dispatching without the need for computer simulation or computer-based search. We find that all three procedures are capable of quickly generating solutions close to the best known sequences, which were obtained by extensive search.
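The abstract does not spell out the three heuristics, but a natural dispatching rule for this problem applies Johnson's rule to expected processing times. The sketch below illustrates that flavour of heuristic (an assumption for illustration, not necessarily one of the paper's three):

```python
def johnson_on_means(jobs):
    """Johnson's rule applied to expected processing times.

    jobs: list of (name, E[t on machine 1], E[t on machine 2]).
    Jobs with E[t1] <= E[t2] go first in increasing E[t1]; the rest
    go last in decreasing E[t2]. Needs no simulation or search, so it
    can be used directly for dispatching.
    """
    front = sorted((j for j in jobs if j[1] <= j[2]), key=lambda j: j[1])
    back = sorted((j for j in jobs if j[1] > j[2]), key=lambda j: -j[2])
    return front + back

jobs = [("J1", 3.0, 6.0), ("J2", 5.0, 2.0), ("J3", 1.0, 2.0), ("J4", 4.0, 4.0)]
print([name for name, *_ in johnson_on_means(jobs)])  # ['J3', 'J1', 'J4', 'J2']
```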
997.
Anne Martens, Heiko Koziolek, Lutz Prechelt, Ralf Reussner 《Empirical Software Engineering》2011,16(5):587-622
Model-based performance evaluation methods for software architectures can help architects to assess design alternatives and save the costs of late life-cycle performance fixes. A recent trend is component-based performance modelling, which aims at creating reusable performance models; a number of such methods have been proposed during the last decade. Their accuracy and the effort needed for modelling are heavily influenced by human factors, which are so far hardly understood empirically. Do component-based methods permit performance predictions of comparable accuracy while saving effort in a reuse scenario? We examined three monolithic methods (SPE, umlPSI, Capacity Planning (CP)) and one component-based performance evaluation method (PCM) with regard to their accuracy and effort from the viewpoint of method users. We conducted a series of three experiments (with different levels of control) involving 47 computer science students. In the first experiment, we compared the applicability of the monolithic methods in order to choose one of them for comparison. In the second experiment, we compared the accuracy and effort of this monolithic method and the component-based method for the model-creation case. In the third, we studied the effort reduction from reusing component-based models. Data were collected from the resulting artefacts, questionnaires, and screen recordings, and were analysed using hypothesis testing, linear models, and analysis of variance. For the monolithic methods, we found that using SPE and CP resulted in accurate predictions, while umlPSI produced over-estimates. Comparing the component-based method PCM with SPE, we found that creating reusable models using PCM takes more (but not drastically more) time than using SPE, and that participants can create accurate models with both techniques. Finally, we found that reusing PCM models can save time, because the effort to reuse can be explained by a model that is independent of the inner complexity of a component. The tasks performed in our experiments reflect only a subset of the actual activities involved in applying model-based performance evaluation methods in a software development process. Our results indicate that sufficient prediction accuracy can be achieved with both monolithic and component-based methods, and that the higher effort of component-based performance modelling will indeed pay off when the component models incorporate and hide a sufficient amount of complexity.
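The statistical machinery mentioned (hypothesis testing on effort measurements) can be illustrated with a minimal sketch; the effort numbers below are synthetic placeholders standing in for the kind of data the experiments collected, not the study's results:

```python
import numpy as np
from scipy import stats

# Hypothetical modelling-effort measurements (minutes) per participant.
rng = np.random.default_rng(1)
effort_spe = rng.normal(180, 30, 16)   # monolithic method (SPE)
effort_pcm = rng.normal(230, 40, 16)   # component-based method (PCM)

# Two-sample test of the "PCM takes more time than SPE" hypothesis;
# Welch's variant avoids assuming equal variances across groups.
t, p = stats.ttest_ind(effort_pcm, effort_spe, equal_var=False)
print(f"Welch t = {t:.2f}, p = {p:.4f}")
```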
998.
Nobuyuki Kobayashi, Tsubasa Wago, Yoshiki Sugawara 《Multibody System Dynamics》2011,26(3):265-281
A method of reducing the system matrices of a planar flexible beam described by an absolute nodal coordinate formulation (ANCF) is presented. The method exploits the fact that the bending stiffness matrix obtained by applying a continuum mechanics approach to the ANCF beam element is constant when the axial strain is not very large. This feature allows the Craig–Bampton method to be applied to the equation of motion composed of the independent coordinates once the constraint forces are eliminated. Four numerical examples comparing the proposed method with the conventional ANCF are presented to verify the performance and accuracy of the proposed method. These examples verify that the proposed method can describe large deformation effects, such as dynamic stiffening due to centrifugal force, as well as the conventional ANCF does. The method also reduces the computing time while maintaining an acceptable degree of accuracy relative to the conventional ANCF when the modal truncation number is adequately chosen. This reduction in CPU time is particularly pronounced for a large element number and a small modal truncation number, and it holds not only for small deformations but also for fairly large deformations.
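A generic Craig–Bampton reduction on a toy mass–spring chain conveys the mechanics of the approach: keep the boundary degrees of freedom, add static constraint modes and a truncated set of fixed-interface normal modes. This is a sketch of the standard method, not the paper's ANCF-specific formulation:

```python
import numpy as np
from scipy.linalg import eigh

def craig_bampton(M, K, boundary, n_modes):
    """Craig-Bampton reduction of (M, K): boundary DOFs are retained,
    interior DOFs are replaced by n_modes fixed-interface modes."""
    n = M.shape[0]
    b = np.asarray(boundary)
    i = np.setdiff1d(np.arange(n), b)
    Kii, Kib = K[np.ix_(i, i)], K[np.ix_(i, b)]
    # Static constraint modes: interior response to unit boundary motion.
    Psi = -np.linalg.solve(Kii, Kib)
    # Fixed-interface normal modes (boundary clamped); keep the lowest ones.
    _, Phi = eigh(Kii, M[np.ix_(i, i)])
    Phi = Phi[:, :n_modes]
    # Assemble the transformation u = T @ [u_b; eta].
    T = np.zeros((n, len(b) + n_modes))
    T[b, :len(b)] = np.eye(len(b))
    T[i, :len(b)] = Psi
    T[i, len(b):] = Phi
    return T.T @ M @ T, T.T @ K @ T, T

# Toy chain of unit masses and springs; 10 DOFs -> 2 boundary + 3 modes.
n = 10
M = np.eye(n)
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Mr, Kr, T = craig_bampton(M, K, boundary=[0, n - 1], n_modes=3)
print(Mr.shape)   # (5, 5)
```

The paper's observation that the ANCF bending stiffness matrix stays constant at small axial strain is what makes a fixed transformation of this kind admissible for the flexible beam.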
999.
Real-world time series have certain properties, such as stationarity, seasonality, and linearity, among others, which determine their underlying behaviour. There is a particular class of time series, called long-memory processes, characterized by a persistent temporal dependence between distant observations: the time series values depend not only on recent past values but also on observations from much earlier time periods. The main purpose of this research is the development, application, and evaluation of a computational intelligence method specifically tailored for long-memory time series forecasting, with emphasis on many-step-ahead prediction. The method proposed here is a hybrid combining genetic programming and the fractionally integrated (long-memory) component of autoregressive fractionally integrated moving average (ARFIMA) models. Another objective of this study is the discovery of useful, comprehensible, novel knowledge, represented as time series predictive models. In this respect, a new evolutionary multi-objective search method is proposed to limit the complexity of evolved solutions and to improve predictive quality. Using these methods allows for obtaining lower-complexity (and possibly more comprehensible) models with high predictive quality, keeping run time and memory requirements low, and avoiding bloat and over-fitting. The methods are assessed on five real-world long-memory time series, and their performance is compared to that of statistical models reported in the literature. Experimental results show the proposed methods' advantages in long-memory time series forecasting.
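The fractionally integrated component the hybrid builds on is the fractional difference operator (1 - B)^d. A minimal sketch of its binomial-expansion filter, the standard ARFIMA building block (independent of the paper's genetic programming machinery), is:

```python
import numpy as np

def fracdiff(x, d, n_weights=100):
    """Apply the fractional difference operator (1 - B)^d to a series.

    The weights w_k = (-1)^k * C(d, k) follow the recursion below and
    decay hyperbolically, so distant observations keep influencing the
    present -- the defining trait of long memory.
    """
    w = np.empty(n_weights)
    w[0] = 1.0
    for k in range(1, n_weights):            # binomial-expansion recursion
        w[k] = w[k - 1] * (k - 1 - d) / k
    return np.convolve(x, w, mode="full")[: len(x)]

rng = np.random.default_rng(42)
noise = rng.standard_normal(500)
# Negative d is fractional *integration*: applying (1 - B)^(-0.4) to
# white noise synthesizes a series with long-range dependence.
y = fracdiff(noise, d=-0.4)
print(y[:3])
```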
1000.
Alberto Pardo, João Paulo Fernandes, João Saraiva 《Higher-Order and Symbolic Computation》2011,24(1-2):115-149
Functional programs often combine separate parts using intermediate data structures to communicate results. Programs defined in this way are modular and easier to understand and maintain, but they suffer from inefficiencies due to the generation of those gluing data structures. To eliminate such redundant data structures, several program transformation techniques have been proposed. One such technique is shortcut fusion, which has been studied in the context of both pure and monadic functional programs. In this paper, we study several extensions of shortcut fusion, so that, alternatively, circular or higher-order programs are derived. These extensions are provided for both effect-free and monadic programs. Our work results in a set of generic calculation rules that are widely applicable and whose correctness is formally established.
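The classic shortcut fusion rule is foldr k z (build g) = g k z. The sketch below transliterates it into Python purely for illustration; the paper itself works in a Haskell-like calculational setting and extends such rules to circular, higher-order, and monadic programs:

```python
from functools import reduce

def foldr(k, z, xs):
    """Right fold: foldr k z [x1, x2, ...] = k(x1, k(x2, ... z))."""
    return reduce(lambda acc, x: k(x, acc), reversed(xs), z)

def build(g):
    """Producer abstracted over the list constructors: build g = g(cons, nil)."""
    return g(lambda x, xs: [x] + xs, [])

def upto(n):
    """enumFromTo 1 n, written in build form so it can be fused."""
    def g(cons, nil):
        def go(i):
            return nil if i > n else cons(i, go(i + 1))
        return go(1)
    return build(g)  # materializes an actual intermediate list

# Unfused: upto(5) builds [1..5], which foldr then consumes.
print(foldr(lambda x, acc: x + acc, 0, upto(5)))   # 15

# Fused by the rule foldr k z (build g) == g k z: feed the fold's
# operators straight into the producer, so no list is ever built.
def g(cons, nil):
    def go(i):
        return nil if i > 5 else cons(i, go(i + 1))
    return go(1)
print(g(lambda x, acc: x + acc, 0))                # 15
```

The fused version computes the same sum but never allocates the gluing list, which is exactly the inefficiency the abstract describes eliminating.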