  Paid full text   3163 articles
  Free   133 articles
  Domestic free   4 articles
Electrical engineering   41 articles
General   5 articles
Chemical industry   557 articles
Metalworking   32 articles
Machinery and instrumentation   36 articles
Building science   161 articles
Mining engineering   4 articles
Energy and power   104 articles
Light industry   377 articles
Water conservancy   15 articles
Petroleum and natural gas   6 articles
Radio and electronics   274 articles
General industrial technology   521 articles
Metallurgy   559 articles
Nuclear technology   23 articles
Automation technology   585 articles
  2023   23 articles
  2022   16 articles
  2021   62 articles
  2020   40 articles
  2019   68 articles
  2018   66 articles
  2017   68 articles
  2016   56 articles
  2015   73 articles
  2014   93 articles
  2013   193 articles
  2012   135 articles
  2011   200 articles
  2010   176 articles
  2009   142 articles
  2008   167 articles
  2007   153 articles
  2006   137 articles
  2005   120 articles
  2004   102 articles
  2003   105 articles
  2002   94 articles
  2001   58 articles
  2000   54 articles
  1999   72 articles
  1998   141 articles
  1997   92 articles
  1996   75 articles
  1995   52 articles
  1994   55 articles
  1993   52 articles
  1992   24 articles
  1991   21 articles
  1990   17 articles
  1989   21 articles
  1988   17 articles
  1987   22 articles
  1986   22 articles
  1985   25 articles
  1984   13 articles
  1983   10 articles
  1982   16 articles
  1981   16 articles
  1979   21 articles
  1978   10 articles
  1977   13 articles
  1976   24 articles
  1975   16 articles
  1974   10 articles
  1971   8 articles
Sorted results: 3300 matches found (search time: 31 ms)
81.
In this paper we present FeynRules, a new Mathematica package that facilitates the implementation of new particle physics models. After the user implements the basic model information (e.g., particle content, parameters and Lagrangian), FeynRules derives the Feynman rules and stores them in a generic form suitable for translation to any Feynman diagram calculation program. The model can then be translated to the format specific to a particular Feynman diagram calculator via FeynRules translation interfaces. Such interfaces have been written for CalcHEP/CompHEP, FeynArts/FormCalc, MadGraph/MadEvent and Sherpa, making it possible to write a new model once and have it work in all of these programs. In this paper, we describe how to implement a new model, generate the Feynman rules, use a generic translation interface, and write a new translation interface. We also discuss the details of the FeynRules code.

Program summary

Program title: FeynRules
Catalogue identifier: AEDI_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDI_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 15 980
No. of bytes in distributed program, including test data, etc.: 137 383
Distribution format: tar.gz
Programming language: Mathematica
Computer: Platforms on which Mathematica is available
Operating system: Operating systems on which Mathematica is available
Classification: 11.1, 11.2, 11.6
Nature of problem: Automatic derivation of Feynman rules from a Lagrangian. Implementation of new models into Monte Carlo event generators and FeynArts.
Solution method: FeynRules works in two steps:
1. derivation of the Feynman rules directly from the Lagrangian using canonical commutation relations among fields and creation operators.
2. implementation of the new physics model into FeynArts as well as various Monte Carlo programs via interfaces.
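As a toy illustration of step 1, the vertex factor of a polynomial interaction term can be read off by differentiating the Lagrangian with respect to the field. The sketch below does this in Python/SymPy for a scalar phi^4 theory; it is only a caricature of the idea, since FeynRules itself is a Mathematica package that handles general field content, index structures, and momentum-dependent vertices. All names here are illustrative.

```python
# Toy sketch (not FeynRules): read a vertex factor off a polynomial
# interaction Lagrangian by differentiation. For L_int = -(lambda/4!)*phi**4
# the phi^4 vertex is i * d^4(L_int)/d(phi)^4 = -i*lambda, the textbook rule.
import sympy as sp

phi, lam = sp.symbols('phi lambda_', real=True)
L_int = -lam / sp.factorial(4) * phi**4   # interaction Lagrangian density

# Vertex factor for an n-point interaction: i times the n-th derivative
# of L_int with respect to the field, evaluated at phi = 0.
n = 4
vertex = sp.I * sp.diff(L_int, phi, n).subs(phi, 0)
print(vertex)   # -> -I*lambda_
```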
82.
Fixed-priority scheduling with deferred preemption (FPDS) has been proposed in the literature as a viable alternative to fixed-priority pre-emptive scheduling (FPPS), which obviates the need for non-trivial resource access protocols and reduces the cost of arbitrary preemptions. This paper shows that existing worst-case response time analysis of hard real-time tasks under FPDS, arbitrary phasing and relative deadlines at most equal to periods is pessimistic and/or optimistic. The same problem also arises for fixed-priority non-pre-emptive scheduling (FPNS), being a special case of FPDS. This paper provides a revised analysis, resolving the problems with the existing approaches. The analysis is based on known concepts of critical instant and busy period for FPPS. To accommodate our scheduling model for FPDS, we need to slightly modify existing definitions of these concepts. The analysis assumes a continuous scheduling model, which is based on a partitioning of the timeline into a set of non-empty, right semi-open intervals. It is shown that the critical instant, longest busy period, and worst-case response time for a task are suprema rather than maxima for all tasks, except for the lowest priority task. Hence, that instant, period, and response time cannot be assumed for any task, except for the lowest priority task. Moreover, it is shown that the analysis is not uniform for all tasks, i.e. the analysis for the lowest priority task differs from the analysis of the other tasks. These anomalies for the lowest priority task are an immediate consequence of the fact that only the lowest priority task cannot be blocked. To build on earlier work, the worst-case response time analysis for FPDS is expressed in terms of known worst-case analysis results for FPPS. The paper includes pessimistic variants of the analysis, which are uniform for all tasks, illustrates the revised analysis for an advanced model for FPDS, where tasks are structured as flow graphs of subjobs rather than sequences, and shows that our analysis is sustainable.
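For context, the classical FPPS worst-case response time analysis that this paper builds on (and revises for FPDS) is a fixed-point recurrence over interference from higher-priority tasks plus a blocking term. Below is a minimal sketch of that standard textbook recurrence, assuming deadlines at most equal to periods and tasks indexed in decreasing priority; it is not the paper's revised supremum-based FPDS analysis.

```python
import math

def wcrt_fpps(C, T, B, i, max_iter=1000):
    """Classical worst-case response time of task i under fixed-priority
    pre-emptive scheduling (tasks indexed by decreasing priority).
    B[i] is the blocking term; under deferred preemption it would be the
    longest lower-priority non-preemptable subjob. Iterates
    R = C_i + B_i + sum_{j<i} ceil(R/T_j)*C_j to a fixed point.
    """
    R = C[i] + B[i]
    for _ in range(max_iter):
        R_next = C[i] + B[i] + sum(math.ceil(R / T[j]) * C[j] for j in range(i))
        if R_next == R:
            return R
        R = R_next
    raise RuntimeError("no fixed point: task set may be unschedulable")

# Example: three tasks (C, T), highest priority first, no blocking.
C, T, B = [1, 2, 3], [5, 10, 20], [0, 0, 0]
print([wcrt_fpps(C, T, B, i) for i in range(3)])  # -> [1, 3, 7]
```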
83.
General Adaptive Neighborhood Choquet Image Filtering   Total citations: 1 (self-citations: 0, citations by others: 1)
A novel framework entitled General Adaptive Neighborhood Image Processing (GANIP) has recently been introduced to provide an original image representation and mathematical structure for adaptive image processing and analysis. The central idea is the key notion of adaptivity, which is simultaneously associated with the analyzing scales, the spatial structures, and the intensity values of the image to be addressed. In this paper, the GANIP framework is briefly reviewed and then studied in the context of Choquet filtering (using fuzzy measures), which generalizes a large class of image filters. The resulting spatially-adaptive operators are studied with respect to the general GANIP framework and illustrated in both biomedical and materials application areas. In addition, the proposed GAN-based filters are applied and compared to several other denoising methods in image restoration experiments, showing the high performance of the GAN-based Choquet filters.
Jean-Charles Pinoli
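For readers unfamiliar with Choquet filtering, the sketch below computes the discrete Choquet integral of a pixel neighborhood with respect to a cardinality-based (symmetric) fuzzy measure. This is only the generic integral such filters build on, not the GANIP adaptive-neighborhood operator itself; the patch values and measure are invented for illustration.

```python
import numpy as np

def choquet_integral(values, mu):
    """Discrete Choquet integral of `values` w.r.t. a symmetric fuzzy
    measure, where mu(k) is the measure of any k-element coalition.
    Uses sum_i (x_(i) - x_(i-1)) * mu(n - i + 1) over the ascending
    sort x_(1) <= ... <= x_(n), with x_(0) = 0.
    """
    x = np.sort(np.asarray(values, dtype=float))
    n = len(x)
    prev, total = 0.0, 0.0
    for i in range(n):
        total += (x[i] - prev) * mu(n - i)   # n - i points are >= x[i]
        prev = x[i]
    return total

patch = [10, 12, 50, 11, 13]          # 5-pixel neighborhood with an outlier
mu = lambda k: (k / 5) ** 2           # cardinality-based fuzzy measure
print(choquet_integral(patch, mu))    # 12.64: damps the bright outlier
```

With mu(k) = k/n this reduces to the arithmetic mean; the concave/convex choice of mu is what lets one Choquet operator span a large family of rank-weighted filters.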
84.
The article explores approaches to discourses concerning age across different agendas and national contexts. The Dialogue Seminar Method is introduced as a means of facilitating reflection and access to tacit knowledge. Democratic dialogue requires orchestration; it enables horizontal communication and collective reflection.
Johan Berglund
85.
In computational aero-acoustics, large-eddy simulations (LES) or direct numerical simulations (DNS) are often employed for flow computations in the source region. As part of the numerical implementation or required modeling, explicit spatial filters are frequently employed. For instance, in LES spatial filters are employed in the formulation of various subgrid-scale (SGS) models such as the dynamic model or the variational multi-scale (VMS) Smagorinsky model; in both LES and DNS, spatial high-pass filters are often used to remove undesired grid-to-grid oscillations. Though these types of spatial filters adhere to local accuracy requirements, in practice they often destroy global conservation properties in the presence of non-periodic boundary conditions. This leads to the incorrect prediction of flow properties near hard boundaries, such as walls. In the current work, we present globally conservative, high-order accurate filters, which combine traditional filters at the internal points with one-sided conservative filters near the wall boundary. We test these filters by removing grid-to-grid oscillations in both a channel-flow case and a 2D cavity flow. We find that the use of a non-conservative filter leads to erroneous predictions of the skin friction in channel flows by up to 30%. In the cavity-flow simulations, the use of non-conservative filters to remove grid-to-grid oscillations leads to important shifts in the Strouhal number of the dominant mode, and a change of the flow pattern inside the cavity. In all cases, the use of conservative high-order filter formulations to remove grid-to-grid oscillations leads to very satisfactory results. Finally, in our channel-flow test case, we also illustrate the importance of using conservative filters for the formulation of the VMS Smagorinsky model.
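The conservation mechanism can be illustrated compactly: write the explicit filter as a difference of face fluxes and zero the fluxes at the domain boundaries, so the telescoping sum leaves the field total exactly unchanged. A low-order Python sketch of this idea follows; the paper's actual filters are high-order with one-sided conservative boundary stencils.

```python
import numpy as np

def filter_conservative(f, alpha=0.25):
    """Explicit low-pass filter in flux (conservative) form. In the
    interior it equals the standard 3-point filter
    f_i + alpha*(f_{i+1} - 2 f_i + f_{i-1}); writing it as a difference
    of face fluxes and zeroing the fluxes at the domain faces makes
    sum(f) exactly invariant, i.e. 'globally conservative'.
    """
    g = np.zeros(len(f) + 1)          # fluxes at cell faces
    g[1:-1] = alpha * np.diff(f)      # interior faces: alpha*(f_{i+1}-f_i)
    # g[0] = g[-1] = 0: one-sided closure, no flux through the walls
    return f + np.diff(g)             # f_i + (g_{i+1/2} - g_{i-1/2})

# 2-grid ("grid-to-grid") oscillation on top of a smooth trend.
f = np.cos(np.pi * np.arange(16)) + np.linspace(0.0, 1.0, 16)
ff = filter_conservative(f)
print(np.isclose(f.sum(), ff.sum()))  # True: global conservation holds
```

With alpha = 0.25 the interior stencil annihilates the pure 2-grid mode while leaving the smooth trend nearly untouched, which is exactly the role these filters play in LES/DNS.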
86.
Crop identification on specific parcels and the assessment of soil management practices are important for agro-ecological studies, greenhouse gas modeling, and agrarian policy development. Traditional pixel-based analysis of remotely sensed data results in inaccurate identification of some crops due to pixel heterogeneity, mixed pixels, spectral similarity, and crop pattern variability. These problems can be overcome using object-based image analysis (OBIA) techniques, which incorporate new spectral, textural and hierarchical features after segmentation of imagery. We combined OBIA and decision tree (DT) algorithms to develop a methodology, named Object-based Crop Identification and Mapping (OCIM), for a multi-seasonal assessment of a large number of crop types and field status. In our approach, we explored several vegetation indices (VIs) and textural features derived from visible, near-infrared and short-wave infrared (SWIR) bands of ASTER satellite scenes collected during three distinct growing-season periods (mid-spring, early-summer and late-summer). OCIM was developed for 13 major crops cultivated in the agricultural area of Yolo County in California, USA. The model design was built in four different scenarios (combinations of three or two periods) by using two independent training and validation datasets, and the best DTs resulted in an error rate of 9% for the three-period model and between 12 and 16% for the two-period models. Next, the selected DT was used for the thematic classification of the entire cropland area, and mapping was then evaluated by applying the confusion matrix method to the independent testing dataset, which reported 79% overall accuracy. OCIM detected intra-class variations in most crops, attributed to variability from local crop calendars, tree-orchard structures and land management operations. Spectral variables (based on VIs) contributed around 90% to the models, although textural variables were necessary to discriminate between most of the permanent crop-fields (orchards, vineyard, alfalfa and meadow). Features extracted from late-summer imagery contributed around 60% to classification model development, whereas mid-spring and early-summer imagery contributed around 30 and 10%, respectively. The Normalized Difference Vegetation Index (NDVI) was used to identify the main groups of crops based on the presence and vigor of green vegetation within the fields, contributing around 50% to the models. In addition, other VIs based on SWIR bands were also crucial to crop identification because of their potential to detect field properties like moisture, vegetation vigor, non-photosynthetic vegetation and bare soil. The OCIM method was built using interpretable rules based on physical properties of the crops studied, and it was successful for object-based feature selection and crop identification.
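A minimal sketch of the two ingredients named above: a vegetation index (NDVI) computed per object, fed to a decision tree. The band values, crop classes, and feature set below are invented for illustration; the real OCIM models also use SWIR-based indices and texture features.

```python
# Hypothetical per-object multi-seasonal NDVI features -> decision tree.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - R) / (NIR + R)."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red + 1e-9)

# One NDVI value per imagery date (mid-spring, early-summer, late-summer).
X = np.array([[0.7, 0.8, 0.2],    # winter grain: green spring, bare late
              [0.3, 0.7, 0.8],    # summer crop: greens up late
              [0.6, 0.6, 0.6],    # orchard/alfalfa: persistent signal
              [0.1, 0.1, 0.1]])   # fallow / bare soil
y = ['winter_grain', 'summer_crop', 'permanent', 'fallow']

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(clf.predict([[0.65, 0.75, 0.25]]))  # spring-green, summer-bare profile
```

The appeal of the DT here matches the paper's point: the learned splits are interpretable thresholds on physically meaningful indices.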
87.
Dagstuhl seminar no. 10102 on discrete event logistic systems recognized a network of persistent models to be a "Grand Challenge." Such an on-line model network will offer an infrastructure that facilitates the management of logistic operations. This ambition to create a network of persistent models implies a radical shift for model design activities, as the objective is an infrastructure rather than application-specific solutions. In particular, model developers can no longer assume that they know what their model will be used for. It is no longer possible to design for the expected. This paper presents insights into model development and design in the absence of precise knowledge concerning a model's usage. Basically, model developers may rely solely on the presence of the real-world counterpart mirrored by their model and a general idea about the nature of the application (e.g. coordination of logistic operations). When the invariants of their real-world counterpart suffice for models to be valid, these models become reusable and integrable. As these models remain valid under a wide range of situations, they become multi-purpose and durable resources rather than single-purpose short-lived components, or worse, legacy. Moreover, and more specifically, the paper describes how to build models that allow their users to generate predictions in unexpected situations and atypical conditions. Referring to previous work, the paper concisely discusses how these predictions can be generated starting from the models. This prediction-generating technology is currently being transferred into an industrial MES.
88.
An important objective of data mining is the development of predictive models. Based on a number of observations, a model is constructed that allows analysts to provide classifications or predictions for new observations. Currently, most research focuses on improving the accuracy or precision of these models, and comparatively little research has been undertaken to increase their comprehensibility to the analyst or end-user. This is mainly due to the subjective nature of 'comprehensibility', which depends on many factors outside the model, such as the user's experience and his/her prior knowledge. Despite this influence of the observer, some representation formats are generally considered to be more easily interpretable than others. In this paper, an empirical study is presented which investigates the suitability of a number of alternative representation formats for classification when interpretability is a key requirement. The formats under consideration are decision tables, (binary) decision trees, propositional rules, and oblique rules. An end-user experiment was designed to test the accuracy, response time, and answer confidence for a set of problem-solving tasks involving these representations. Analysis of the results reveals that decision tables perform significantly better on all three criteria, while post-test voting also reveals a clear preference of users for decision tables in terms of ease of use.
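To make the contrast between two of the studied formats concrete, the sketch below expresses one invented toy policy both as a decision table (an exhaustive lookup whose completeness is checkable by construction) and as equivalent propositional rules. The domain and values are hypothetical.

```python
from itertools import product

# Decision table: one row per combination of condition values.
table = {
    ('low',  'yes'): 'reject',
    ('low',  'no'):  'reject',
    ('high', 'yes'): 'review',
    ('high', 'no'):  'accept',
}

def classify_rules(income, has_defaults):
    # The same policy as propositional rules.
    if income == 'low':
        return 'reject'
    if has_defaults == 'yes':
        return 'review'
    return 'accept'

# The table makes completeness and consistency mechanically checkable:
assert set(table) == set(product(('low', 'high'), ('yes', 'no')))
assert all(table[(i, d)] == classify_rules(i, d) for i, d in table)
```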
89.
Introduction: Several statistical methods of assessing seasonal variation are available. Brookhart and Rothman [3] proposed a second-order moment-based estimator based on the geometrical model derived by Edwards [1], and reported that this estimator is superior to Edwards' estimator in estimating the peak-to-trough ratio of seasonal variation with respect to bias and mean squared error. Alternatively, seasonal variation may be modelled using a Poisson regression model, which provides flexibility in modelling the pattern of seasonal variation and allows adjustment for covariates.
Method: In a Monte Carlo simulation study, three estimators, one based on the geometrical model and two based on log-linear Poisson regression models, were evaluated with respect to bias and standard deviation (SD). We evaluated the estimators on data simulated according to schemes varying in the degree of seasonal variation and the presence of a secular trend. All methods and analyses in this paper are available in the R package Peak2Trough [13].
Results: Applying a Poisson regression model resulted in lower absolute bias and SD for data simulated according to the corresponding model assumptions. The Poisson regression models also had lower bias and SD than the geometrical model for data simulated to deviate from the corresponding model assumptions.
Conclusion: This simulation study encourages the use of Poisson regression models, as opposed to the geometrical model, for estimating the peak-to-trough ratio of seasonal variation.
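A sketch of the log-linear Poisson estimator of the peak-to-trough ratio on simulated monthly counts (the paper's own implementations are in the R package Peak2Trough; the model and simulation parameters below are illustrative). For log mu_t = b0 + b1*cos(wt) + b2*sin(wt), the ratio of the seasonal maximum to minimum is exp(2*sqrt(b1^2 + b2^2)).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
months = np.arange(120)                       # ten years of monthly counts
w = 2 * np.pi * months / 12
true_mu = np.exp(3.0 + 0.2 * np.cos(w))       # true peak-to-trough exp(0.4)
y = rng.poisson(true_mu)

# Log-linear Poisson regression with one seasonal harmonic.
X = sm.add_constant(np.column_stack([np.cos(w), np.sin(w)]))
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
b1, b2 = fit.params[1], fit.params[2]
print(np.exp(2 * np.hypot(b1, b2)))           # estimate, ~ exp(0.4) = 1.49
```

Covariates or a secular trend are handled by simply adding columns to X, which is the flexibility the abstract credits to the regression approach.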
90.
3D model alignment is an important step for applications such as 3D model retrieval and 3D model recognition. In this paper, we propose a novel Minimum Projection Area-based (MPA) alignment method for pose normalization. Our method finds three principal axes to align a model: the first principal axis gives the minimum projection area when we perform an orthographic projection of the model in the direction parallel to this axis, the second axis is perpendicular to the first axis and gives the minimum projection area, and the third axis is the cross product of the first two axes. We devise an optimization method based on Particle Swarm Optimization to efficiently find the axis with minimum projection area. For application in retrieval, we further perform axis ordering and orientation in order to align similar models in similar poses. We have tested MPA on several standard databases that include rigid/non-rigid and open/watertight models. Experimental results demonstrate that MPA performs well in finding alignment axes that are parallel to the ideal canonical coordinate frame of models and in aligning similar models in similar poses under different conditions such as model variations, noise, and initial poses. In addition, it achieves better 3D model retrieval performance than several commonly used approaches such as CPCA, NPCA, and PCA.
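The MPA objective can be sketched directly: the area of a model's orthographic silhouette along a direction, approximated below by the 2D convex hull of the projected vertices (exact only for convex shapes), with a coarse direction sweep standing in for the paper's Particle Swarm Optimization. Shapes and candidate directions are illustrative.

```python
import numpy as np
from scipy.spatial import ConvexHull

def projection_area(points, d):
    """Approximate silhouette area of a point set projected along d."""
    d = d / np.linalg.norm(d)
    # Build an orthonormal basis (u, v) of the plane perpendicular to d.
    a = np.array([1.0, 0, 0]) if abs(d[0]) < 0.9 else np.array([0, 1.0, 0])
    u = np.cross(d, a); u /= np.linalg.norm(u)
    v = np.cross(d, u)
    pts2d = points @ np.column_stack([u, v])
    return ConvexHull(pts2d).volume      # in 2D, .volume is the area

# Elongated box: the minimum-area silhouette looks down the long axis.
box = np.array([[x, y, z] for x in (0, 4) for y in (0, 1) for z in (0, 1)],
               dtype=float)
dirs = [np.array(d, float) for d in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]]
best = min(dirs, key=lambda d: projection_area(box, d))
print(best)                              # -> [1. 0. 0.], the first MPA axis
```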