4,029 results found in total (search time: 15 ms). Entries 91–100 are shown below.
91.
When performing a classification task, we may find data sets whose class distribution is uneven among their patterns. This problem is known as classification with imbalanced data sets, and it appears in many real application areas; for this reason, it has recently become a relevant topic in Machine Learning. The aim of this work is to improve the behaviour of fuzzy rule based classification systems (FRBCSs) on imbalanced data sets by means of a tuning step. Specifically, we adapt the 2-tuples based genetic tuning approach to classification problems, showing the good synergy between this method and several FRBCSs. Our empirical results show that the 2-tuples based genetic tuning increases the performance of FRBCSs on all types of imbalanced data. Furthermore, when the initial Rule Base, built by a fuzzy rule learning methodology, already behaves well in terms of accuracy, the genetic 2-tuples post-processing step yields an even higher improvement for the whole model. This enhancement is also obtained when the tuning cooperates with a preprocessing stage, confirming the need to rebalance the training set before the learning phase when dealing with imbalanced data.
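To make the 2-tuple lateral tuning idea concrete, here is a minimal, self-contained Python sketch (not the authors' implementation; the toy data, the single-antecedent rule base, and the use of plain random search instead of a genetic algorithm are our own illustrative assumptions). Each fuzzy label is paired with a displacement alpha in [-0.5, 0.5] that shifts its membership function sideways, and candidates are scored with the geometric mean of per-class accuracies, a metric meaningful under class imbalance.

import random

def tri(x, a, b, c):
    """Triangular membership degree of x in the fuzzy set (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def predict(x, labels, alphas, rule_classes, step=1.0):
    """Single-antecedent FRBCS: fire each rule 'IF x is L_i THEN class c_i' after
    shifting label i laterally by alphas[i] * step, and return the winning class."""
    degs = [tri(x, a + al * step, b + al * step, c + al * step)
            for (a, b, c), al in zip(labels, alphas)]
    return rule_classes[max(range(len(degs)), key=degs.__getitem__)]

def gmean(alphas, data, labels, rule_classes):
    """Geometric mean of per-class accuracies, robust to class imbalance."""
    hits, totals = {0: 0, 1: 0}, {0: 0, 1: 0}
    for x, y in data:
        totals[y] += 1
        hits[y] += predict(x, labels, alphas, rule_classes) == y
    return (hits[0] / totals[0] * hits[1] / totals[1]) ** 0.5

# Toy imbalanced problem: 90 majority examples around x = 1.0, 10 minority around x = 2.3.
labels = [(0.0, 1.0, 2.0), (1.0, 2.0, 3.0)]   # two triangular labels
rule_classes = [0, 1]                         # one rule per label
data = ([(random.gauss(1.0, 0.3), 0) for _ in range(90)]
        + [(random.gauss(2.3, 0.3), 1) for _ in range(10)])

# Plain random search stands in for the genetic algorithm used in the paper.
best = max(([random.uniform(-0.5, 0.5) for _ in labels] for _ in range(200)),
           key=lambda al: gmean(al, data, labels, rule_classes))
print("best displacements:", best, "g-mean:", round(gmean(best, data, labels, rule_classes), 3))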
92.
Functional programs often combine separate parts using intermediate data structures to communicate results. Programs defined in this way are modular and easier to understand and maintain, but they suffer from inefficiencies due to the generation of those gluing data structures. Several program transformation techniques have been proposed to eliminate such redundant data structures. One of them is shortcut fusion, which has been studied in the context of both pure and monadic functional programs. In this paper, we study several extensions of shortcut fusion that, alternatively, derive circular or higher-order programs. These extensions are provided for both effect-free and monadic programs. Our work results in a set of generic calculation rules that are widely applicable and whose correctness is formally established.
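For readers unfamiliar with shortcut fusion, the following sketch illustrates the classical foldr/build rule in Python (chosen only for uniformity with the other examples; the paper works in a functional-programming setting, and the helper names foldr, build and upto are ours): a consumer expressed with foldr applied to a producer expressed with build can be fused so that the intermediate list is never allocated, since foldr(k, z, build(g)) equals g(k, z).

from functools import reduce

def foldr(k, z, xs):
    """Right fold: foldr(k, z, [x1, ..., xn]) = k(x1, k(x2, ... k(xn, z)))."""
    return reduce(lambda acc, x: k(x, acc), reversed(xs), z)

def build(g):
    """Materialise the list that g describes abstractly in terms of 'cons' and 'nil'."""
    return g(lambda x, xs: [x] + xs, [])

def upto(n):
    """Producer: the list [1..n], described abstractly via cons/nil."""
    return lambda cons, nil: reduce(lambda acc, i: cons(i, acc), range(n, 0, -1), nil)

# Consumer (sum) over producer (upto), with and without the intermediate list.
n = 5
unfused = foldr(lambda x, acc: x + acc, 0, build(upto(n)))   # allocates [1, 2, 3, 4, 5] first
fused   = upto(n)(lambda x, acc: x + acc, 0)                 # fused form g(k, z): no list at all
assert unfused == fused == 15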
93.
In the classification framework there are problems in which the number of examples per class is not equitably distributed, a situation known as imbalanced data sets. This is a handicap when trying to identify the minority classes, as learning algorithms are not usually adapted to such characteristics. A usual approach to deal with imbalanced data sets is the use of a preprocessing step. In this paper we analyze the usefulness of data complexity measures for evaluating the behavior of undersampling and oversampling methods. Two classical learning methods, C4.5 and PART, are considered over a wide range of imbalanced data sets built from real data. Specifically, several oversampling techniques and an evolutionary undersampling one have been selected for the study. We extract behavior patterns from the results in the data complexity space defined by the measures, coding them as intervals. We then derive rules from the intervals that describe both good and bad behaviors of C4.5 and PART for the different preprocessing approaches, thus obtaining a complete characterization of the data sets and of the differences between the oversampling and undersampling results.
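The sketch below illustrates, on a one-dimensional toy data set of our own, the two preprocessing families compared here, random oversampling and random undersampling, together with Fisher's discriminant ratio F1, one of the classical data complexity measures (the paper uses a wider set of measures and more elaborate methods such as evolutionary undersampling).

import random

def oversample(data):
    """Replicate random minority examples until every class reaches the majority size."""
    by_cls = {c: [d for d in data if d[1] == c] for c in {y for _, y in data}}
    target = max(len(v) for v in by_cls.values())
    return [d for ex in by_cls.values()
              for d in ex + [random.choice(ex) for _ in range(target - len(ex))]]

def undersample(data):
    """Randomly drop majority examples until every class shrinks to the minority size."""
    by_cls = {c: [d for d in data if d[1] == c] for c in {y for _, y in data}}
    target = min(len(v) for v in by_cls.values())
    return [d for ex in by_cls.values() for d in random.sample(ex, target)]

def fisher_f1(data):
    """Fisher's discriminant ratio for a single feature: (m1 - m2)^2 / (v1 + v2)."""
    groups = list({c: [x for x, y in data if y == c] for c in {y for _, y in data}}.values())
    mean = lambda v: sum(v) / len(v)
    var = lambda v: sum((x - mean(v)) ** 2 for x in v) / len(v)
    a, b = groups
    return (mean(a) - mean(b)) ** 2 / (var(a) + var(b))

data = ([(random.gauss(0.0, 1.0), 0) for _ in range(95)]
        + [(random.gauss(2.0, 1.0), 1) for _ in range(5)])
print("F1 on raw data:        ", round(fisher_f1(data), 3))
print("F1 after oversampling: ", round(fisher_f1(oversample(data)), 3))
print("F1 after undersampling:", round(fisher_f1(undersample(data)), 3))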
94.
A non-intrusive estimation for high-quality Internet TV services
This paper presents a non-intrusive estimator for Internet TV services based on streaming technology. By analyzing the video packets received by the client application, the estimator selects the most suitable encoding bitrate for the available bandwidth on the end-to-end path. The estimator has been integrated into a real client/server architecture and evaluated under different network traffic conditions. The evaluation shows that stability and adaptation speed are the main strengths of our proposal.
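As a hedged illustration of the idea (the bitrate ladder, smoothing factor and class name below are our own assumptions, not values from the paper), a passive estimator of this kind can be sketched as follows: the client inspects only the arrival time and size of received packets, maintains a smoothed throughput estimate, and suggests the highest encoding bitrate that fits within it.

BITRATE_LADDER_KBPS = [300, 700, 1500, 3000, 6000]   # hypothetical encoding bitrates
ALPHA = 0.2                                          # EWMA smoothing factor

class NonIntrusiveEstimator:
    def __init__(self):
        self.last_arrival = None
        self.throughput_kbps = 0.0

    def on_packet(self, arrival_s, size_bytes):
        """Update the passive throughput estimate from one received video packet."""
        if self.last_arrival is not None and arrival_s > self.last_arrival:
            instant = size_bytes * 8 / 1000 / (arrival_s - self.last_arrival)
            self.throughput_kbps = (1 - ALPHA) * self.throughput_kbps + ALPHA * instant
        self.last_arrival = arrival_s

    def suggested_bitrate(self, safety=0.8):
        """Highest encoding bitrate below a safety fraction of the estimated bandwidth."""
        fitting = [b for b in BITRATE_LADDER_KBPS if b <= safety * self.throughput_kbps]
        return max(fitting) if fitting else min(BITRATE_LADDER_KBPS)

# Example: 1400-byte packets arriving every 5 ms, roughly 2.24 Mbit/s of goodput.
est = NonIntrusiveEstimator()
for i in range(200):
    est.on_packet(arrival_s=i * 0.005, size_bytes=1400)
print(est.suggested_bitrate())   # expected: 1500 (kbit/s)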
95.
In this paper we generalize the Continuous Adversarial Queuing Theory (CAQT) model (Blesa et al. in MFCS, Lecture Notes in Computer Science, vol. 3618, pp. 144–155, 2005) by considering the possibility that the router clocks in the network are not synchronized. We name the new model Non Synchronized CAQT (NSCAQT). Clearly, this extension only affects those scheduling policies that use some form of timing. In a first approach we consider the case in which, although not synchronized, all clocks run at the same speed, maintaining constant differences. In this case we show that all universally stable policies in CAQT that use the injection time and the remaining path to schedule packets remain universally stable. These policies include, for instance, Shortest in System (SIS) and Longest in System (LIS). We then study the case in which clock differences can vary over time, but the maximum difference is bounded. In this model we show the universal stability of two families of policies related to SIS and LIS respectively (the priority of a packet in these policies depends on its arrival time and a function of the path traversed). The bounds we obtain in this case depend on the maximum difference between clocks. This is a necessary requirement, since we also show that LIS is not universally stable in systems without bounded clock differences. We then present a new policy that we call Longest in Queues (LIQ), which gives priority to the packet that has been waiting the longest in edge queues. This policy is universally stable and, if clocks maintain constant differences, the bounds we prove do not depend on them. Finally, we provide simulation results that compare the behavior of some of these policies in a network with stochastic packet injection.
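A toy sketch of the scheduling policies involved (our own simplification, not the paper's model): each packet carries the injection timestamp written by its source's clock, so a per-node clock offset can distort the order seen by timing-based policies such as LIS, while LIQ relies only on locally measured queueing time.

from dataclasses import dataclass

@dataclass
class Packet:
    injection_time: float     # timestamp written by the injecting node's own clock
    queued_time: float = 0.0  # time accumulated so far in edge queues (used by LIQ)

def next_packet(queue, policy):
    """Return the queued packet the link serves next under the given policy."""
    if policy == "LIS":   # Longest In System: earliest injection timestamp wins
        return min(queue, key=lambda p: p.injection_time)
    if policy == "SIS":   # Shortest In System: latest injection timestamp wins
        return max(queue, key=lambda p: p.injection_time)
    if policy == "LIQ":   # Longest In Queues: most accumulated queueing time wins
        return max(queue, key=lambda p: p.queued_time)
    raise ValueError(policy)

# Node B's clock runs 0.3 time units ahead of node A's. The packet B injected at
# (true) time 0.9 therefore carries timestamp 1.2, and LIS serves A's packet first
# even though B's has really been in the system longer; LIQ is unaffected.
offset_b = 0.3
queue = [Packet(injection_time=1.0),                               # injected by A at t = 1.0
         Packet(injection_time=0.9 + offset_b, queued_time=2.5)]   # injected by B at t = 0.9
for policy in ("LIS", "SIS", "LIQ"):
    print(policy, "-> serves the packet stamped", next_packet(queue, policy).injection_time)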
96.
97.
We provide Stochastic Concurrent Constraint Programming (sCCP), a stochastic process algebra based on CCP, with a semantics in terms of hybrid automata. We associate with each sCCP program both a stochastic and a non-deterministic hybrid automaton. We then compare such automata with the standard stochastic semantics (given by a Continuous Time Markov Chain) and with the one based on ordinary differential equations, obtained by a fluid-flow approximation technique. We discuss two case studies in detail, the Repressilator and the Circadian Clock, with particular regard to the robustness exhibited by the different semantic models and to the effect of discreteness on the dynamical evolution of such systems.
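To illustrate the gap between the two semantics on a much simpler model than the paper's case studies, the following sketch (entirely our own toy example) simulates a birth-death process both as a CTMC, with Gillespie's exact algorithm, and as its fluid-flow ODE approximation integrated with Euler steps; the discreteness of the stochastic trajectory is precisely what the ODE view smooths away.

import random

K1, K2, T_END = 5.0, 0.1, 50.0   # birth rate, per-unit death rate, time horizon

def gillespie(x0):
    """Exact CTMC trajectory of X --K1--> X+1, X --K2*X--> X-1 (Gillespie's algorithm)."""
    t, x = 0.0, x0
    while t < T_END:
        birth, death = K1, K2 * x
        total = birth + death
        t += random.expovariate(total)               # exponential waiting time to the next jump
        x += 1 if random.random() < birth / total else -1
    return x

def fluid_ode(x0, dt=0.01):
    """Fluid-flow approximation dx/dt = K1 - K2*x, integrated with explicit Euler steps."""
    t, x = 0.0, float(x0)
    while t < T_END:
        x += (K1 - K2 * x) * dt
        t += dt
    return x

print("CTMC end state:", gillespie(x0=1))            # an integer, fluctuating run to run
print("ODE  end state:", round(fluid_ode(x0=1), 2))  # converges towards K1 / K2 = 50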
98.
In this work we present a general (mono- and multi-objective) optimization framework for the technological improvement of biochemical systems. The starting point of the method is a mathematical model of the investigated system in ordinary differential equations (ODEs), based on qualitative biological knowledge and quantitative experimental data. The method exploits the special structural features of a family of ODEs called power-law models to reduce the computational complexity of the optimization program. In this way, the genetic manipulation of a biochemical system to meet a certain biotechnological goal can be expressed as an optimization program with desirable properties such as linearity or convexity. The general optimization method is presented and discussed in its linear and geometric programming versions. We furthermore illustrate its use with several real case studies. We conclude that the technological improvement of microorganisms can be achieved by combining mathematical modelling and optimization. The systematic nature of this approach facilitates the redesign of biochemical systems and makes it a predictive exercise rather than a trial-and-error procedure.
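The key structural trick can be illustrated with a toy two-step pathway of our own (the rate laws, bounds and use of scipy.optimize.linprog below are illustrative assumptions, not the paper's case studies): because every flux in a power-law model is a product of power functions, taking logarithms turns the steady-state conditions and fold-change limits into linear constraints, so maximizing a flux becomes a linear program.

import math
from scipy.optimize import linprog   # assumes SciPy is available

# Toy pathway: v_in = E1 * S**0.5 and v_out = E2 * X**0.8, with S fixed at its basal value.
# Decision variables are log fold-changes y1 = ln E1, y2 = ln E2, x = ln X (basal values = 1).
LN5, LN2 = math.log(5), math.log(2)   # at most 5-fold enzyme and 2-fold metabolite changes

res = linprog(
    c=[0, -1, -0.8],                  # maximise ln v_out = y2 + 0.8*x (linprog minimises)
    A_eq=[[1, -1, -0.8]], b_eq=[0],   # steady state: ln v_in = ln v_out  ->  y1 = y2 + 0.8*x
    bounds=[(-LN5, LN5), (-LN5, LN5), (-LN2, LN2)],
)
y1, y2, x = res.x
print("enzyme fold-changes:   ", round(math.exp(y1), 2), round(math.exp(y2), 2))
print("metabolite fold-change:", round(math.exp(x), 2))
print("flux gain:             ", round(math.exp(y2 + 0.8 * x), 2))   # capped at 5x by the upstream bound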
99.
In this work, we propose new techniques to analyze the behavior, the performance, and especially the scalability of High Performance Computing (HPC) applications on different computing architectures. Our final objective is to test applications on a wide range of architectures (real or merely designed) and to scale them to any number of nodes or components. This paper presents a new simulation framework for HPC architectures, called SIMCAN. Its main characteristic is that it can be configured to simulate a wide range of possible architectures involving any number of components. SIMCAN is designed to simulate complete HPC architectures, with special emphasis on the storage and network subsystems. The framework can handle complete components (nodes, racks, switches, routers, etc.) as well as key elements of the storage and network subsystems (disks, caches, sockets, file systems, schedulers, etc.). We also propose several methods to model the behavior of HPC applications, each with its own advantages and drawbacks. In order to evaluate the possibilities and the accuracy of the SIMCAN framework, we executed an HPC application called BIPS3D both on a hardware-based computing cluster and on a modeled environment that represents the real cluster. We also checked the scalability of the application on this kind of architecture by simulating it with an increasing number of computing nodes.
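The following sketch (our own toy model, far simpler than SIMCAN, and assuming the third-party simpy discrete-event library) shows the kind of scalability question such a simulator answers: how the runtime of a parallel application evolves as the number of modelled compute nodes grows when a shared network resource limits scaling.

import simpy

WORK_S, DATA_MB, NET_MBPS = 100.0, 200.0, 1000.0   # illustrative application figures

def process_rank(env, network, nodes):
    """One rank: compute its share of the work, then exchange data over the shared link."""
    yield env.timeout(WORK_S / nodes)                  # compute time shrinks with node count
    with network.request() as slot:                    # the link serves one transfer at a time
        yield slot
        yield env.timeout((DATA_MB * 8) / NET_MBPS)    # transfer time on the shared link

def simulate(nodes):
    env = simpy.Environment()
    network = simpy.Resource(env, capacity=1)
    for _ in range(nodes):
        env.process(process_rank(env, network, nodes))
    env.run()
    return env.now

for nodes in (1, 2, 4, 8, 16, 32):
    print(f"{nodes:3d} nodes -> simulated runtime {simulate(nodes):7.2f} s")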
100.
This work presents a methodology to formally model and build three-dimensional interaction tasks in virtual environments using three tools: Petri Nets, the Interaction Technique Decomposition taxonomy, and Object-Oriented techniques. User operations in the virtual environment are represented as Petri Net nodes; when linked, these nodes represent the stages of the interaction process. In our methodology, places represent all the states an application can reach, transitions define the conditions to start an action, and tokens embody the data manipulated by the application. As a result of this modeling process we automatically generate the core of the application's source code, and we use a Petri Net execution library to run it. To facilitate the application modeling, we have adapted Dia, a well-known graphical diagram editor, to support Petri Net creation and code generation. The integration of these approaches results in a modular application, based on the Petri Net formalism, that allows the specification of an interaction task and the reuse of developed blocks in new virtual environment projects.
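A minimal Petri net sketch (our own illustration, not the code the methodology generates) makes the mapping concrete: places are the states the interaction can reach, transitions are the conditions that start an action, and tokens are consumed and produced as the user advances through a simple 3D selection task.

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)          # place -> number of tokens
        self.transitions = {}                 # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        """Consume one token from each input place, produce one in each output place."""
        if not self.enabled(name):
            raise RuntimeError(f"transition '{name}' is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Toy interaction task: the user's virtual hand touches an object, then grabs it.
net = PetriNet({"idle": 1, "object_free": 1})
net.add_transition("touch_object", inputs=["idle"], outputs=["touching"])
net.add_transition("grab", inputs=["touching", "object_free"], outputs=["holding"])

net.fire("touch_object")
net.fire("grab")
print(net.marking)    # {'idle': 0, 'object_free': 0, 'touching': 0, 'holding': 1}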