3001 results found, search time 15 ms
31.
This paper presents a new design for the head sub-system of the humanoid robot iCub. iCub is a humanoid robot platform conceived as a result of the European Project RobotCub. The mechanical problems of the current head sub-system of the iCub robot have been studied in order to identify improvements for a new design. The feasibility of the new design has been checked through dynamic simulations of the head operations. The results show smooth response curves and suitable values of the key kinematic and dynamic parameters of the head structure.
32.
The alcohol lost by evaporation during the bioethanol fermentation process may be collected and recovered using an absorption column. The same equipment is also used to treat the carbon dioxide gas, a by-product of the sugar cane fermentation. In the present work, the development of nonlinear feedforward–feedback controllers, based on neural network inverse models, was proposed and tested to manipulate the absorbent flow rates. The control purposes are: to keep a low ethanol concentration in the effluent gas phase from the first absorption column (ethanol recovery column); and to reduce the residual water concentration in the CO2 gas effluent from the second tower (CO2 treatment column). Based on simulation studies, the neural network (ANN) controller performance was compared with that of a conventional PID control scheme. The best ANN architecture was set up according to the Foresee and Hagan (1997) criterion, while the PID parameters were found from the well-known Cohen–Coon equations and trial-and-error fine tuning. Initially, performances were evaluated for the system without concentration measurement uncertainties. In these tests, the ANN controller presented the smallest response time and overshoot for both regulator and servo problems. Three uncertainty levels were then applied: 5%, 10%, and 15%. The ANN controller outperformed the PID at all uncertainty levels tested for the ethanol recovery column. For the CO2 treatment column, the ANN controller performed successfully under uncertainties of 5% and 10%, while the PID did not cope properly with uncertainties above 5%. The statistical F-test, besides the ITAE, ISE, and IAE performance criteria, was calculated for both controller applications and compared; the results confirm the superiority of the ANN control scheme. Appropriate use of the proposed well-controlled absorption columns increases the efficiency of the bioethanol production plant and can also provide carbon credits by avoiding CO2 emission into the atmosphere.
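The ITAE, ISE and IAE criteria used to compare the controllers are standard integrals of the error signal. A minimal sketch of how they can be computed from a sampled error trajectory (the decaying error signal below is purely hypothetical, not data from the paper):

```python
import numpy as np

def _integrate(y, t):
    # trapezoidal rule (kept explicit to avoid NumPy version differences)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

def control_criteria(t, e):
    """ISE, IAE and ITAE of an error trajectory e(t)."""
    ise = _integrate(e ** 2, t)          # integral of squared error
    iae = _integrate(np.abs(e), t)       # integral of absolute error
    itae = _integrate(t * np.abs(e), t)  # time-weighted absolute error
    return ise, iae, itae

# hypothetical exponentially decaying setpoint error
t = np.linspace(0.0, 10.0, 1001)
e = np.exp(-t)
ise, iae, itae = control_criteria(t, e)
```

ITAE weights late errors most heavily, which is why it is often preferred for judging how quickly a controller settles.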
33.
Context: Model-Driven Development (MDD) is an alternative approach for information systems development. The basic underlying concept of this approach is the definition of abstract models that can be transformed to obtain models closer to the implementation. One fairly widespread proposal in this sphere is that of Model Driven Architecture (MDA). Business process models are abstract models which additionally contain key information about the tasks that are carried out to achieve the company’s goals, and two notations currently exist for modelling business processes: the Unified Modelling Language (UML), through activity diagrams, and the Business Process Modelling Notation (BPMN). Objective: Our research is particularly focused on security requirements, in such a way that security is modelled along with the other aspects included in a business process. To this end, in earlier works we defined a metamodel called secure business process (SBP), which may assist in the process of developing software as a source of highly valuable requirements (including very abstract security requirements) that are transformed into models with a lower abstraction level, such as analysis class diagrams and use case diagrams, through the approach presented in this paper. Method: We have defined all the transformation rules necessary to obtain analysis class diagrams and use case diagrams from SBP, and refined them through the characteristic iterative process of the action-research method. Results: We have obtained a set of rules and a checklist that make it possible to automatically obtain a set of UML analysis classes and use cases, starting from SBP models. Our approach has additionally been applied in a real environment in the area of the payment of electrical energy consumption. Conclusions: The application of our proposal shows that our semi-automatic process can be used to obtain a set of useful artifacts for software development processes.
34.
This paper presents a solution to the problem of unsupervised classification of dynamic obstacles in urban environments. A track-based model is introduced for the integration of two-dimensional laser and vision information that provides a robust spatiotemporal synthesis of the sensed moving obstacles and forms the basis for suitable algorithms to perform unsupervised classification by clustering. This work makes several contributions towards accurate and efficient performance, initially using laser tracks for classification and then incorporating visual tracks into the model. A procedure is proposed for accurate unsupervised classification of dynamic obstacles using a laser stamp representation of the tracks. Laser data are then integrated with visual information through a single-instance visual stamp representation, which is finally extended using a multiple-instance framework to deal robustly with the challenges of perception in real-world scenarios. The proposed algorithms are extensively validated in a simulated environment. Experiments with a research vehicle in an urban environment demonstrate the performance of the approach with real data. The experimental results reach an accuracy of more than 92% for obstacle classification, finding the clusters that correspond to the main obstacle classes in the data. © 2010 Wiley Periodicals, Inc.
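The paper's clustering operates on laser/visual stamp representations of tracks. As a generic illustration of unsupervised classification by clustering (not the authors' actual algorithm), a plain k-means over hypothetical 2-D track descriptors:

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Plain k-means; centers seeded with evenly spaced data points
    (a real system would use k-means++ or similar)."""
    idx = np.linspace(0, len(X) - 1, k).astype(int)
    centers = X[idx].astype(float)
    for _ in range(iters):
        # assign every point to its nearest center, then recompute means
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# hypothetical 2-D track descriptors (e.g. extent, speed) for two
# well-separated obstacle classes
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)),
               rng.normal(3.0, 0.1, (20, 2))])
labels, centers = kmeans(X, 2)
```

With well-separated classes the cluster labels recover the class membership exactly; the hard part in practice, which the paper addresses, is building track features for which this separation holds.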
35.
The objective of this paper is to present an overall approach to forecasting the future position of the moving objects in an image sequence after processing the images preceding it. The proposed method makes use of classical techniques such as optical flow, to extract the objects’ trajectories and velocities, and autoregressive algorithms, to build the predictive model. Our method can be used in a variety of applications where video from stationary cameras is available and the moving objects are not deformed but change their position over time. One such application is traffic control, which is used in this paper as a case study under different meteorological conditions.
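The autoregressive step can be sketched as follows, assuming a single 1-D coordinate per object and a least-squares fit of the AR coefficients (the model order is not specified in the abstract; order 2 is used here for illustration, and the helper names are hypothetical):

```python
import numpy as np

def fit_ar(x, order=2):
    """Least-squares fit of AR coefficients: x[t] ~ sum_k a[k] * x[t-1-k]."""
    rows = [x[i:i + order][::-1] for i in range(len(x) - order)]
    X = np.array(rows)        # each row holds the `order` previous samples
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

def predict_next(x, a):
    """One-step-ahead forecast from the last `order` observations."""
    order = len(a)
    return float(np.dot(a, x[-1:-order - 1:-1]))

# hypothetical object moving at constant velocity along one axis;
# AR(2) captures this exactly as x[t] = 2*x[t-1] - x[t-2]
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
a = fit_ar(x, order=2)
next_pos = predict_next(x, a)
```

For 2-D image positions, the same fit would simply be applied per coordinate.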
36.
Deduplication is the task of identifying the entities in a data set which refer to the same real-world object. Over the last decades, this problem has been largely investigated and many techniques have been proposed to improve the efficiency and effectiveness of deduplication algorithms. As data sets become larger, such algorithms may face critical bottlenecks regarding memory usage and execution time. In this context, cloud computing environments have been used for scaling out data quality algorithms. In this paper, we investigate the efficacy of different machine learning techniques for scaling out virtual clusters for the execution of deduplication algorithms under predefined time restrictions. We also propose specific heuristics (Best Performing Allocation, Probabilistic Best Performing Allocation, Tunable Allocation, Adaptive Allocation and Sliced Training Data) which, together with the machine learning techniques, are able to tune the virtual cluster estimations as demands fluctuate over time. The experiments we have carried out using data sets of multiple scales have provided many insights regarding the adequacy of the considered machine learning algorithms and proposed heuristics for tackling cloud computing provisioning.
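As a toy illustration of the deduplication task itself (not of the paper's provisioning heuristics), a naive quadratic comparison using Jaccard token similarity; the records and threshold are hypothetical:

```python
from itertools import combinations

def jaccard(a, b):
    """Token-set Jaccard similarity of two strings, case-insensitive."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def deduplicate(records, threshold=0.6):
    """Flag index pairs whose token overlap exceeds the threshold.
    Real systems add blocking/indexing to avoid this O(n^2) loop --
    exactly the cost that motivates scaling out on virtual clusters."""
    return [(i, j)
            for (i, ri), (j, rj) in combinations(enumerate(records), 2)
            if jaccard(ri, rj) >= threshold]

records = ["ACME Corp New York", "Acme corp new york", "Globex Ltd London"]
pairs = deduplicate(records)
```

The quadratic pairwise loop is what makes memory and execution time the bottlenecks the paper targets.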
37.
In this paper we consider the problem of identifying the most influential (or central) group of nodes (of some predefined size) in a network. Such a group has the largest value of betweenness centrality or one of its variants, for example, the length-scaled or the bounded-distance betweenness centralities. We demonstrate that this problem can be modelled as a mixed integer program (MIP) that can be solved for reasonably sized network instances using off-the-shelf MIP solvers. We also discuss interesting relations between the group betweenness and the bounded-distance betweenness centrality concepts. In particular, we exploit these relations in an algorithmic scheme to identify approximate solutions for the original problem of identifying the most central group of nodes. Furthermore, we generalize our approach to the identification of not only the most central groups of nodes, but also central groups of graph elements that consist of either nodes or edges exclusively, or their combination according to some pre-specified criteria. If necessary, additional cohesiveness properties can also be enforced, for example, the targeted group should form a clique or a κ-club. Finally, we conduct extensive computational experiments with different types of real-life and synthetic network instances to show the effectiveness and flexibility of the proposed framework. Even more importantly, our experiments reveal some interesting insights into the properties of influential groups of graph elements modelled using the maximum betweenness centrality concept or one of its variations.
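The group betweenness concept can be made concrete with a dependency-free brute-force baseline (the paper solves the problem with a MIP; exhaustive enumeration is only viable for tiny instances). For an unweighted graph, the group score below sums, over node pairs outside the group, the fraction of shortest paths that pass through at least one group member:

```python
from collections import deque
from itertools import combinations

def bfs_counts(adj, s):
    """Distances and shortest-path counts from s (unweighted BFS)."""
    dist, sigma = {s: 0}, {s: 1}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                sigma[v] = 0
                q.append(v)
            if dist[v] == dist[u] + 1:
                sigma[v] += sigma[u]
    return dist, sigma

def group_betweenness(adj, group):
    """Sum over pairs s,t outside `group` of the fraction of shortest
    s-t paths hitting the group (unnormalized group betweenness)."""
    group = set(group)
    sub = {u: [v for v in nbrs if v not in group]
           for u, nbrs in adj.items() if u not in group}
    outside = sorted(sub)
    total = 0.0
    for i, s in enumerate(outside):
        dist, sigma = bfs_counts(adj, s)    # paths in the full graph
        dist2, sigma2 = bfs_counts(sub, s)  # paths avoiding the group
        for t in outside[i + 1:]:
            if t not in dist:
                continue
            avoid = sigma2.get(t, 0) if dist2.get(t) == dist[t] else 0
            total += (sigma[t] - avoid) / sigma[t]
    return total

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}  # path 0-1-2-3-4
best = max(combinations(adj, 2), key=lambda g: group_betweenness(adj, g))
```

On the toy path graph, the non-adjacent pair {1, 3} intercepts every shortest path between the remaining nodes, which illustrates why the most central group is generally not just the top-k individually central nodes.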
38.
This paper presents an optofluidic device for cell discrimination with two independent interrogation regions. Pumping light is coupled to the device, and cell fluorescence is extracted from the two interrogation zones by using optical fibers embedded in the optofluidic chip. To test the reliability of this device, AU-565 cells—expressing EpCAM and HER2 receptors—and RAMOS cells were mixed in a controlled manner, confined inside a hydrodynamic focused flow in the microfluidic chip and detected individually so that they could be discriminated as positive (signal reception from fluorescently labeled antibodies from the AU-565 cells) or negative events (RAMOS cells). A correlation analysis of the two signals reduces the influence of noise on the overall data.
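The correlation analysis of the two zone signals can be illustrated with a Pearson coefficient over synthetic traces (pulse shape and noise level are hypothetical): a genuine cell event appears in both interrogation zones and correlates strongly, while uncorrelated detector noise does not:

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length signals."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

# hypothetical fluorescence traces: one Gaussian burst (a cell transit)
# seen at both interrogation zones, plus independent detector noise
rng = np.random.default_rng(0)
t = np.arange(500)
pulse = np.exp(-0.5 * ((t - 250) / 10.0) ** 2)
zone1 = pulse + 0.1 * rng.normal(size=t.size)
zone2 = pulse + 0.1 * rng.normal(size=t.size)
noise_only = 0.1 * rng.normal(size=t.size)

r_event = pearson(zone1, zone2)   # high: same event in both zones
r_noise = pearson(zone1, noise_only)  # near zero: noise is uncorrelated
```

Thresholding such a correlation is one simple way to reject noise-only events when counting positives.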
39.
Simultaneous aligning and smoothing of surface triangulations
In this work we develop a procedure to deform a given surface triangulation to obtain its alignment with interior curves. These curves are defined by splines in a parametric space and, subsequently, mapped to the surface triangulation. We have restricted our study to orthogonal mapping, so we require the curves to be included in a patch of the surface that can be orthogonally projected onto a plane (our parametric space). For example, the curves can represent interfaces between different materials or boundary conditions, internal boundaries or feature lines. Another setting in which this procedure can be used is the adaptation of a reference mesh to changing curves in the course of an evolutionary process. Specifically, we propose a new method that moves the nodes of the mesh, maintaining its topology, in order to achieve two objectives simultaneously: the piecewise approximation of the curves by edges of the surface triangulation and the optimization of the resulting mesh. We designate this procedure the projecting/smoothing method; it is based on the smoothing technique that we have introduced for surface triangulations in previous works. The mesh quality improvement is obtained by an iterative process where each free node is moved to a new position that minimizes a certain objective function. The minimization process is done on the parametric plane, attending to the piecewise approximation of the surface and to an algebraic quality measure (mean ratio) of the set of triangles that are connected to the free node. In this way, the 3-D local projecting/smoothing problem is reduced to a 2-D optimization problem. Several applications of this method are presented.
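One common algebraic form of the mean-ratio quality measure for a planar triangle (the paper's exact formulation may differ) is q = 4·sqrt(3)·A / (l1² + l2² + l3²), which equals 1 for an equilateral triangle and tends to 0 as the triangle degenerates:

```python
import math

def mean_ratio(p0, p1, p2):
    """Mean-ratio quality of a 2-D triangle: 1 for equilateral,
    approaching 0 for a degenerate (sliver) triangle."""
    def d2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    # twice the signed area via the cross product
    area2 = ((p1[0] - p0[0]) * (p2[1] - p0[1])
             - (p2[0] - p0[0]) * (p1[1] - p0[1]))
    s = d2(p0, p1) + d2(p1, p2) + d2(p2, p0)  # sum of squared edge lengths
    return 2.0 * math.sqrt(3.0) * area2 / s

equilateral = mean_ratio((0, 0), (1, 0), (0.5, math.sqrt(3) / 2))
sliver = mean_ratio((0, 0), (1, 0), (0.5, 0.01))
```

An objective function that grows as this measure shrinks pushes each free node away from configurations with near-degenerate triangles, which is the role the mean ratio plays in the smoothing step.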
40.
There has been an explosion in the types, availability and volume of data accessible in an information system, thanks to the World Wide Web (the Web) and related inter-networking technologies. In this environment, there is a critical need to replace or complement earlier database integration approaches and current browsing and keyword-based techniques with concept-based approaches. Ontologies are increasingly becoming accepted as an important part of any concept- or semantics-based solution, and there is increasing realization that any viable solution will need to support multiple ontologies that may be independently developed and managed. In particular, we consider the use of concepts from pre-existing real-world domain ontologies for describing the content of the underlying data repositories. The most challenging issue in this approach is that of vocabulary sharing, which involves dealing with the use of different terms or concepts to describe similar information. In this paper, we describe the architecture, design and implementation of the OBSERVER system. Brokering across the domain ontologies is enabled by representing and utilizing interontology relationships such as (but not limited to) synonyms, hyponyms and hypernyms across terms in different ontologies. User queries are rewritten by using these relationships to obtain translations across ontologies. Well-established metrics like precision and recall, based on the extensions underlying the concepts, are used to estimate the loss of information, if any.
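The precision and recall metrics used to estimate loss of information compare the extension returned by a translated query with the extension of the original concept. A minimal sketch with hypothetical answer sets (the document identifiers are illustrative, not from the paper):

```python
def precision_recall(retrieved, relevant):
    """Precision and recall of a translated query's answer set against
    the answers the original query would return."""
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)  # answers shared by both extensions
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    return precision, recall

# hypothetical extensions: rewriting a concept via a hyponym returns
# only part of the original concept's instances, plus one spurious hit
original = {"doc1", "doc2", "doc3", "doc4"}
rewritten = {"doc1", "doc2", "doc5"}
p, r = precision_recall(rewritten, original)
```

A recall below 1.0 quantifies the information lost by the translation; a precision below 1.0 quantifies the noise it introduced.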