101.
Cloud technologies can provide elasticity to real-time audio and video (A/V) collaboration applications. However, cloud-based collaboration solutions generally operate on a best-effort basis, with neither delivery nor quality guarantees, while high-quality, business-focused solutions rely on dedicated infrastructure and hardware-based components. This article describes two years of research in the EMD project, which aims to migrate a hardware-based, business-focused A/V collaboration solution to a software-based platform hosted in the cloud, providing higher levels of elasticity and reliability. Our focus during this period was an educational collaboration scenario with teachers and students (either locally present in the classroom or following the classes remotely). A model of collaboration streaming (e.g. network topology, codecs, streams, streaming workflow, software components) is defined as the basis for software deployment and preemptive VM allocation techniques. These heuristics are evaluated using a version of the CloudSim simulator extended to generate and simulate realistic collaboration scenarios, to manage network congestion, and to monitor, among other metrics, cost and session delay. Our results show that the algorithms reduce costs compared to previously designed approaches while meeting A/V collaboration setup deadlines with 99% effectiveness, a stringent requirement for this collaboration application.
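As an illustration only (not the EMD project's actual components or heuristics, and not the CloudSim API), the following Python sketch shows one way a deadline-aware, preemptive VM allocation step could be organized; all class and field names (VmType, Session, setup_time_s, etc.) are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class VmType:
    name: str
    cost_per_hour: float
    setup_time_s: float        # time to boot and prepare a VM of this type

@dataclass
class Session:
    sid: int
    priority: int              # higher value = more important session
    deadline_s: float          # maximum tolerated A/V setup delay
    vm: Optional[VmType] = None

def allocate(req: Session, idle: List[VmType], running: List[Session]) -> Optional[VmType]:
    """Cheapest-fit allocation with preemption: pick the cheapest idle VM type whose
    setup time meets the request's deadline; otherwise reclaim the VM of the
    lowest-priority running session if the new request outranks it."""
    candidates = [v for v in idle if v.setup_time_s <= req.deadline_s]
    if candidates:
        choice = min(candidates, key=lambda v: v.cost_per_hour)
        idle.remove(choice)
        req.vm = choice
        running.append(req)
        return choice
    victim = min(running, key=lambda s: s.priority, default=None)
    if (victim is not None and victim.priority < req.priority
            and victim.vm is not None and victim.vm.setup_time_s <= req.deadline_s):
        running.remove(victim)
        req.vm, victim.vm = victim.vm, None
        running.append(req)
        return req.vm
    return None   # the deadline cannot be met with the current resource pool
```

A scheduler would call allocate() for each incoming session request, falling back to queueing or scaling out the pool when it returns None.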
102.
Simultaneous aligning and smoothing of surface triangulations
In this work we develop a procedure to deform a given surface triangulation to align it with interior curves. These curves are defined by splines in a parametric space and subsequently mapped to the surface triangulation. We have restricted our study to orthogonal mapping, so we require the curves to be included in a patch of the surface that can be orthogonally projected onto a plane (our parametric space). For example, the curves can represent interfaces between different materials or boundary conditions, internal boundaries, or feature lines. Another setting in which this procedure can be used is the adaptation of a reference mesh to curves that change in the course of an evolutionary process. Specifically, we propose a new method that moves the nodes of the mesh, maintaining its topology, in order to achieve two objectives simultaneously: the piecewise approximation of the curves by edges of the surface triangulation and the optimization of the resulting mesh. We designate this procedure the projecting/smoothing method; it is based on the smoothing technique that we introduced for surface triangulations in previous works. The mesh quality improvement is obtained by an iterative process in which each free node is moved to a new position that minimizes a certain objective function. The minimization is carried out on the parametric plane, attending to the piecewise approximation of the surface and to an algebraic quality measure (mean ratio) of the set of triangles connected to the free node. The 3-D local projecting/smoothing problem is thus reduced to a 2-D optimization problem. Several applications of this method are presented.
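The local optimization step can be pictured with a short sketch: a single free node is moved in the parametric plane so as to improve the mean-ratio quality of its adjacent triangles. This is a minimal illustration, not the authors' projecting/smoothing method, and it omits the curve-approximation term of their objective function; the function and variable names are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def mean_ratio(p0, p1, p2):
    """Mean-ratio quality of a 2-D triangle: 1 for an equilateral triangle,
    approaching 0 as the triangle degenerates."""
    e01, e12, e20 = p1 - p0, p2 - p1, p0 - p2
    area = 0.5 * abs((p1[0] - p0[0]) * (p2[1] - p0[1]) - (p1[1] - p0[1]) * (p2[0] - p0[0]))
    denom = float(np.dot(e01, e01) + np.dot(e12, e12) + np.dot(e20, e20))
    return 4.0 * np.sqrt(3.0) * area / denom if denom > 0.0 else 0.0

def smooth_free_node(free_xy, fixed, triangles):
    """Move one free node in the parametric plane so as to improve the quality of the
    triangles connected to it.  `fixed` maps node ids to fixed 2-D coordinates; each
    entry of `triangles` lists three node ids, with id 0 denoting the free node."""
    def objective(xy):
        xy = np.asarray(xy, dtype=float)
        total = 0.0
        for tri in triangles:
            pts = [xy if i == 0 else np.asarray(fixed[i], dtype=float) for i in tri]
            q = mean_ratio(*pts)
            total += 1.0 / q if q > 1e-12 else 1e12   # heavily penalize degenerate triangles
        return total
    return minimize(objective, np.asarray(free_xy, dtype=float), method="Nelder-Mead").x
```

For instance, with the corners of a unit square as fixed nodes and four triangles fanning around a free node started at (0.4, 0.3), the routine moves the node toward the square's centre, the symmetric optimum of this quality measure.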
103.
This work presents a multi-agent system for knowledge-based high-level event composition, which semantically interprets activities, behaviour and situations in a scenario under multi-sensory monitoring. A structure based on perception agents (a plurisensory agent and a visual agent) is presented. The agents process the sensor information and identify, through the agent decision system, significant changes in the monitored signals, which they send as simple events to the composition agent; this agent searches for and identifies pre-defined patterns as higher-level semantic composed events. The structure is accompanied by a methodology and a set of tools that facilitate its development and its application to different fields without having to start from scratch, providing a general environment for developing knowledge-based event-composition systems. The application task of our work is surveillance, and event composition/inference examples are shown that characterize an alarming situation in the scene and resolve the identification and tracking of people in the monitored scenario.
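To make the idea of event composition concrete, here is a toy sketch (an illustrative assumption, not the paper's agent framework), in which simple events reported by perception agents are matched against a pre-defined ordered pattern within a time window to detect a higher-level composed event:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SimpleEvent:
    source: str        # perception agent that reported the event
    kind: str          # e.g. "motion", "door_open", "sound_spike"
    timestamp: float   # seconds

def compose(events: List[SimpleEvent], pattern: List[str], window_s: float) -> bool:
    """Return True if the event kinds in `pattern` occur in that order within
    `window_s` seconds: a toy stand-in for the composition agent's matching of
    simple events against a pre-defined higher-level pattern."""
    idx, start = 0, None
    for ev in sorted(events, key=lambda e: e.timestamp):
        if start is not None and ev.timestamp - start > window_s:
            idx, start = 0, None            # the partial match timed out
        if ev.kind == pattern[idx]:
            if idx == 0:
                start = ev.timestamp
            idx += 1
            if idx == len(pattern):
                return True                 # composed event detected
    return False
```

A call such as compose(evts, ["door_open", "motion", "motion"], window_s=10.0) would flag, for example, an intrusion-like composed event.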
104.
The increasing volume of eGovernment-related services demands new approaches to service integration and interoperability in this domain. Semantic web (SW) technologies and applications can leverage the potential of eGovernment service integration and discovery, thus tackling the semantic heterogeneity that characterizes eGovernment information sources as well as the different levels of interoperability. eGovernment services will therefore be semantically described in the foreseeable future. In an environment with semantically annotated services, software agents are essential as the entities responsible for exploiting the semantic content in order to automate tasks and so enhance the user's experience. In this paper, we present a framework that provides a seamless integration of semantic web services and intelligent-agent technologies, making use of ontologies to facilitate their interoperation. The proposed framework can assist in the development of powerful and flexible distributed systems in complex, dynamic, heterogeneous, unpredictable and open environments. Our approach is backed up by a proof-of-concept implementation in which the integration of disparate eGovernment services has been tested.
105.
The present work addresses two of the major difficulties encountered when estimating the local orientation of the data in a scene, a task usually accomplished by computing the structure-tensor-based directional field. On the one hand, orientation information only exists in the non-homogeneous regions of the dataset and vanishes in areas where the intensity remains constant (i.e. where the gradient, the first-order intensity variation, is zero). Owing to this lack of information, there are many cases in which the overall shape of the represented objects cannot be precisely inferred from the directional field. On the other hand, the orientation estimate depends strongly on the particular choice of the averaging window used in its computation (since a collection of neighboring gradient vectors is needed to obtain a dominant orientation), typically yielding vector fields that range from very irregular (and thus noisy) to very uniform (but at the expense of a loss of angular resolution). The proposed solution to both drawbacks is the regularization of the directional field; this process smoothly extends the previously computed vectors to the whole dataset while preserving the angular information of relevant structures. To this end, the paper introduces a suitable mathematical framework and deals with the d-dimensional variational formulation derived from it. The proposed formulation is finally translated into the frequency domain in order to gain insight into the regularization problem, which can be understood as a low-pass filtering of the directional field. The frequency-domain point of view also allows for an efficient implementation of the resulting iterative algorithm. Simulation experiments involving datasets of different dimensionality prove the validity of the theoretical approach.
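A minimal numerical sketch of the underlying idea for the 2-D case, assuming a standard structure-tensor orientation estimate followed by a Gaussian low-pass regularization of the energy-weighted double-angle field (a simplification for illustration, not the authors' variational algorithm):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def directional_field(image, sigma_tensor=2.0, sigma_reg=8.0):
    """Structure-tensor orientation estimate followed by a simple regularization:
    the double-angle representation is smoothed (a low-pass filtering of the
    directional field), weighted by the local gradient energy so that orientation
    spreads from structured regions into homogeneous ones."""
    gy, gx = np.gradient(image.astype(float))
    # Structure tensor components, averaged over a local window.
    jxx = gaussian_filter(gx * gx, sigma_tensor)
    jyy = gaussian_filter(gy * gy, sigma_tensor)
    jxy = gaussian_filter(gx * gy, sigma_tensor)
    theta = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)   # dominant orientation (mod pi)
    energy = jxx + jyy                               # ~0 in homogeneous regions
    # Regularization: low-pass filter the energy-weighted double-angle field.
    c = gaussian_filter(energy * np.cos(2.0 * theta), sigma_reg)
    s = gaussian_filter(energy * np.sin(2.0 * theta), sigma_reg)
    return 0.5 * np.arctan2(s, c)                    # regularized orientation field
```

The choice of sigma_reg plays the role of the regularization strength: larger values extend the orientation further into homogeneous regions at the cost of angular resolution.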
106.
The operating principle of the mass-controlled capillary viscometer is presented for a Newtonian liquid. The derived equation for the temporal change of the mass of a liquid column draining under gravity through a discharge capillary tube accounts self-consistently for the inertial convective term associated with the acceleration effect. The viscosity of water measured at different temperatures using the new approach is in good agreement with literature data.
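For orientation, a minimal quasi-steady sketch (ignoring the inertial convective correction that the paper accounts for, and using generic symbols R, L, A, h(t), rho, g, eta that are not necessarily the authors' notation) combines a mass balance with the Hagen–Poiseuille law for a liquid column of cross-section A and head h(t) draining through a capillary of radius R and length L:

\[
\frac{dm}{dt} = -\rho\,Q, \qquad
Q = \frac{\pi R^{4}\,\Delta P}{8\,\eta\,L}, \qquad
\Delta P \approx \rho\,g\,h(t), \qquad
m = \rho\,A\,h(t)
\;\;\Longrightarrow\;\;
\frac{dm}{dt} \approx -\,\frac{\pi R^{4}\rho g}{8\,\eta\,L\,A}\,m(t).
\]

In this idealized limit the mass decays exponentially with time constant \(\tau = 8\eta L A/(\pi R^{4}\rho g)\), so \(\eta\) follows from the measured decay of \(m(t)\); the inertial term treated in the paper corrects this balance when the acceleration of the liquid is not negligible.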
107.
Models wagging the dog: are circuits constructed with disparate parameters?
In a recent article, Prinz, Bucher, and Marder (2004) used a database modeling approach to address the fundamental question of whether neural systems are built with a fixed blueprint of tightly controlled parameters or in a way that allows properties to vary widely from one individual to another. Here, we examine their main conclusion, namely that neural circuits are indeed built with widely varying parameters, in the light of our own experimental and modeling observations. We critically discuss the experimental and theoretical evidence, including the general adequacy of database approaches for questions of this kind, and conclude that the last word on this fundamental question has not yet been spoken.
108.
This paper presents a scheme, and its Field Programmable Gate Array (FPGA) implementation, for an image compression system combining the two-dimensional discrete wavelet transform (2D-DWT) and vector quantization (VQ). The 2D-DWT works in a non-separable fashion, using a parallel filter structure with distributed control to compute two resolution levels. The wavelet coefficients of the higher-frequency sub-bands are vector quantized using a multi-resolution codebook, while those of the lower-frequency sub-band at level two are scalar quantized and entropy encoded. VQ is carried out by self-organizing feature map (SOFM) neural nets operating in the recall phase; codebooks are quickly generated off-line using the same nets operating in the training phase. The complete system, including the 2D-DWT, the multi-resolution-codebook VQ and the statistical encoder, was implemented on a Xilinx Virtex 4 FPGA and is capable of real-time compression of digital video for 512 × 512-pixel grayscale images. It offers high compression quality (PSNR values around 35 dB) and acceptable compression rates (0.62 bpp).
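As a purely software-side reference for such a pipeline (a simplified sketch with a single-level Haar transform and a generic nearest-codeword recall; not the paper's non-separable filter structure, hardware architecture or trained SOFM):

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar DWT: returns the approximation sub-band (LL) and the
    three detail sub-bands (LH, HL, HH). Assumes even image dimensions."""
    a = img[0::2, 0::2].astype(float)
    b = img[0::2, 1::2].astype(float)
    c = img[1::2, 0::2].astype(float)
    d = img[1::2, 1::2].astype(float)
    ll = (a + b + c + d) / 2.0
    lh = (a - b + c - d) / 2.0
    hl = (a + b - c - d) / 2.0
    hh = (a - b - c + d) / 2.0
    return ll, lh, hl, hh

def vq_recall(subband, codebook, block=2):
    """Recall-phase vector quantization: split a detail sub-band into block x block
    vectors and replace each with its nearest codeword. `codebook` has shape
    (n_codes, block*block); dimensions are assumed divisible by `block`."""
    h, w = subband.shape
    out = np.empty_like(subband)
    for i in range(0, h, block):
        for j in range(0, w, block):
            v = subband[i:i+block, j:j+block].reshape(-1)
            k = np.argmin(np.sum((codebook - v) ** 2, axis=1))
            out[i:i+block, j:j+block] = codebook[k].reshape(block, block)
    return out

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

At recall time a trained SOFM reduces to this nearest-codeword search over its weight vectors; the off-line training phase, which builds the codebook, is omitted here.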
109.
The effluent from the anaerobic biological treatment of coffee wet-processing wastewater (CWPW) contains a non-biodegradable compound that must be treated before the effluent is discharged into a water source. In this paper, the wet hydrogen peroxide catalytic oxidation (WHPCO) process using Al-Ce-Fe-PILC catalysts was investigated as a post-treatment for CWPW and tested in a semi-batch reactor at atmospheric pressure and 25 °C. The Al-Ce-Fe-PILC catalyst achieved a high conversion of total phenolic compounds (70%) and mineralization to CO2 (50%) after 5 h of reaction time. The chemical oxygen demand (COD) of the coffee processing wastewater was reduced by 66% after wet hydrogen peroxide catalytic oxidation. The combination of the two treatment methods, biological (developed by Cenicafé) and catalytic oxidation with Al-Ce-Fe-PILC, achieved a 97% reduction of the COD of CWPW. Therefore, WHPCO using Al-Ce-Fe-PILC catalysts is a viable alternative for the post-treatment of coffee processing wastewater.
110.
Market liquidity is a major concern in financial risk management. In a perfectly liquid market, the option pricing model becomes the well-known linear Black–Scholes problem. Nonlinear models appear when transaction costs or illiquid-market effects are taken into account. This paper deals with the numerical analysis of nonlinear Black–Scholes equations that model illiquid markets in which price impact in the underlying asset market affects the replication of a European contingent claim. Numerical analysis of a nonlinear model is necessary because a good mathematical model can be wasted by an unreliable numerical treatment. In this paper we propose a finite-difference numerical scheme that guarantees positivity of the solution as well as stability and consistency.
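For orientation, the linear limit mentioned above can be solved with a few lines of an explicit finite-difference code; this is only the baseline that the nonlinear illiquid-market models reduce to, not the positivity-preserving scheme proposed in the paper, and the parameter values are arbitrary.

```python
import numpy as np

def bs_explicit_call(K=100.0, r=0.05, sigma=0.2, T=1.0, s_max=300.0, ns=300, nt=20000):
    """Explicit finite-difference solution of the *linear* Black-Scholes equation for a
    European call, written in time-to-maturity form
        V_tau = 0.5*sigma^2*S^2*V_SS + r*S*V_S - r*V,
    marched forward in tau from the payoff."""
    ds = s_max / ns
    dt = T / nt                      # nt chosen large enough for explicit stability
    s = np.linspace(0.0, s_max, ns + 1)
    v = np.maximum(s - K, 0.0)       # payoff at tau = 0 (i.e. at t = T)
    for n in range(1, nt + 1):
        v_ss = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / ds**2
        v_s = (v[2:] - v[:-2]) / (2.0 * ds)
        v[1:-1] += dt * (0.5 * sigma**2 * s[1:-1]**2 * v_ss
                         + r * s[1:-1] * v_s - r * v[1:-1])
        v[0] = 0.0                               # the call is worthless at S = 0
        v[-1] = s_max - K * np.exp(-r * n * dt)  # deep in-the-money asymptote
    return s, v
```

The explicit step used here must satisfy the usual stability restriction of roughly dt ≲ ds²/(σ²·s_max²), which is one reason carefully constructed schemes with guaranteed positivity, stability and consistency are of interest for the nonlinear case.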