Sort order: 920 results in total (search time: 15 ms)
21.
We propose an approach for interactive 3D face editing based on deep generative models. Most current face modeling methods are linear and cannot express complex, non-linear deformations. In contrast to 3D morphable face models based on Principal Component Analysis (PCA), we introduce a novel architecture based on variational autoencoders. Our architecture has multiple encoders (one for each part of the face, such as the nose and mouth) that feed a single decoder, so each sub-vector of the latent vector represents one part. We train our model with a novel loss function that further disentangles the latent space according to the different parts of the face. The output of the network is a complete 3D face; hence, unlike part-based PCA methods, our model learns to merge the parts intrinsically and does not require an additional merging step. To achieve interactive face modeling, we optimize the latent variables given vertex positional constraints provided by a user. To avoid unwanted global changes elsewhere on the face, we optimize only the subset of the latent vector that corresponds to the part being modified. The editing optimization converges in less than a second. Our results show that the proposed approach supports a broader range of editing constraints than linear, PCA-based models and generates more realistic 3D faces.
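A minimal sketch of the constrained editing step described above, assuming a PyTorch `decoder` that maps a full latent vector to an (N, 3) tensor of vertex positions; the function and argument names are illustrative, not taken from the paper:

```python
# Sketch only: part-wise latent optimization for interactive editing.
import torch

def edit_part(decoder, z, part_slice, constrained_ids, target_positions,
              steps=200, lr=0.05):
    """Optimize only z[part_slice] so the decoded face meets the user's vertex constraints."""
    z = z.clone().detach()
    z_part = z[part_slice].clone().requires_grad_(True)   # only this sub-vector is free
    optimizer = torch.optim.Adam([z_part], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        z_full = z.clone()
        z_full[part_slice] = z_part                        # other parts stay fixed
        vertices = decoder(z_full)                         # whole 3D face, shape (N, 3)
        loss = ((vertices[constrained_ids] - target_positions) ** 2).mean()
        loss.backward()
        optimizer.step()
    z[part_slice] = z_part.detach()
    return z
```

Restricting the optimization to `z[part_slice]` is what keeps the edit local to one face part, mirroring the part-wise latent structure the abstract describes.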
22.
Full-field identification methods are increasingly used to identify the constitutive parameters that describe the mechanical behavior of materials. This paper compares the more recently introduced one-step method of integrated digital image correlation (IDIC) with the most commonly used two-step method of finite element model updating (FEMU), which relies on a subset-based DIC algorithm. To keep the comparison objective, both methods are implemented as equivalently as possible and use the same FE model. Various virtual test cases are studied to assess the performance of both methods when subjected to different error sources: (1) systematic errors, (2) poor initial guesses for the constitutive parameters, (3) image noise, (4) constitutive model errors, and (5) experimental errors. The results show that, despite the mathematical similarity of the two methods, IDIC produces more accurate and more reliable results than FEMU, particularly for the more challenging test cases exhibiting small displacements, complex kinematics, specimen misalignment, and image noise.
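To make the distinction concrete, a schematic formulation (my notation, not the paper's): IDIC identifies the constitutive parameters $\theta$ directly from the gray-level residual between the reference image $f$ and the deformed image $g$, using the FE-computed displacement field $u(x;\theta)$, whereas FEMU first measures displacements by subset-based DIC and then minimizes a displacement residual:

$$ \theta_{\mathrm{IDIC}} = \arg\min_{\theta} \sum_{x \in \mathrm{ROI}} \big[ f(x) - g\big(x + u(x;\theta)\big) \big]^2, \qquad \theta_{\mathrm{FEMU}} = \arg\min_{\theta} \sum_{x} \big\| u_{\mathrm{DIC}}(x) - u_{\mathrm{FE}}(x;\theta) \big\|^2 . $$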
23.
Real-time finite-state systems may be specified in linear logic by means of linear implications between conjunctions of fixed finite length. In this setting, where time is treated as a dense linear ordering, safety properties can be expressed as certain provability problems. These provability problems are shown to be in PSPACE. They are solvable, with some guidance, by finite proof search in concurrent logic-programming environments that are based on linear logic and act as a kind of model checker. One advantage of our approach is that it either exhibits unsafe runs or actually establishes safety.
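For orientation, a purely illustrative encoding in the spirit described above (my notation, not the paper's actual formulation): a transition of a timed finite-state system from state $s_1$ to state $s_2$ that resets a clock can be written as a linear implication between finite conjunctions of atoms,

$$ s_1 \otimes \mathit{clock}(t) \;\multimap\; s_2 \otimes \mathit{clock}(0), $$

and a safety property then corresponds to the unprovability of any goal containing a designated bad state.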
24.
The goal of this work is to present a causation modeling methodology able to accurately infer blood glucose levels from a large set of highly correlated noninvasive input variables over an extended period of time. Such models can provide insight to improve glucose monitoring and glucose regulation through advanced model-based control technologies. The efficacy of the approach is demonstrated using real data from a type 2 diabetic (T2D) subject collected under free-living conditions over 25 consecutive days. The model was identified and tested using eleven variables, including three food variables as well as several activity and stress variables. It was trained on 20 days of data and validated on the remaining 5 days, giving a fitted correlation coefficient of 0.70 and an average absolute error (AAE), i.e., the mean of the absolute differences between measured and modeled glucose concentrations, of 13.3 mg/dL on the validation data. This AAE was significantly better than the 15.3 mg/dL AAE of the subject's personal glucose meter for replicated measurements.
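For reference, the two validation metrics quoted above can be computed directly from paired measured and predicted glucose values; a minimal sketch (variable names mine):

```python
# Sketch: average absolute error and fitted correlation coefficient for paired
# measured and model-predicted glucose concentrations in mg/dL.
import numpy as np

def validation_metrics(measured, predicted):
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    aae = np.mean(np.abs(measured - predicted))        # average absolute error (mg/dL)
    r = np.corrcoef(measured, predicted)[0, 1]         # fitted correlation coefficient
    return aae, r
```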
25.
Research on materials and systems for tunable microwave devices has gained traction in recent years. This second part presents the radio-frequency characterization and component design of tunable microwave components based on dielectric ceramics, especially barium strontium titanate (BST); the basic material properties are discussed in detail in the first part. After a short introduction to the processing technology used to fabricate tunable components based on BST thick films, the relations between microwave properties, material properties, and microstructure are presented in detail. The design process for tunable microwave components based on BST thick films is then described, highlighting in particular the considerations related to micro- and macrostructure and their interplay. The paper closes with two application examples: a reconfigurable array antenna for satellite communication and varactors for high-power applications.
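The abstract quotes no figures of merit, but two quantities commonly used in the BST literature to characterize such materials and components are the relative tunability of the permittivity under a DC bias field and the capacitance tunability of a varactor (standard definitions, not values from this paper):

$$ \tau = \frac{\varepsilon_r(0) - \varepsilon_r(E_{\mathrm{bias}})}{\varepsilon_r(0)}, \qquad n = \frac{C(0)}{C(V_{\mathrm{bias}})} . $$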
26.
New viruses spread faster than ever, and current signature-based detection does not protect against these unknown viruses. Behavior-based detection is the currently preferred defense against unknown viruses; its drawback is that it detects only specific classes of viruses, or detects successfully only under certain conditions, and it produces false positives. This paper presents a characterization of virus replication, the only characteristic guaranteed to be consistently present in all viruses. Two detection models based on virus replication are developed, one using operation-sequence matching and the other using frequency measures, and a regression analysis was performed for both. A safe list is used to minimize false positives. In our testing with operation-sequence matching, over 250 viruses were detected with 43 subsequences, with minimal false negatives; the replication sequence of just one virus detected 130 viruses, 45% of all tested viruses. Our testing with frequency measures detected all test viruses with no false negatives. The paper shows that virus replication can be identified and used to detect both known and unknown viruses.
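A minimal sketch of the operation-sequence matching idea, with hypothetical operation names (the paper's actual 43 replication subsequences are not reproduced here):

```python
# Sketch: flag a process if its operation trace contains any replication subsequence,
# in order, gaps allowed. Operation names and the signature are illustrative only.
def contains_subsequence(trace, signature):
    """True if every operation in `signature` occurs in `trace` in the same order."""
    it = iter(trace)
    return all(op in it for op in signature)

# Illustrative replication-style signature (file-infection pattern), not from the paper:
REPLICATION_SIGNATURES = [
    ("FindFirstFile", "OpenFile", "ReadFile", "CreateFile", "WriteFile"),
]

def is_suspicious(process_name, trace, safe_list=frozenset()):
    if process_name in safe_list:                      # safe list minimizes false positives
        return False
    return any(contains_subsequence(trace, sig) for sig in REPLICATION_SIGNATURES)
```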
27.
Formal Analysis of Multiparty Contract Signing   (Total citations: 1; self-citations: 0; citations by others: 1)
We analyze the multiparty contract-signing protocols of Garay and MacKenzie (GM) and of Baum and Waidner (BW). We use a finite-state tool, Mocha, which allows specification of protocol properties in a branching-time temporal logic with game semantics. While our analysis does not reveal any errors in the BW protocol, in the GM protocol we discover serious problems with fairness for four signers and an oversight regarding abuse-freeness for three signers. We propose a complete revision of the GM subprotocols in order to restore fairness.
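Mocha model-checks alternating-time temporal logic (ATL), a branching-time logic with game semantics, so protocol properties are stated in terms of what coalitions of participants can enforce. As a purely schematic illustration (not the formula used in the paper), fairness for a signer $P_i$ against the coalition of the remaining signers can be rendered along the lines of

$$ \neg\,\langle\!\langle \mathrm{Others} \rangle\!\rangle\, \Diamond \big( \mathit{hasContract}_{\mathrm{Others}} \;\wedge\; \neg\,\langle\!\langle P_i \rangle\!\rangle\, \Diamond\, \mathit{hasContract}_{i} \big), $$

i.e. the other signers have no strategy to obtain a contract signed by $P_i$ while leaving $P_i$ without a strategy to obtain a signed contract in return.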
28.
29.
The value of space observations of ocean colour for determining variations in phytoplankton distribution and for deriving primary production (via models) has been amply demonstrated by the Coastal Zone Color Scanner (CZCS), which operated from 1978 to 1986. The capabilities of this pioneering sensor, however, were limited in both spectral resolution and radiometric accuracy. The next generation of ocean-colour sensors will benefit from major improvements. The Medium Resolution Imaging Spectrometer (MERIS), planned by the European Space Agency (ESA) for the Envisat platform, has been designed to measure radiances in 15 visible and infrared channels. Three infrared channels will allow aerosol characterization, and therefore accurate atmospheric corrections, to be performed for each pixel. For the retrieval of marine parameters, nine channels between 410 and 705 nm will be available (as opposed to only four with the CZCS). In coastal waters this should, in principle, allow the separate quantification of different substances (phytoplankton, mineral particles, yellow substance). In open-ocean waters optically dominated by phytoplankton and their associated detrital matter, the basic information (i.e. the concentration of phytoplanktonic pigments) will be retrieved with improved accuracy thanks to the increased radiometric performance of MERIS. The adoption of multi-wavelength algorithms could also yield additional information on auxiliary pigments and taxonomic groups. Finally, MERIS will be one of the first sensors to allow measurement of Sun-induced in vivo chlorophyll a fluorescence, which could provide a complementary approach to assessing phytoplankton abundance. The development of these next-generation algorithms, however, requires a number of fundamental studies.
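As a purely generic illustration of the kind of multi-wavelength (band-ratio) pigment algorithm referred to above, assuming an empirical power-law form with placeholder coefficients (not MERIS or CZCS values):

```python
# Illustrative only: generic blue/green band-ratio pigment retrieval of the kind used
# since the CZCS era. Coefficients a and b are placeholders, not operational values.
import numpy as np

def pigment_from_band_ratio(r_blue, r_green, a=1.0, b=-1.5):
    """Empirical power-law retrieval C = a * (R_blue / R_green) ** b  [mg m^-3],
    with R_blue and R_green the reflectances in a blue (~443 nm) and green (~560 nm) channel."""
    return a * (np.asarray(r_blue, dtype=float) / np.asarray(r_green, dtype=float)) ** b
```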
30.