61.
Distributed video coding (DVC) constitutes an original coding framework designed to meet the stringent requirements imposed by uplink-oriented, low-power mobile video applications. The quality of the side information available to the decoder and the efficiency of the employed channel codes are the primary factors determining the success of a DVC system. This contribution introduces two novel techniques for probabilistic motion compensation to generate side information at the Wyner-Ziv decoder. The employed DVC scheme uses a base layer serving as a hash to facilitate overlapped block motion estimation at the decoder side. On top of the base layer, a supplementary Wyner-Ziv layer is coded in the DCT domain. Both proposed probabilistic motion compensation techniques are driven by the actual correlation channel statistics and reuse information contained in the hash. Experimental results show significant rate savings achieved by the novel side-information generation methods compared to previous techniques. Moreover, the presented DVC architecture, featuring the proposed side-information generation techniques, delivers state-of-the-art compression performance.
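Decoder-side motion estimation of the kind described can be illustrated with a minimal sketch. The function below is a plain (non-overlapped) SAD block matcher that predicts a Wyner-Ziv frame from a reference frame; the actual scheme additionally uses hash information and weights overlapping candidate blocks by correlation-channel statistics, which this toy omits. All names are illustrative.

```python
import numpy as np

def block_motion_estimate(ref, target, block=8, search=4):
    """For each block of `target`, find the best-matching block in `ref`
    within a +/-`search` pixel window (SAD criterion) and copy it as the
    prediction. Returns the predicted frame and the motion field."""
    h, w = target.shape
    pred = np.zeros_like(target)
    motion = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            tgt = target[y:y + block, x:x + block]
            best, best_sad = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - block and 0 <= xx <= w - block:
                        cand = ref[yy:yy + block, xx:xx + block]
                        sad = np.abs(cand.astype(int) - tgt.astype(int)).sum()
                        if sad < best_sad:
                            best_sad, best = sad, (dy, dx)
            dy, dx = best
            pred[y:y + block, x:x + block] = ref[y + dy:y + dy + block,
                                                 x + dx:x + dx + block]
            motion[(y, x)] = best
    return pred, motion
```

In a real DVC decoder the prediction would serve as side information for Wyner-Ziv decoding of the DCT-domain layer.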
62.
Based on the analysis of data from “real” Second Life meetings in educational and professional settings, our objective is to understand the actual uses of this kind of virtual world and, more particularly, the interactive frames constructed in SL meetings and their interrelation with the uses of communication media. The originality of our analytical framework stems from the combination of two perspectives: a third-person perspective based on analyses of observational data, and a first-person perspective based on users’ reports on their experience in SL. Our results highlight: boundaries between serious and recreational registers; the avatar’s expression and the attribution of feelings to the person “behind” it; spatial positioning as an indicator and constructor of roles and engagement; management of communication fluidity and joint focus; narrowing of the communication media used for task-focused content; and an emerging mediation role for the management of fractured exchanges.
63.
Dual-color fluorescence correlation spectroscopy is an attractive method for quantifying protein interactions in living cells. When performing these experiments, however, one must compensate for a known spectral bleed-through artifact that corrupts cross-correlation data. In this article, crosstalk problems were overcome with an approach based on fluorescence lifetime correlation spectroscopy (FLCS). We show that FLCS applied to dual-color EGFP and mCherry cross-correlation allows the determination of protein-protein interactions in living cells without the need for spectral bleed-through calibration. The methodology was validated using an EGFP-mCherry tandem construct in comparison with coexpressed EGFP and mCherry in live cells. The dual-color FLCS procedure, in which the different laser intensities do not have to be controlled during the experiment, is particularly helpful for the quantitative study of protein interactions in live samples.
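The cross-correlation amplitude underlying such measurements can be sketched numerically. The function below estimates the normalized correlation G(tau) = ⟨δI1(t)·δI2(t+tau)⟩ / (⟨I1⟩⟨I2⟩) from two discrete intensity traces. This is a toy illustration only; real FLCS additionally applies lifetime-derived statistical filter weights to the photon stream, which is what removes the bleed-through contribution.

```python
import numpy as np

def cross_correlation(i1, i2, max_lag):
    """Normalized cross-correlation G(tau) = <dI1(t) dI2(t+tau)> / (<I1><I2>)
    for integer lags 0..max_lag, as used in fluorescence
    cross-correlation analysis of two intensity traces."""
    i1, i2 = np.asarray(i1, float), np.asarray(i2, float)
    d1 = i1 - i1.mean()
    d2 = i2 - i2.mean()
    norm = i1.mean() * i2.mean()
    return np.array([(d1[:len(d1) - tau] * d2[tau:]).mean() / norm
                     for tau in range(max_lag + 1)])
```

For identical traces, G(0) reduces to var(I)/⟨I⟩², the familiar autocorrelation amplitude.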
64.
A comparison of broad versus deep auditory menu structures
OBJECTIVE: The primary purpose of this experiment was to gain a greater understanding of how working memory is utilized when interacting with a speech-enabled interactive voice response (IVR) system. BACKGROUND: A widely promoted guideline advises limiting IVR menus to five or fewer items because of constraints of the human memory system, commonly citing Miller's (1956) paper. The authors argue that Miller's paper does not, in fact, support this guideline. Furthermore, applying modern theories of working memory leads to the opposite conclusion: reducing menu length by creating a deeper structure is actually more demanding of users' working memory and leads to poorer performance and satisfaction. METHOD: Participants took a working memory capacity test and then attempted to complete a series of e-mail tasks using one of two IVR designs (functionally equivalent, but one with a broad menu structure and the other with a deep structure). RESULTS: Users of the broad-structure IVR performed better and were more satisfied than users of the deep-structure IVR. Furthermore, this effect was more pronounced for those with low working memory capacity. CONCLUSION: The results indicate that creating a deeper structure is more demanding of working memory resources than the alternative of longer, shallower menus. APPLICATION: This experiment has important practical implications for all systems with auditory menus (particularly IVRs) because it provides empirical evidence refuting a widely promoted design practice.
65.
This paper examines how reflectance spectrometry, used in the laboratory to estimate clay and calcium carbonate (CaCO3) soil contents, can be applied to field and airborne measurements for soil property mapping. A continuum removal (CR) technique quantifying the specific absorption features of clay (2206 nm) and CaCO3 (2341 nm) was applied to laboratory, field and airborne HYMAP reflectance measurements collected in 2003 (33 sites) and 2005 (19 sites) over bare soil sites a few meters in size within the La Peyne Valley area, southern France. Nine intermediate stages from the laboratory up to the HYMAP sensor were considered in order to separately evaluate the possible degradation of estimation performance when moving across scales and sensors, e.g. radiometric calibration, spectral resolution, spatial variability, illumination conditions, and surface status including roughness, soil moisture, and the presence and nature of pebbles. Significant relationships were observed between clay and CaCO3 contents and CR values computed at 2206 nm and 2341 nm, respectively, from reflectance measurements ranging from the laboratory level, with an ASD spectrophotometer, up to the HYMAP spectro-imaging sensor. The performance of clay and CaCO3 estimation decreased from the laboratory to the airborne scale. The main factors inducing uncertainties in the estimates were radiometric and wavelength calibration uncertainties of the HYMAP sensor, as well as possible residual atmospheric effects.
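The continuum-removal computation itself is simple to sketch. Assuming the continuum is a straight line between two shoulder wavelengths (the shoulder positions in the usage below are illustrative; the abstract only specifies the feature centers at 2206 nm and 2341 nm), the band depth follows as 1 - CR:

```python
import numpy as np

def continuum_removed_depth(wavelengths, reflectance, left, right, feature):
    """Continuum removal over one absorption feature: fit a straight line
    (the continuum) between the two shoulder wavelengths `left` and `right`,
    divide the spectrum by it, and return the band depth 1 - CR at `feature`."""
    wl = np.asarray(wavelengths, float)
    r = np.asarray(reflectance, float)
    rl = np.interp(left, wl, r)    # reflectance at left shoulder
    rr = np.interp(right, wl, r)   # reflectance at right shoulder
    continuum = rl + (rr - rl) * (wl - left) / (right - left)
    cr = r / continuum             # continuum-removed spectrum
    return 1.0 - np.interp(feature, wl, cr)
```

Deeper absorption at 2206 nm (larger returned value) would then be regressed against measured clay content.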
66.
Recovering articulated shape and motion, especially human body motion, from video is a challenging problem with a wide range of applications in medical study, sports analysis, animation, etc. Previous work on articulated motion recovery generally requires prior knowledge of the kinematic chain and usually does not address recovery of the articulated shape; the non-rigidity of some articulated parts, e.g. human body motion with non-rigid facial motion, is ignored entirely. We propose a factorization-based approach to recover the shape, motion and kinematic chain of an articulated object with non-rigid parts, all directly from video sequences under a unified framework. The approach is based on our modeling of articulated non-rigid motion as a set of intersecting motion subspaces. A motion subspace is the linear subspace of the trajectories of an object; it can model either rigid or non-rigid motion. The intersection of the motion subspaces of two linked parts models the motion of the articulated joint or axis between them. Our approach consists of algorithms for motion segmentation, kinematic chain building, and shape recovery. It handles outliers and can be automated. We evaluate the approach in synthetic and real experiments and demonstrate how to recover articulated structure with non-rigid parts via a single-view camera without prior knowledge of its kinematic chain.
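The core algebraic idea, that the trajectory subspaces of two linked parts intersect in a subspace encoding their joint, can be checked with a small dimension computation. This sketch uses the standard identity dim(U ∩ V) = dim U + dim V - dim(U + V) on trajectory matrices; it illustrates the modeling assumption only, not the paper's full factorization pipeline.

```python
import numpy as np

def subspace_intersection_dim(A, B, tol=1e-8):
    """Dimension of the intersection of the column spaces of A and B,
    via dim(U & V) = rank(A) + rank(B) - rank([A | B]).
    For linked articulated parts, the trajectory subspaces intersect
    in a subspace describing the shared joint or axis motion."""
    rA = np.linalg.matrix_rank(A, tol)
    rB = np.linalg.matrix_rank(B, tol)
    rAB = np.linalg.matrix_rank(np.hstack([A, B]), tol)
    return rA + rB - rAB
```

A nonzero intersection between two segmented motion subspaces is exactly the cue used to link parts into a kinematic chain.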
67.
Recent progress in modelling, animation and rendering means that rich, high-fidelity virtual worlds are found in many interactive graphics applications. However, the viewer's experience of a 3D world depends on the nature of the virtual cinematography, in particular the camera position, orientation and motion in relation to the elements of the scene and the action. Camera control encompasses viewpoint computation, motion planning and editing. We present a range of computer graphics applications and draw on insights from cinematographic practice to identify their different requirements with regard to camera control. The nature of the camera control problem varies with these requirements, which range from augmented manual control (semi-automatic) in interactive applications to fully automated approaches. We review the full range of solution techniques, from constraint-based to optimization-based approaches, and conclude with an examination of occlusion management and expressiveness in the context of declarative approaches to camera control.
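Viewpoint computation, the most basic of the camera-control subproblems surveyed here, can be illustrated with a standard look-at construction (a generic sketch, not drawn from any specific system discussed in the text):

```python
import numpy as np

def look_at(eye, target, up=(0.0, 0.0, 1.0)):
    """Minimal viewpoint computation: build the rotation whose rows are the
    camera's right, up, and backward axes for a camera at `eye` looking at
    `target` (right-handed; degenerate if the view direction is parallel
    to the world `up` vector)."""
    eye, target, up = (np.asarray(v, float) for v in (eye, target, up))
    f = target - eye
    f /= np.linalg.norm(f)          # forward
    r = np.cross(f, up)
    r /= np.linalg.norm(r)          # right
    u = np.cross(r, f)              # true up
    return np.vstack([r, u, -f])    # rows: right, up, backward
```

Constraint- and optimization-based systems then search over `eye` and `target` so the resulting view satisfies framing, occlusion, or stylistic requirements.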
68.
In geographic information retrieval, queries often name geographic regions that do not have a well-defined boundary, such as “Southern France.” We provide two algorithmic approaches to the problem of computing reasonable boundaries of such regions based on data points that have evidence indicating that they lie either inside or outside the region. Our problem formulation leads to a number of subproblems related to red-blue point separation and minimum-perimeter polygons, many of which we solve algorithmically. We give experimental results from our implementation and a comparison of the two approaches. This research is supported by the EU-IST Project No. IST-2001-35047 (SPIRIT) and by grant WO 758/4-2 of the German Research Foundation (DFG).
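As a baseline for the red-blue separation problem described here, one can take the convex hull of the "inside" points as a candidate boundary and count the "outside" points it wrongly encloses. This is only a naive sketch; the paper's algorithms compute more refined boundaries (e.g. minimum-perimeter, possibly non-convex).

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def region_boundary(inside_pts, outside_pts):
    """Baseline boundary for an imprecise region: the convex hull of the
    'inside' evidence points, plus the number of 'outside' points it
    wrongly encloses (the misclassification count)."""
    hull = convex_hull(inside_pts)
    def contains(poly, p):
        # ray casting: toggle on each edge crossing to the right of p
        n, inside = len(poly), False
        for i in range(n):
            (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
            if (y1 > p[1]) != (y2 > p[1]) and \
               p[0] < (x2 - x1) * (p[1] - y1) / (y2 - y1) + x1:
                inside = not inside
        return inside
    errors = sum(contains(hull, q) for q in outside_pts)
    return hull, errors
```

Minimizing such misclassification counts over richer polygon families is essentially the separation problem the paper formalizes.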
69.
Today, laypersons often consult the Internet to inform themselves about health-related issues. However, competent use of these often complex and heterogeneous information resources cannot be taken for granted, because many Internet users lack the necessary metacognitive prerequisites. We therefore developed the metacognitive computer tool met.a.ware, which supports laypersons’ Internet research on medical information by means of metacognitive prompting and ontological classification. In an experimental investigation of met.a.ware, a total of 118 participants with little medical knowledge were asked to conduct Internet research on a medical topic. Participants were randomly assigned to four experimental groups that worked with met.a.ware and received either evaluation prompts, monitoring prompts, both types of prompts, or no prompts. All experimental conditions were additionally provided with ontological classification. One control group took paper-and-pencil notes; a further control group took notes using a blank text window. Results showed that laypersons receiving evaluation prompts outperformed controls in terms of knowledge about sources and produced more arguments commenting on the source of information in an essay task. In addition, laypersons receiving monitoring prompts acquired significantly more knowledge about facts, but did not perform better on a comprehension test than the controls. The availability of ontological categories helped to structure the notes that laypersons in the ontological classification conditions took during their Internet research. Analyses of the notes further demonstrated that the availability of ontological categories guided information search in the direction of the selected categories. It is concluded that met.a.ware is an effective tool for supporting laypersons’ Internet research.
70.
Sequential fixed-point ICA based on mutual information minimization
A new gradient technique is introduced for linear independent component analysis (ICA) based on the Edgeworth expansion of mutual information, in which the algorithm operates sequentially using fixed-point iterations. To address the adverse effect of outliers, a robust version of the Edgeworth expansion is adopted, in terms of robust cumulants, and robust derivatives of the Hermite polynomials are used. In addition, a new constrained version of ICA is introduced, based on goal programming of mutual information objectives, and applied to the extraction of the antepartum fetal electrocardiogram from multielectrode cutaneous recordings on the mother's thorax and abdomen.
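A one-unit fixed-point ICA iteration of the general kind described can be sketched as follows. Note that this toy uses the common tanh contrast function rather than the robust Edgeworth-expansion estimate of mutual information developed in the paper; what it illustrates is the sequential fixed-point update structure.

```python
import numpy as np

def fastica_one_unit(X, n_iter=200, seed=0):
    """One-unit fixed-point ICA (FastICA-style, tanh nonlinearity) on
    data X of shape (dims, samples). Centers and whitens the data, then
    iterates w <- E[z g(w'z)] - E[g'(w'z)] w, normalizing each step,
    and returns the recovered source estimate w'z."""
    X = X - X.mean(axis=1, keepdims=True)
    # whitening via eigendecomposition of the covariance matrix
    d, E = np.linalg.eigh(np.cov(X))
    Z = E @ np.diag(1.0 / np.sqrt(d)) @ E.T @ X
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(Z.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        wz = w @ Z
        g, gp = np.tanh(wz), 1.0 - np.tanh(wz) ** 2
        w_new = (Z * g).mean(axis=1) - gp.mean() * w
        w_new /= np.linalg.norm(w_new)
        converged = abs(abs(w_new @ w) - 1.0) < 1e-10
        w = w_new
        if converged:
            break
    return w @ Z
```

A sequential (deflation) scheme would extract further components by repeating this with `w` kept orthogonal to previously found directions.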