981.
Costs are often an important part of the classification process. Cost factors have been taken into consideration in many previous studies of decision tree models. In this study, we also consider a cost-sensitive decision tree construction problem. We assume that there are test costs that must be paid to obtain the values of the decision attribute and that a record must be classified without exceeding a spending cost threshold. Unlike previous studies, in which records were classified with only a single condition attribute, in this study we are able to classify records with multiple condition attributes simultaneously. An algorithm is developed to build a cost-constrained decision tree, which allows records to be classified using multiple condition attributes at the same time. The experimental results show that our algorithm satisfactorily handles data with multiple condition attributes under different cost constraints.
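The core tension described above, buying informative tests without exceeding a cost budget, can be sketched with a simple greedy selection rule (a hypothetical illustration, not the paper's algorithm, which builds a full tree): repeatedly buy the attribute with the best information gain per unit cost while the budget allows.

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr):
    # reduction in label entropy after splitting on attr
    base, n, rem = entropy(labels), len(rows), 0.0
    for v in set(r[attr] for r in rows):
        sub = [l for r, l in zip(rows, labels) if r[attr] == v]
        rem += len(sub) / n * entropy(sub)
    return base - rem

def select_tests(rows, labels, costs, budget):
    # greedily buy the test with the best gain-per-cost within the budget
    chosen, spent, remaining = [], 0.0, set(costs)
    while remaining:
        scored = [(info_gain(rows, labels, a) / costs[a], a)
                  for a in remaining if spent + costs[a] <= budget]
        if not scored:
            break
        ratio, best = max(scored)
        if ratio <= 0:   # stop when no remaining test is informative
            break
        chosen.append(best)
        spent += costs[best]
        remaining.remove(best)
    return chosen, spent
```

Here attribute `x` perfectly separates the classes while `y` is uninformative, so only `x` is bought even though `y` is cheaper.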
982.
In this work, neural network-based models involved in hyperspectral image spectra separation are considered. Focus is on how to select the most highly informative samples for effectively training the neural architecture. This issue is addressed here by several new algorithms for intelligent selection of training samples: (1) a border-training algorithm (BTA) which selects training samples located in the vicinity of the hyperplanes that can optimally separate the classes; (2) a mixed-signature algorithm (MSA) which selects the most spectrally mixed pixels in the hyperspectral data as training samples; and (3) a morphological-erosion algorithm (MEA) which incorporates spatial information (via mathematical morphology concepts) to select spectrally mixed training samples located in spatially homogeneous regions. These algorithms, along with other standard techniques based on orthogonal projections and a simple Maximin-distance algorithm, are used to train a multi-layer perceptron (MLP), selected in this work as a representative neural architecture for spectral mixture analysis. Experimental results are provided using both a database of nonlinear mixed spectra with absolute ground truth and a set of real hyperspectral images, collected at different altitudes by the digital airborne imaging spectrometer (DAIS 7915) and reflective optics system imaging spectrometer (ROSIS) operating simultaneously at multiple spatial resolutions.
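The Maximin-distance baseline mentioned above admits a very compact sketch (a minimal illustration, not the paper's implementation; the `start` index is fixed here for determinism): each new training sample is the one whose nearest already-selected sample is farthest away, which spreads the training set over the data cloud.

```python
import numpy as np

def maximin_selection(X, k, start=0):
    # greedily add the sample whose nearest already-chosen sample is farthest
    chosen = [start]
    dists = np.linalg.norm(X - X[start], axis=1)
    while len(chosen) < k:
        nxt = int(np.argmax(dists))
        chosen.append(nxt)
        # distance of every point to its nearest chosen sample
        dists = np.minimum(dists, np.linalg.norm(X - X[nxt], axis=1))
    return chosen
```

On three collinear points the rule first jumps to the far end before filling in the middle, as expected.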
983.
The quick advance in image/video editing techniques has enabled people to synthesize realistic images/videos conveniently. Some legal issues may arise when a tampered image cannot be distinguished from a real one by visual examination. In this paper, we focus on JPEG images and propose detecting tampered images by examining the double quantization effect hidden among the discrete cosine transform (DCT) coefficients. To our knowledge, our approach is the only one to date that can automatically locate the tampered region, and it has several additional advantages: fine-grained detection at the scale of 8×8 DCT blocks, insensitivity to different kinds of forgery methods (such as alpha matting and inpainting, in addition to simple image cut/paste), the ability to work without fully decompressing the JPEG images, and fast execution. Experimental results on JPEG images are promising.
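The double quantization effect can be made visible with a toy simulation (purely illustrative; a real detector analyzes per-block histograms of the actual JPEG coefficients, not synthetic data): re-quantizing already-quantized coefficients with a smaller step leaves periodic gaps and peaks in the coefficient histogram, which the score below measures.

```python
import numpy as np

def double_quantize(coeffs, q1, q2):
    # quantize with step q1, dequantize, then re-quantize with step q2
    return np.round(np.round(coeffs / q1) * q1 / q2).astype(int)

def histogram_period_score(vals, period):
    # ratio of mean bin count at multiples of `period` to all other bins;
    # double quantization with q1 > q2 concentrates mass periodically
    h = np.bincount(vals - vals.min())
    idx = np.arange(len(h))
    on = h[idx % period == 0].mean()
    off = h[idx % period != 0].mean()
    return on / max(off, 1e-9)
```

Coefficients quantized once yield a near-flat histogram (score near 1), while double quantization with steps 4 then 2 empties every odd bin and drives the score up sharply.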
984.
While many works consider moving faces only as collections of frames and apply still image-based methods, recent developments indicate that excellent results can be obtained using texture-based spatiotemporal representations for describing and analyzing faces in videos. Inspired by psychophysical findings which state that facial movements can provide valuable information to face analysis, and also by our recent success in using LBP (local binary patterns) to combine appearance and motion for dynamic texture analysis, this paper investigates the combination of facial appearance (the shape of the face) and motion (the way a person talks and moves his/her facial features) for face analysis in videos. We propose and study an approach for spatiotemporal face and gender recognition from videos using an extended set of volume LBP features and a boosting scheme. We experiment with several publicly available video face databases and consider different benchmark methods for comparison. Our extensive experimental analysis clearly demonstrates the promising performance of the LBP-based spatiotemporal representations for describing and analyzing faces in videos.
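The single-frame LBP operator is the building block that volume LBP extends into the temporal domain. A minimal sketch of the basic 3×3 operator (illustrative only; it omits the uniform-pattern machinery and the three orthogonal-plane/volume extension the paper uses):

```python
import numpy as np

def lbp_8(img):
    # threshold the 8 neighbors of each interior pixel against the center
    # pixel and pack the comparison bits into an 8-bit code
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    c = img[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code
```

A flat patch yields the all-ones code 255 (every neighbor ties the center), while a bright isolated center pixel yields code 0.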
985.
The performance improvements that can be achieved by classifier selection and by integrating terrain attributes into land cover classification are investigated in the context of rock glacier detection. While exposed glacier ice can easily be mapped from multispectral remote-sensing data, the detection of rock glaciers and debris-covered glaciers is a challenge for multispectral remote sensing. Motivated by the successful use of digital terrain analysis in rock glacier distribution models, the predictive performance of a combination of terrain attributes derived from SRTM (Shuttle Radar Topography Mission) digital elevation models and Landsat ETM+ data for detecting rock glaciers in the San Juan Mountains, Colorado, USA, is assessed. Eleven statistical and machine-learning techniques are compared in a benchmarking exercise, including logistic regression, generalized additive models (GAM), linear discriminant techniques, the support vector machine, and bootstrap-aggregated tree-based classifiers such as random forests. Penalized linear discriminant analysis (PLDA) yields mapping results that are significantly better than all other classifiers, achieving a median false-positive rate (mFPR, estimated by cross-validation) of 8.2% at a sensitivity of 70%, i.e. when 70% of all true rock glacier points are detected. The GAM and standard linear discriminant analysis were second best (mFPR: 8.8%), followed by polyclass. For comparison, the predictive performance of the best three techniques is also evaluated using (1) only terrain attributes as predictors (mFPR: 13.1-14.5% for the best three techniques), and (2) only Landsat ETM+ data (mFPR: 19.4-22.7%), yielding significantly higher mFPR estimates at 70% sensitivity. The mFPR of the worst three classifiers was about one-quarter higher than that of the best three, and the combination of terrain attributes and multispectral data reduced the mFPR by more than one-half compared to remote sensing alone.
These results highlight the importance of combining remote-sensing and terrain data for mapping rock glaciers and other debris-covered ice, and of choosing the optimal classifier based on unbiased error estimators. The proposed benchmarking methodology is more generally suitable for comparing the utility of remote-sensing algorithms and sensors.
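The headline metric used above, the false-positive rate at a fixed 70% sensitivity, can be computed from classifier scores as in this sketch (illustrative; the paper estimates a median FPR by cross-validation, which is omitted here):

```python
import numpy as np

def fpr_at_sensitivity(scores, labels, sens=0.70):
    # choose the threshold that detects at least `sens` of the positives,
    # then report the fraction of negatives above that threshold
    pos = np.sort(scores[labels == 1])[::-1]
    k = int(np.ceil(sens * len(pos)))
    thr = pos[k - 1]                    # k-th highest positive score
    neg = scores[labels == 0]
    return float(np.mean(neg >= thr))
```

With eight positives and a 75% target, the threshold lands on the sixth-highest positive score, and one of the four negatives exceeds it.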
986.
Analysis of low-level usage data collected in empirical studies of user interaction is well known as a demanding task. Existing techniques for data collection and analysis are either application specific or data-driven. This paper presents a workspace for cleaning, transforming and analyzing low-level usage data that we have developed, and reports our experience with it. Through its five-level architecture, the workspace distinguishes between more general data, typically used in initial analysis, and data that answer a specific research question. The workspace was used in four studies, in which a total of 6.5M user actions were collected from 238 participants. The collected data proved useful for: (i) validating solution times, (ii) validating process conformance, (iii) exploratory studies on program comprehension concerning the use of classes and documents and (iv) testing hypotheses on keystroke latencies. We found workspace creation to be time-consuming; particularly demanding were determining the context of actions and dealing with deficiencies. However, once these processes were understood, it was easy to reuse the workspace for different experiments and to extend it to answer new research questions. Based on our experience, we give a set of guidelines that might help in setting up studies and in collecting and preparing data. We recommend that designers of data collection instruments add context to each action. Furthermore, we recommend rapid iterations starting early in the process of data preparation and analysis, covering both general and specific data. Copyright © 2009 John Wiley & Sons, Ltd.
987.
All elliptic curve cryptographic schemes are based on scalar multiplication of points, and hence faster scalar multiplication translates directly into faster overall operation. This paper proposes two different parallelization techniques to speed up GF(p) elliptic curve multiplication in affine coordinates, together with the corresponding architectures. The proposed implementations are capable of resisting different side channel attacks based on time and power analysis. The 160-, 192-, 224- and 256-bit implementations of both architectures have been synthesized and simulated for both FPGA and 0.13 μm CMOS ASIC. The final designs have been prototyped on a Xilinx Virtex-4 xc4vlx200-12ff1513 FPGA board and performance analyses were carried out. The experimental results and performance comparisons show better throughput of the proposed implementations compared to previously reported architectures.
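For reference, the operation being accelerated is scalar multiplication built from affine point addition and doubling, sketched below on a toy curve (y² = x³ + 7 over GF(17), chosen only for readability). Note that this plain double-and-add variant leaks the scalar through its branch pattern; it is exactly the kind of implementation the side-channel countermeasures above are designed to replace.

```python
def inv_mod(a, p):
    return pow(a, p - 2, p)  # Fermat inverse; p must be prime

def ec_add(P, Q, a, p):
    # affine addition on y^2 = x^3 + a*x + b over GF(p);
    # None represents the point at infinity
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                      # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * inv_mod(2 * y1, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * inv_mod(x2 - x1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mult(k, P, a, p):
    # left-to-right double-and-add
    R = None
    for bit in bin(k)[2:]:
        R = ec_add(R, R, a, p)
        if bit == '1':
            R = ec_add(R, P, a, p)
    return R
```

The point (5, 8) lies on this toy curve and has order 3, so doubling it gives its negative (5, 9) and tripling it gives the point at infinity.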
988.
In this work, point-wise discretization error is bounded via an interval approach for the elasticity problem using an interval boundary element formulation. The formulation allows for computation of worst-case bounds on the boundary values of the elasticity problem. From these bounds, worst-case bounds on the true solution at any point in the domain of the system can be computed. Examples are presented to demonstrate the effectiveness of this interval treatment of local discretization error in the elasticity problem.
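The interval arithmetic underlying such worst-case enclosures can be sketched minimally (exact floating-point operations here; a faithful implementation would direct the rounding outward so the enclosure remains guaranteed):

```python
class Interval:
    # closed interval [lo, hi]
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)

    def __sub__(self, o):
        # subtraction swaps the other interval's endpoints
        return Interval(self.lo - o.hi, self.hi - o.lo)

    def __mul__(self, o):
        # the product range is spanned by the four endpoint products
        ps = [self.lo * o.lo, self.lo * o.hi,
              self.hi * o.lo, self.hi * o.hi]
        return Interval(min(ps), max(ps))

    def width(self):
        return self.hi - self.lo
```

Every operation returns an interval guaranteed to contain all pointwise results, which is what turns bounds on boundary values into bounds on the interior solution.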
989.
Consistent segmentation of 3D models
This paper proposes a method to segment a set of models consistently. The method simultaneously segments models and creates correspondences between segments. First, a graph is constructed whose nodes represent the faces of every mesh, and whose edges connect adjacent faces within a mesh and corresponding faces in different meshes. Second, a consistent segmentation is created by clustering this graph, allowing for outlier segments that are not present in every mesh. The method is demonstrated for several classes of objects and used for two applications: symmetric segmentation and segmentation transfer.
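The first step, the graph construction, might look like the following sketch. The input format and the precomputed `correspondences` list are hypothetical assumptions (in the paper, cross-mesh correspondences come from the method itself), and the clustering step is omitted.

```python
from collections import defaultdict

def face_adjacency_graph(meshes, correspondences):
    # nodes are (mesh_id, face_id); each face is a tuple of vertex ids
    graph = defaultdict(set)
    for m, faces in enumerate(meshes):
        edge_to_faces = defaultdict(list)
        for f, face in enumerate(faces):
            for i in range(len(face)):
                e = tuple(sorted((face[i], face[(i + 1) % len(face)])))
                edge_to_faces[e].append(f)
        # connect faces that share a mesh edge
        for fs in edge_to_faces.values():
            for i in range(len(fs)):
                for j in range(i + 1, len(fs)):
                    graph[(m, fs[i])].add((m, fs[j]))
                    graph[(m, fs[j])].add((m, fs[i]))
    # connect corresponding faces across meshes
    for (m1, f1), (m2, f2) in correspondences:
        graph[(m1, f1)].add((m2, f2))
        graph[(m2, f2)].add((m1, f1))
    return graph
```

Clustering this combined graph then yields segments that are consistent across the whole model set, because cross-mesh edges pull corresponding faces into the same cluster.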
990.
Acceptance, utility, and usability of system designs have become a focal interest in human–computer interaction (HCI) research, yet detailed understanding of which system design features influence them is still lacking. The purpose of this study is to examine the effects of five product design features (customization, adaptive behavior, memory load, content density, and speed) on user preference through an experimental study using conjoint analysis. In the experiment, product prototypes were generated instead of classical conjoint cards. In addition, the desirability and market segments of the prototypes were identified. According to the results, among the five design features, speed is the most important and customization the least important feature affecting user preference. Contrary to expectations, customization has a relatively small importance value in this research. The design features that influence user preference after speed are minimal memory load, adaptive behavior, and content density, respectively. Thus, interfaces with high speed, minimal memory load, adaptive behavior, low content density, and customization are preferred over those without these features.
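In conjoint analysis, an attribute's importance is conventionally its part-worth utility range divided by the sum of ranges over all attributes. The sketch below shows the computation; the part-worth numbers in the example are invented purely to mirror the reported ordering (speed most important, customization least) and are not the study's estimates.

```python
def attribute_importance(part_worths):
    # importance of an attribute = range of its part-worth utilities
    # relative to the total range across all attributes
    ranges = {a: max(u.values()) - min(u.values())
              for a, u in part_worths.items()}
    total = sum(ranges.values())
    return {a: r / total for a, r in ranges.items()}
```

The importances sum to one by construction, so they can be read directly as relative weights.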