1,138 results found (search time: 890 ms).
21.
S-boxes are a cornerstone component of symmetric-key cryptographic algorithms such as the DES and AES encryption systems. In block ciphers, they are typically used to obscure the relationship between the plaintext and the ciphertext. Non-linear and uncorrelated S-boxes are the most resistant to linear and differential cryptanalysis. In this paper, we pursue a twofold objective: first, we evolve regular S-boxes with high non-linearity and low auto-correlation; second, we automatically generate evolvable hardware for the obtained S-boxes. For the former, we use a quantum-inspired evolutionary algorithm to optimize regularity, non-linearity and auto-correlation, the three main properties desired in resilient S-boxes. For the latter, we exploit the same algorithm to automatically generate evolvable hardware designs of substitution boxes that minimize hardware area and encryption/decryption time, the two main hardware characteristics. We compare our results against existing, well-known designs produced by conventional methods as well as by genetic algorithms, and show that our approach yields higher-quality S-box encodings as well as circuits.
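The following Python sketch illustrates how the two quality criteria named above are typically computed for an S-box given as a lookup table; it is not the authors' quantum-inspired algorithm, and the function names and the 4-bit example are purely illustrative.

```python
# Minimal sketch of the fitness properties an evolutionary search over
# S-boxes would typically score: non-linearity via the Walsh-Hadamard
# spectrum and absolute auto-correlation of every non-trivial component
# function b.S(x).

def parity(x: int) -> int:
    return bin(x).count("1") & 1

def sbox_nonlinearity_autocorr(sbox, n):
    """Return (min nonlinearity, max |auto-correlation|) over all
    non-zero component functions of an n-bit S-box."""
    size = 1 << n
    worst_nl = size          # upper bound, tightened below
    worst_ac = 0
    for b in range(1, size):                 # every non-zero output mask
        f = [parity(b & sbox[x]) for x in range(size)]
        # Walsh spectrum: W(a) = sum_x (-1)^(f(x) XOR a.x)
        max_walsh = max(
            abs(sum((-1) ** (f[x] ^ parity(a & x)) for x in range(size)))
            for a in range(size)
        )
        worst_nl = min(worst_nl, (size - max_walsh) // 2)
        # Auto-correlation: AC(a) = sum_x (-1)^(f(x) XOR f(x XOR a)), a != 0
        max_ac = max(
            abs(sum((-1) ** (f[x] ^ f[x ^ a]) for x in range(size)))
            for a in range(1, size)
        )
        worst_ac = max(worst_ac, max_ac)
    return worst_nl, worst_ac

# Example: the identity 4-bit S-box is linear, so its nonlinearity is 0.
print(sbox_nonlinearity_autocorr(list(range(16)), 4))
```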
22.
Motion study of the hip joint in extreme postures
Hip osteoarthritis can have many causes (e.g., cam/pincer impingement), but the exact pathogenesis of idiopathic osteoarthritis has not yet been clearly delineated. The aim of the present work is to analyze the consequences of repetitive extreme hip motion on the labral cartilage. Our hypothesis is that extreme movements can induce excessive labral deformations and lead to early arthritis. To verify this hypothesis, an optical motion capture system is used to estimate the kinematics of the patient-specific hip joint, while soft-tissue artifacts are reduced with an effective correction method. Subsequently, a physical simulation system is used to compute accurate labral deformations during motion and to assess the global pressure on the labrum, as well as any local pressure excess that may be physiologically damaging. Results show that peak contact pressures occur at extreme hip flexion/abduction and that the pressure distribution corresponds to radiologically observed damage zones in the labrum.
Nadia Magnenat-Thalmann
23.
Despite the ability of current GPU processors to handle heavy parallel computation tasks, their use for solving medical image segmentation problems is still not fully exploited and remains challenging. Many difficulties can arise, related for example to the different image modalities, the noise and artifacts in the source images, or the variability in shape and appearance of the structures to segment. Motivated by practical segmentation problems in the medical field, we present in this paper a GPU framework based on explicit discrete deformable models, implemented on the NVidia CUDA architecture and aimed at the segmentation of volumetric images. The framework supports segmenting several volumetric structures in parallel, as well as interaction during the segmentation process and real-time visualization of the intermediate results. Promising results in terms of accuracy and speed on a real segmentation experiment demonstrate the usability of the system.
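As a loose illustration of what an explicit discrete deformable model step involves (a plain NumPy sketch, not the paper's CUDA implementation), the code below moves each surface vertex under an internal Laplacian smoothing force plus an external image force; on a GPU, this per-vertex update is what would be assigned to one thread per vertex. All names and constants are assumptions.

```python
# Simplified CPU-side sketch of one explicit integration step of a
# discrete deformable surface.
import numpy as np

def explicit_step(verts, neighbors, image_force, alpha=0.2, beta=0.8, dt=0.1):
    """verts: (N,3) vertex positions; neighbors: list of index lists;
    image_force: function mapping (N,3) positions to (N,3) external forces."""
    laplacian = np.array(
        [verts[nbrs].mean(axis=0) - v for v, nbrs in zip(verts, neighbors)]
    )
    external = image_force(verts)
    return verts + dt * (alpha * laplacian + beta * external)

# Example: three collinear vertices, each neighboring the other two, and a
# zero external force: the explicit step pulls them together smoothly.
tri = np.array([[0., 0., 0.], [1., 0., 0.], [2., 0., 0.]])
nbrs = [[1, 2], [0, 2], [0, 1]]
print(explicit_step(tri, nbrs, lambda v: np.zeros_like(v)))
```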
24.
Neural networks have been implemented in software, in hardware, and as hardware/software co-designs. This work proposes a hardware architecture for an artificial neural network (ANN) with a multilayer perceptron (MLP) topology. We exploit the parallelism of neural networks and allow on-the-fly changes to the number of inputs, the number of layers and the number of neurons per layer of the net. This reconfigurability allows any ANN application to be implemented on the proposed hardware. To reduce the processing time spent on arithmetic computation, each real number is represented as a fraction of two integers. In this way, the arithmetic is limited to integer operations, performed by fast combinational circuits; a simple state machine controls the sums and products of fractions. The sigmoid is used as the activation function in the proposed implementation. It is approximated by polynomials, whose evaluation requires only sums and products. A theorem is introduced and proven to support this arithmetic strategy for computing the activation function. Thus, the arithmetic circuitry used to implement the neuron weighted sum is reused to compute the sigmoid, and this resource sharing drastically decreases the total area of the system. After modeling and simulation for functional validation, the proposed architecture was synthesized on reconfigurable hardware. The results are promising.
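The sketch below mimics, in Python, the arithmetic idea described above: reals represented as integer fractions, so that a neuron's weighted sum and a polynomial sigmoid approximation need only integer additions and multiplications. The low-order polynomial coefficients are an illustrative Taylor fit around zero, not the polynomial the authors derive.

```python
# Fractions are (numerator, denominator) pairs of integers; every
# operation below reduces to integer sums and products.

def frac_mul(a, b):
    return (a[0] * b[0], a[1] * b[1])

def frac_add(a, b):
    return (a[0] * b[1] + b[0] * a[1], a[1] * b[1])

def weighted_sum(weights, inputs, bias):
    acc = bias
    for w, x in zip(weights, inputs):
        acc = frac_add(acc, frac_mul(w, x))
    return acc

def poly_sigmoid(z):
    # Illustrative approximation sigma(z) ~ 1/2 + z/4 - z^3/48 (small |z|),
    # evaluated with the same fraction adders/multipliers as the sum above.
    z3 = frac_mul(frac_mul(z, z), z)
    return frac_add(frac_add((1, 2), frac_mul(z, (1, 4))),
                    frac_mul(z3, (-1, 48)))

# Example neuron: weights 1/2 and -1/4, inputs 3/4 and 1/2, bias 1/8.
z = weighted_sum([(1, 2), (-1, 4)], [(3, 4), (1, 2)], (1, 8))
num, den = poly_sigmoid(z)
print(num / den)   # float only for display; hardware would keep the pair
```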
25.
Networks-on-chip (NoCs) are considered the next generation of communication infrastructure in embedded systems. In the platform-based design methodology, an application is implemented by a set of collaborating intellectual property (IP) blocks. Selecting the best-suited set of IPs and physically mapping them onto the NoC infrastructure so as to implement the application efficiently are two hard combinatorial problems that arise during the synthesis of a NoC-based embedded system. In this paper, we propose a preference-based multi-objective evolutionary methodology to perform the assignment and mapping stages. We use the well-known and efficient multi-objective evolutionary algorithms NSGA-II and microGA as kernels. Both the assignment and mapping optimizations are driven by minimizing the required silicon area and the imposed execution time of the application, while the decision maker's preference is expressed as a pre-specified value of the overall power consumption of the implementation.
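As a toy illustration (not the paper's NSGA-II/microGA implementation), the sketch below shows one way a decision maker's power preference could steer a bi-objective area/time search: candidate mappings exceeding the preferred power value are penalized before the usual Pareto-dominance test. All IP figures are invented.

```python
def objectives(mapping, ip_area, ip_time, ip_power, power_budget):
    area = sum(ip_area[ip] for ip in mapping)
    time = sum(ip_time[ip] for ip in mapping)      # crude serial estimate
    power = sum(ip_power[ip] for ip in mapping)
    penalty = max(0.0, power - power_budget)       # preference violation
    return (area + penalty, time + penalty)

def dominates(a, b):
    """Pareto dominance on (area, time): a dominates b if it is no worse
    in both objectives and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Example with two alternative IP assignments for the same three tasks.
ip_area = {"ipA": 4.0, "ipB": 2.5, "ipC": 3.0}
ip_time = {"ipA": 1.0, "ipB": 2.0, "ipC": 1.5}
ip_power = {"ipA": 3.0, "ipB": 1.0, "ipC": 2.0}
m1, m2 = ["ipA", "ipA", "ipC"], ["ipB", "ipB", "ipC"]
f1 = objectives(m1, ip_area, ip_time, ip_power, power_budget=6.0)
f2 = objectives(m2, ip_area, ip_time, ip_power, power_budget=6.0)
print(f1, f2, dominates(f2, f1))
```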
26.
We are developing an instrument, the Geometry Measuring Machine (GEMM), to measure the profile errors of aspheric and free-form optical surfaces with measurement uncertainties near 1 nm. With GEMM, an optical profile is reconstructed from local curvatures of the surface, measured at points across the optic's surface. We describe a prototype version of GEMM, its repeatability over time, a measurement registration practice, and the calibration practice needed to make nanometer-resolution comparisons with other instruments. Over three months, the repeatability of GEMM is 3 nm rms, based on the constancy of the measured profile of an elliptical mirror with a radius of curvature of about 83 m. As a demonstration of GEMM's capabilities for curvature measurement, profiles of that same mirror were measured with GEMM and with the NIST Moore M-48 coordinate measuring machine. Although the methods differ greatly, the two reconstructed profiles differ by only 22 nm peak-to-valley, or 6 nm rms. This comparability demonstrates that, with appropriate calibration, our GEMM prototype can measure complex-shaped optics.
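A simplified one-dimensional illustration of the reconstruction idea, not GEMM's actual algorithm: under the small-slope approximation the local curvature is roughly the second derivative of the profile, so two cumulative integrations of sampled curvatures recover the profile up to a tip/tilt term, which is removed by a line fit. The 83 m radius matches the mirror mentioned above; everything else is assumed.

```python
import numpy as np

def profile_from_curvature(x, kappa):
    # Trapezoidal cumulative integration: kappa -> slope -> height.
    slope = np.concatenate(([0.0], np.cumsum(0.5 * (kappa[1:] + kappa[:-1]) * np.diff(x))))
    z = np.concatenate(([0.0], np.cumsum(0.5 * (slope[1:] + slope[:-1]) * np.diff(x))))
    z -= np.polyval(np.polyfit(x, z, 1), x)      # remove piston and tilt
    return z

# Example: constant curvature 1/R (R ~ 83 m, as for the mirror above)
# reconstructs a near-parabolic profile; the print shows its sagitta.
x = np.linspace(-0.05, 0.05, 201)                # 100 mm aperture, metres
z = profile_from_curvature(x, np.full_like(x, 1 / 83.0))
print(z.max() - z.min())                          # ~ x_max^2 / (2 R)
```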
27.
Two-layer schemes provide an effective method for encoding high dynamic range images with backward compatibility. The first layer is the tone-mapped low dynamic range version of the original image, used for visualization. The residual information that cannot be preserved in the first layer is stored in the second layer, which is itself generally encoded as an image of fixed bit-depth; any further detail that cannot be preserved in the second layer is discarded. In this paper, we present a nonlinear quantization algorithm that can significantly increase the amount of detail preserved in the second layer and therefore improve the encoding efficiency. The proposed technique can be incorporated into any existing two-layer encoding method and leads to a significant improvement in its performance.
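The authors' specific quantizer is not reproduced here; the sketch below only illustrates the general idea of nonlinear quantization of a residual layer, using a mu-law-style companding curve that spends more of a fixed bit budget on small residual magnitudes. The bit depth and mu value are assumptions.

```python
import numpy as np

def quantize_residual(res, res_max, bits=8, mu=255.0):
    v = np.clip(res / res_max, -1.0, 1.0)
    c = np.sign(v) * np.log1p(mu * np.abs(v)) / np.log1p(mu)    # compand
    levels = 2 ** bits - 1
    return np.round((c + 1.0) / 2.0 * levels).astype(np.uint16)

def dequantize_residual(q, res_max, bits=8, mu=255.0):
    c = q / (2 ** bits - 1) * 2.0 - 1.0
    v = np.sign(c) * (np.expm1(np.abs(c) * np.log1p(mu)) / mu)  # expand
    return v * res_max

# Small residuals land on distinct codes; large ones are coarser.
res = np.array([-0.8, -0.02, 0.0, 0.01, 0.5])
q = quantize_residual(res, res_max=1.0)
print(q, dequantize_residual(q, res_max=1.0))
```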
29.
Most electronic waste (e-waste) ends up in landfills, while some is recycled. A major site for e-waste recycling in Palestine is the village of Idhna in the Hebron District, and most of this waste originates from Israel. The objective of this study was to evaluate the effects of e-waste exposure on DNA damage and chromosome breaks in humans. The test sample comprised 46 non-smoking individuals with direct exposure to e-waste, either employed in the recycling workshops or resident in Idhna. Genotoxicity data were compared with a control sample of sixteen unexposed individuals from Bethlehem and Al-Aizariya (Bethany). DNA damage was evaluated using the Comet assay, while chromosome aberrations were assessed using conventional cytogenetic techniques. We observed an average of 4.83 aberrations per cell per subject in the exposed samples, compared with 0.75 in controls. Chromosome aberration frequencies differed significantly between exposed and control samples for total aberrations, for chromatid and chromosome breaks, and for ring formation, but not for dicentrics and tetraploidy. The Comet assay likewise showed a significant difference in DNA damage between exposed and control samples (p < 0.05). We therefore recommend measures to mitigate the health impact of e-waste recycling.
30.
We present a hybrid model for content extraction from HTML documents. The model operates on the Document Object Model (DOM) tree of the corresponding HTML document. It evaluates each tree node and associated statistical features, such as link density and the distribution of text across the node, to predict the node's significance with respect to the overall content of the document. Once the significance of the nodes is determined, formatting characteristics such as fonts, styles and node positions are evaluated to identify nodes whose formatting is similar to that of the significant nodes. The proposed hybrid model is thus derived from two different models, one based on statistical features and the other on formatting characteristics, and achieves the best accuracy. We validate the model through experiments conducted on standard data sets; the results reveal that the proposed model outperforms existing content extraction models. We present a browser-based implementation of the proposed model as a proof of concept and compare its implementation strategy with various state-of-the-art implementations. We also discuss various applications of the proposed model, with special emphasis on open-source intelligence.
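A toy scorer for the statistical half of the model described above (link density and text distribution), not the authors' hybrid model; it uses BeautifulSoup for DOM parsing purely as a convenience, which the paper does not prescribe, and the length threshold is illustrative.

```python
# Score each DOM node by how much text it contains and how little of that
# text sits inside links, so link-heavy navigation blocks rank low and
# dense article text ranks high.
from bs4 import BeautifulSoup

def node_scores(html, min_text=30):
    soup = BeautifulSoup(html, "html.parser")
    scores = []
    for node in soup.find_all(["div", "p", "td", "article", "section"]):
        text = node.get_text(" ", strip=True)
        if len(text) < min_text:
            continue
        link_text = sum(len(a.get_text(strip=True)) for a in node.find_all("a"))
        link_density = link_text / max(len(text), 1)
        scores.append((len(text) * (1.0 - link_density), node.name, text[:40]))
    return sorted(scores, reverse=True)

html = """<div><p>Main article body with a fairly long paragraph of text
that should be kept as content.</p></div>
<div><a href="/">Home</a> <a href="/news">News</a> <a href="/about">About</a></div>"""
for score, tag, preview in node_scores(html):
    print(f"{score:7.1f}  <{tag}>  {preview!r}")
```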