991.
Recent years have witnessed extensive studies of graph classification due to the rapid increase in applications involving structural data and complex relationships. To support graph classification, all existing methods require that training graphs be relevant to (or belong to) the target class, but cannot integrate graphs irrelevant to the class of interest into the learning process. In this paper, we study a new universum graph classification framework which leverages additional “non-example” graphs to help improve graph classification accuracy. We argue that although universum graphs do not belong to the target class, they may contain meaningful structure patterns that help enrich the feature space for graph representation and classification. To support universum graph classification, we propose a mathematical programming algorithm, ugBoost, which integrates discriminative subgraph selection and margin maximization into a unified framework to fully exploit the universum. Because informative subgraph exploration in a universum setting requires searching a large space, we derive an upper-bound discriminative score for each subgraph and employ a branch-and-bound scheme to prune the search space. Using the explored subgraphs, our graph classification model aims to maximize the margin between positive and negative graphs while simultaneously minimizing the loss on the universum graph examples. The subgraph exploration and the learning are integrated and performed iteratively so that each benefits the other. Experimental results and comparisons on real-world datasets demonstrate the effectiveness of our algorithm.
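The abstract does not spell out the discriminative score or its upper bound; the sketch below only illustrates the pruning pattern it describes — a branch-and-bound walk over a subgraph-enumeration tree in which an entire branch is skipped whenever an upper bound on the score of its extensions cannot beat the best subgraph found so far. The function names and scoring callbacks are hypothetical, not the authors' ugBoost.

```python
# Hypothetical sketch of upper-bound pruning in a subgraph search tree.
# `score` and `upper_bound` stand in for ugBoost's (unspecified)
# discriminative score; the pruning logic itself is the point.

def best_subgraph(root, children, score, upper_bound):
    """Depth-first branch-and-bound over a subgraph enumeration tree.

    root        -- the initial (e.g., single-edge) subgraph pattern
    children    -- function: pattern -> iterable of extended patterns
    score       -- discriminative score of a pattern (to be maximized)
    upper_bound -- bound on the score of any super-pattern of the pattern
    """
    best, best_score = root, score(root)
    stack = [root]
    while stack:
        g = stack.pop()
        # Prune: no extension of g can beat the incumbent best score.
        if upper_bound(g) <= best_score:
            continue
        for child in children(g):
            s = score(child)
            if s > best_score:
                best, best_score = child, s
            stack.append(child)
    return best, best_score
```

In the setting described above, the score and bound would be computed from the occurrences of a candidate subgraph in the positive, negative, and universum training graphs.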
992.
Dictionaries are very useful objects for data analysis, as they enable a compact representation of large sets of objects through the combination of atoms. Dictionary-based techniques have also particularly benefited from recent advances in machine learning, which allow data-driven algorithms to take advantage of the redundancy in the input dataset and discover relations between objects without human supervision or hard-coded rules. Despite the success of dictionary-based techniques on a wide range of tasks in geometric modeling and geometry processing, the literature lacks a principled state-of-the-art review of current knowledge in this field. To fill this gap, this survey provides an overview of data-driven dictionary-based methods in geometric modeling. We structure our discussion by application domain: surface reconstruction, compression, and synthesis. Contrary to previous surveys, we place special emphasis on dictionary-based methods suitable for 3D data synthesis, with applications in geometric modeling and design. Our ultimate goal is to highlight the fact that these techniques can be used to combine the data-driven paradigm with design intent to synthesize new plausible objects with minimal human intervention. This is the main motivation for restricting the scope of the present survey to techniques that handle point clouds and meshes, make use of dictionaries whose definition depends on the input data, and enable shape reconstruction or synthesis through the combination of atoms.
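As a hedged illustration of the core idea the survey covers — representing objects as sparse combinations of learned atoms — the sketch below learns an overcomplete dictionary over toy “shape patches” and reconstructs them from a few atoms each. The toy data, sizes, and the choice of scikit-learn's DictionaryLearning are assumptions for illustration, not methods endorsed by the survey.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# Toy data: 200 "shape patches", each a flattened 16x3 block of 3D points.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 48))

# Learn an overcomplete dictionary and a sparse code for each patch.
dl = DictionaryLearning(n_components=64, max_iter=50,
                        transform_algorithm="omp",
                        transform_n_nonzero_coefs=5, random_state=0)
codes = dl.fit_transform(X)   # sparse coefficients, shape (200, 64)
atoms = dl.components_        # dictionary atoms,   shape (64, 48)

# Reconstruction = combination of a few atoms per patch.
X_hat = codes @ atoms
print("mean reconstruction error:", np.mean((X - X_hat) ** 2))
```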
993.
To describe the interfacial dynamics between two phases using the phase-field method, the interfacial region needs to be close enough to a sharp interface to reproduce the correct physics. Due to the high gradients of the solution within the interfacial region and the consequent high computational cost, the use of the phase-field method has been limited to small-scale problems whose characteristic length is comparable to the interfacial thickness. By using a finer mesh at the interface and a coarser mesh in the rest of the computational domain, the phase-field method can handle larger-scale problems with realistic interface thicknesses. In this work, a C1-continuous h-adaptive mesh refinement technique combined with the least-squares spectral element method is presented. It is applied to the Navier–Stokes–Cahn–Hilliard (NSCH) system and the isothermal Navier–Stokes–Korteweg (NSK) system. Hermite polynomials are used to provide global differentiability of the approximated solution, and a space–time coupled formulation and the element-by-element technique are implemented. Two refinement strategies, based on the solution gradient and on local error estimators, are suggested and compared in two numerical examples.
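For reference, one common form of the NSCH system is sketched below; the paper's exact non-dimensionalization, free-energy density f, and coupling terms may differ. The small parameter ε sets the interface thickness, which is why resolving the interfacial region demands the fine local mesh discussed above.

```latex
% One common form of the NSCH system (constant density and viscosity
% assumed); the paper's exact scaling and coupling may differ.
\begin{align*}
  \partial_t\phi + \mathbf{u}\cdot\nabla\phi
    &= \nabla\cdot\bigl(M\,\nabla\mu\bigr),
  \qquad \mu = f'(\phi) - \epsilon^{2}\Delta\phi,\\
  \rho\bigl(\partial_t\mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u}\bigr)
    &= -\nabla p
       + \nabla\cdot\bigl(\eta\,(\nabla\mathbf{u} + \nabla\mathbf{u}^{\mathsf T})\bigr)
       + \mu\,\nabla\phi,
  \qquad \nabla\cdot\mathbf{u} = 0,
\end{align*}
```

Here φ is the phase field, μ the chemical potential, M the mobility, f a double-well free-energy density, and ε the interface-thickness parameter.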
994.
Reservoir computing is a bio-inspired computing paradigm for processing time-dependent signals. The performance of its analogue implementations matches that of digital algorithms on a series of benchmark tasks. Their potential can be further increased by feeding the output signal back into the reservoir, which would allow the algorithm to be applied to time-series generation. This requires, in principle, implementing a sufficiently fast readout layer for real-time output computation. Here we achieve this with a digital output layer driven by an FPGA chip. We demonstrate the first opto-electronic reservoir computer with output feedback and test it on two examples of time-series generation tasks: frequency generation and random pattern generation. We obtain very good results on the first task, similar to idealised numerical simulations. The performance on the second, however, suffers from experimental noise. We illustrate this point with a detailed investigation of the consequences of noise on the performance of a physical reservoir computer with output feedback. Our work thus opens new possible applications for analogue reservoir computing and brings new insights into the impact of noise on output feedback.
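The experiment itself is opto-electronic with an FPGA-driven readout; as a purely conceptual, software analogue, the sketch below trains a small echo state network by ridge regression under teacher forcing and then runs it autonomously with output feedback to generate a sine wave (the frequency-generation task). All sizes and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T_train, T_free, washout = 200, 2000, 500, 100

# Random reservoir, scaled to spectral radius < 1 (echo state property).
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_fb = rng.uniform(-1, 1, size=N)          # output-feedback weights

# Target time series to generate: a sine wave.
y = np.sin(2 * np.pi * np.arange(T_train + 1) / 50)

# Teacher forcing: drive the reservoir with the true output signal.
x = np.zeros(N)
states = np.zeros((T_train, N))
for t in range(T_train):
    x = np.tanh(W @ x + W_fb * y[t])
    states[t] = x

# Ridge-regression readout mapping the state to the next output value.
ridge = 1e-6
S = states[washout:]
A = S.T @ S + ridge * np.eye(N)
W_out = np.linalg.solve(A, S.T @ y[washout + 1:T_train + 1])

# Free-running generation: feed the readout back into the reservoir.
outputs = []
y_fb = y[T_train]
for _ in range(T_free):
    x = np.tanh(W @ x + W_fb * y_fb)
    y_fb = W_out @ x
    outputs.append(y_fb)

print("generated samples:", np.round(outputs[:5], 3))
```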
995.
996.
997.
A 2-step sinter/anneal treatment has been reported previously for forming porous CPP as a biodegradable bone substitute [9]. The heat treatment used during this 2-step anneal strongly affected the rate of CPP degradation in vitro. In the present study, X-ray diffraction and 31P solid-state nuclear magnetic resonance were used to determine the phases formed by the different heat-treating processes. The effect of in vitro degradation (in PBS at 37 °C, pH 7.1 or 4.5) was also studied. During CPP preparation, β-CPP and γ-CPP were identified in powders formed from a calcium phosphate monobasic monohydrate precursor after an initial calcining treatment (10 h at 500 °C). Melting of this CPP powder (at 1100 °C), followed by quenching and grinding, formed amorphous CPP powders. Annealing the powders at 585 °C (Step-1) resulted in rapid sintering to form amorphous porous CPP. Continued annealing to 650 °C resulted in crystallization into a multi-phase structure of primarily β-CPP plus lesser amounts of α-CPP, calcium ultra-phosphates and retained amorphous CPP. Annealing above 720 °C and up to 950 °C transformed this into the β-CPP phase. In vitro degradation of the 585 °C (Step-1 only) and 650 °C Step-2 annealed multi-phase samples occurred significantly faster than that of the β-CPP samples formed by Step-2 annealing at or above 720 °C. This faster degradation was attributable to preferential degradation of the thermodynamically less stable phases formed in samples annealed at 650 °C (i.e. the α-phase, ultra-phosphates and amorphous CPP). Degradation in lower-pH solutions significantly increased the degradation rates of the 585 and 650 °C annealed samples but had no significant effect on the β-CPP samples.
998.
This study evaluates the potential of object-based image analysis in combination with supervised machine learning to identify urban structure type patterns from Landsat Thematic Mapper (TM) images. The main aim is to assess the influence of several critical choices commonly made during the training stage of a learning machine on classification performance, and to give recommendations for classifier-dependent intelligent training. Particular emphasis is given to assessing the influence of the size and class distribution of the training data, the approach to training-data sampling (user-guided or random), and the type of training samples (squares or segments) on the classification performance of a Support Vector Machine (SVM). Different feature selection algorithms are compared, and segmentation and classifier parameters are dynamically tuned for the specific image scene, classification task, and training data. The performance of the classifier is measured against a set of reference data sets from manual image interpretation and is furthermore compared, on the basis of landscape metrics, to a very high resolution reference classification derived from light detection and ranging (lidar) measurements. The study highlights the importance of a careful design of the training stage and of dynamically tuned classifier parameters, especially when dealing with noisy data and small training data sets. For the given experimental set-up, the study concludes that, given an optimized feature space and classifier parameters, training an SVM with segment-shaped samples that were sampled in a guided manner and are balanced between the classes provided the best classification results. If square-shaped samples are used, random sampling provided better results than guided selection. Equally balanced sample distributions outperformed unbalanced training sets.
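A minimal scikit-learn sketch of the training recipe the study points toward — class-balanced (or class-weighted) training samples and SVM parameters tuned per scene by cross-validated grid search — is given below. The random features stand in for per-segment attributes from the object-based analysis step; they and the parameter ranges are assumptions for illustration.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in for per-segment features (e.g., spectral/texture statistics)
# and urban-structure-type labels; real features would come from the
# object-based image analysis step.
X = rng.normal(size=(600, 12))
y = rng.integers(0, 4, size=600)          # four structure types

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Dynamic tuning of the SVM parameters for this scene / training set,
# with class weighting as a fallback when the sample is not balanced.
grid = GridSearchCV(
    SVC(kernel="rbf", class_weight="balanced"),
    param_grid={"C": [1, 10, 100], "gamma": ["scale", 0.1, 0.01]},
    cv=5,
)
grid.fit(X_tr, y_tr)
print("best params:", grid.best_params_)
print("held-out accuracy:", grid.score(X_te, y_te))
```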
999.
Understanding how topology shapes the dynamics of excitable networks is one of the fundamental problems in network science as applied to computational systems biology and neuroscience. Recent advances in the field have uncovered the influential role of two macroscopic topological structures, namely hubs and modules. We propose a visual analytics approach that allows for a systematic exploration of the role of these macroscopic topological structures in the dynamics of excitable networks. Dynamical patterns are discovered using two dynamical features: excitation ratio and co-activation. Our approach is based on the interactive analysis of the correlation between topological and dynamical features using coordinated views. We designed suitable visual encodings for both the topological and the dynamical features: a degree map and an adjacency-matrix visualization allow for interaction with hubs and modules, respectively, while a barycentric-coordinates layout and a multi-dimensional scaling approach allow for the analysis of excitation ratio and co-activation, respectively. We demonstrate how the interplay of the visual encodings allows us to quickly reconstruct recent findings in the field within an interactive analysis and even to discover new patterns. We apply our approach to network models with commonly investigated topologies as well as to structural networks representing the connectomes of different species. We evaluate our approach with domain experts in terms of its intuitiveness, expressiveness, and usefulness.
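The abstract does not define the two dynamical features precisely; under one plausible reading — excitation ratio as the fraction of time steps a node spends excited, co-activation as the pairwise correlation of binary excitation time series — they could be computed as in the sketch below (toy data, illustrative only).

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy binary activity matrix: activity[t, i] == 1 if node i is excited
# at time step t (stand-in for a simulated excitable-network run).
T, n_nodes = 1000, 50
activity = (rng.random((T, n_nodes)) < 0.2).astype(float)

# Excitation ratio: fraction of time each node spends in the excited state.
excitation_ratio = activity.mean(axis=0)       # shape (n_nodes,)

# Co-activation: pairwise correlation of the nodes' excitation time series.
coactivation = np.corrcoef(activity.T)         # shape (n_nodes, n_nodes)

print("mean excitation ratio:", excitation_ratio.mean())
print("co-activation matrix shape:", coactivation.shape)
```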
1000.
Declarative systems aim at solving tasks by running inference engines on a specification, to free their users from having to specify how a task should be tackled. In order to provide such functionality, declarative systems themselves apply complex reasoning techniques, and, as a consequence, the development of such systems can be laborious work. In this paper, we demonstrate that the declarative approach can be applied to the development of such systems, by tackling the tasks solved inside a declarative system declaratively. Doing so often requires a meta-level representation of the specifications involved. Furthermore, using the language of the system itself for the meta-level representation opens the door to bootstrapping: an inference engine can be improved using the very inference it performs. One such declarative system is the IDP knowledge base system, based on the language \(\rm FO(\cdot)^{\rm IDP}\), a rich extension of first-order logic. In this paper, we discuss how \(\rm FO(\cdot)^{\rm IDP}\) can support meta-level representations in general and which language constructs make those representations even more natural. Afterwards, we show how meta-\(\rm FO(\cdot)^{\rm IDP}\) can be applied to bootstrap its model expansion inference engine. We discuss the advantages of this approach: the resulting program is easier to understand, easier to maintain, and more flexible.
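No FO(·)IDP code is reproduced here; purely as an illustration of model expansion over a specification that is itself represented as data, the Python sketch below extends a partial interpretation to a total model of a toy propositional theory. The representation and all names are hypothetical and far simpler than IDP's.

```python
from itertools import product

# A toy specification represented as data: a vocabulary of propositional
# symbols and a theory given as constraint functions over an interpretation.
# This is NOT FO(.)IDP syntax, only an illustration of treating the
# specification itself as an object the engine reasons about.
vocabulary = ["p", "q", "r"]
theory = [
    lambda m: m["p"] or m["q"],        # p | q
    lambda m: not m["p"] or m["r"],    # p => r
]

def model_expansion(vocabulary, theory, partial):
    """Extend a partial interpretation to a total model of the theory."""
    free = [s for s in vocabulary if s not in partial]
    for values in product([False, True], repeat=len(free)):
        model = dict(partial, **dict(zip(free, values)))
        if all(constraint(model) for constraint in theory):
            return model
    return None

print(model_expansion(vocabulary, theory, {"p": True}))
# -> {'p': True, 'q': False, 'r': True}
```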