941.
Dynamic software product lines (DSPLs) provide elaborated design and implementation principles for engineering highly configurable, runtime-adaptive systems in a sustainable and feature-oriented way. To this end, DSPLs extend classical software product lines (SPLs) with the notions of (1) staged (pre-)configurations with dedicated binding times for each individual feature, and (2) continuous runtime reconfiguration of dynamic features throughout the entire product life cycle. Especially in the context of safety- and mission-critical systems, the design of reliable DSPLs requires capabilities for accurately specifying and validating arbitrarily complex constraints among configuration parameters and/or the respective reconfiguration options. Whereas classical SPL domain analysis is usually based on Boolean constraint solving, DSPL validation additionally requires checking temporal properties of reconfiguration processes. In this article, we present a comprehensive approach for modeling and automatically verifying essential validity properties of staged reconfiguration processes with complex binding-time constraints during DSPL domain engineering. The novel modeling concepts introduced are motivated by (re-)configuration constraints arising in a real-world industrial case study from the automation engineering domain, which are not properly expressible or analyzable with state-of-the-art SPL domain modeling approaches. We present a prototypical tool implementation based on the model checker SPIN and report evaluation results from our industrial case study, demonstrating the applicability of the approach.
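The staged-binding idea above can be illustrated with a toy validity check: a staged (partial) configuration is "safe" if at least one valid full binding is still reachable from it. This is a minimal Python sketch, not the paper's SPIN/Promela models; the features, the constraint, and the stages are invented for illustration.

```python
from itertools import product

# Toy DSPL model: three Boolean features, bound in stages.
# A cross-tree constraint must hold in every fully bound configuration;
# a staged configuration is "safe" if a valid completion is still reachable.
FEATURES = ["net", "safety", "logging"]

def valid(cfg):
    # Example cross-tree constraint: "safety" requires "net".
    return not cfg["safety"] or cfg["net"]

def completions(partial):
    """Enumerate all full configurations extending a partial (staged) one."""
    unbound = [f for f in FEATURES if f not in partial]
    for bits in product([False, True], repeat=len(unbound)):
        full = dict(partial)
        full.update(zip(unbound, bits))
        yield full

def stage_safe(partial):
    """True if at least one valid full binding remains reachable."""
    return any(valid(c) for c in completions(partial))

# Binding "safety" early is still safe while "net" remains unbound,
# but binding "net" to False afterwards makes the stage unsatisfiable.
print(stage_safe({"safety": True}))
print(stage_safe({"safety": True, "net": False}))
```

A real DSPL analysis would additionally check temporal properties over reconfiguration paths, which is where an explicit-state model checker such as SPIN comes in.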
942.
This study presents the development of post-processing steps for microfluidics fabricated with selective laser etching (SLE) in fused silica. As a first step, SLE surfaces, including the inner walls of microfluidic channels, can be smoothed by laser polishing. In addition, two-photon polymerization (2PP) can be used to manufacture polymer microstructures and microcomponents inside the microfluidic channels. Laser polishing reduces surface roughness by remelting: the laser radiation heats the glass surface above its softening temperature, and surface tension redistributes the material. With laser polishing, the RMS roughness of SLE surfaces can be reduced from 12 µm down to 3 nm for spatial wavelengths λ < 400 µm. Thanks to laser polishing, fluidic processes as well as particles in microchannels can be observed by microscopy. A manufactured microfluidic device demonstrates that SLE and laser polishing can be combined successfully. By developing 2PP processing in microchannels, we aim to enable new applications with sophisticated 3D structures inside the microchannel. With 2PP, lenses with a diameter of 50 µm are processed with an RMS form accuracy of 70 nm. In addition, this study demonstrates that 3D structures can be fabricated inside microchannels manufactured with SLE. By combining SLE, laser polishing and 2PP, this research pioneers new applications for microfluidics made of fused silica.
943.
Suppose a user located at a certain vertex in a road network wants to plan a route using a wayfinding map. The user's exact destination may be irrelevant for planning most of the route, because many destinations will be equivalent in the sense that they allow the user to choose almost the same paths. We propose a method to find such groups of destinations automatically and to contract the resulting clusters in a detailed map to achieve a simplified visualization. We model the problem as a clustering problem in rooted, edge‐weighted trees. Two vertices are allowed to be in the same cluster if and only if they share at least a given fraction of their path to the root. We analyze some properties of these clusterings and give a linear‐time algorithm to compute the minimum‐cardinality clustering. This algorithm may have various other applications in network visualization and graph drawing, but in this paper we apply it specifically to focus‐and‐context map generalization. When contracting shortest‐path trees in a geographic network, the computed clustering additionally provides a constant‐factor bound on the detour that results from routing using the generalized network instead of the full network. This is a desirable property for wayfinding maps.
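The compatibility condition behind the clustering can be stated directly in code: two vertices may share a cluster iff the weighted depth of their lowest common ancestor is at least a fraction alpha of each vertex's own root-path length. The following is a naive pairwise check over an invented toy tree, not the paper's linear-time minimum-cardinality algorithm.

```python
# Toy rooted, edge-weighted tree as parent pointers: child -> (parent, weight).
PARENT = {"a": ("root", 2.0), "b": ("a", 1.0), "c": ("a", 3.0), "d": ("root", 4.0)}

def depth(v):
    """Weighted path length from v up to the root."""
    d = 0.0
    while v != "root":
        p, w = PARENT[v]
        d += w
        v = p
    return d

def ancestors(v):
    path = [v]
    while v != "root":
        v = PARENT[v][0]
        path.append(v)
    return path

def shared_depth(u, v):
    """Weighted length of the common root path = depth of the LCA."""
    au = set(ancestors(u))
    x = v
    while x not in au:
        x = PARENT[x][0]
    return depth(x)

def compatible(u, v, alpha):
    """May u and v share a cluster for fraction alpha?"""
    s = shared_depth(u, v)
    return s >= alpha * depth(u) and s >= alpha * depth(v)

# b and c share the path root-a (length 2) of their depths 3 and 5:
print(compatible("b", "c", 0.4))  # 2 >= 1.2 and 2 >= 2.0 -> True
print(compatible("b", "c", 0.5))  # 2 >= 2.5 fails       -> False
```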
944.
Given a bipartite graph G = (V_c, V_t, E) and a nonnegative integer k, the NP-complete Minimum-Flip Consensus Tree problem asks whether G can be transformed, using up to k edge insertions and deletions, into a graph that does not contain an induced P_5 with its first vertex in V_t (a so-called M-graph or Σ-graph). This problem plays an important role in computational phylogenetics, with V_c standing for the characters and V_t for the taxa. Chen et al. (IEEE/ACM Trans. Comput. Biol. Bioinform. 3:165–173, 2006) showed that Minimum-Flip Consensus Tree is NP-complete and presented a parameterized algorithm with running time O(6^k · |V_t| · |V_c|). Subsequently, Böcker et al. (ACM Trans. Algorithms 8:7:1–7:17, 2012) presented a refined search tree algorithm with running time O(4.42^k · (|V_t| + |V_c|) + |V_t| · |V_c|). We continue the study of Minimum-Flip Consensus Tree parameterized by k. Our main contributions are polynomial-time executable data reduction rules yielding a problem kernel with O(k^3) vertices. In addition, we present an improved search tree algorithm with running time O(3.68^k · |V_c|^2 · |V_t|).
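For intuition, the forbidden pattern, an induced P_5 starting in V_t, i.e. a path t1–c1–t2–c2–t3 with the two "diagonal" edges absent, can be tested by brute force on tiny instances. The sketch below uses an invented toy edge set and is only a pattern check, not the paper's parameterized algorithm.

```python
from itertools import permutations

def has_forbidden_p5(Vc, Vt, E):
    """Search for an induced path t1-c1-t2-c2-t3 (first vertex in Vt),
    the forbidden 'M-graph' pattern. Edges are (character, taxon) pairs.
    Brute force over 5-tuples, fine only for tiny instances."""
    for t1, t2, t3 in permutations(Vt, 3):
        for c1, c2 in permutations(Vc, 2):
            edges_needed = {(c1, t1), (c1, t2), (c2, t2), (c2, t3)}
            non_edges = {(c2, t1), (c1, t3)}  # must be absent ("induced")
            if edges_needed <= E and not (non_edges & E):
                return True
    return False

# A 5-vertex path itself contains the pattern...
E = {("c1", "t1"), ("c1", "t2"), ("c2", "t2"), ("c2", "t3")}
print(has_forbidden_p5({"c1", "c2"}, {"t1", "t2", "t3"}, E))  # True
# ...while the complete bipartite graph on the same vertices does not.
full = {(c, t) for c in ("c1", "c2") for t in ("t1", "t2", "t3")}
print(has_forbidden_p5({"c1", "c2"}, {"t1", "t2", "t3"}, full))  # False
```

A flip-based solver would branch on inserting or deleting the six candidate edge slots of each found pattern, which is where the exponential k-dependence of the search tree algorithms comes from.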
945.
In engineering, it is often desirable to find a subset of the set of feasible designs, a solution space, rather than a single solution. A feasible design is one that does not violate any constraints and has a performance value below a desired threshold; the performance measure, threshold value and constraints depend on the specific problem. To evaluate a design with respect to feasibility, a model is required that maps the design parameters from the input space onto the performance measures in the output space. In state-of-the-art methodology, iterative sampling is used to estimate the frontier between feasible and infeasible regions in the input space. By evaluating each sample point with respect to feasibility, areas containing a large fraction of feasible designs are identified and subsequently resampled. The largest hypercube containing only feasible designs is sought, because it yields independent intervals for each design parameter. Estimating this hypercube with sufficient precision may require a large number of model evaluations, depending on the dimensionality of the input space. In this paper, a novel approach is proposed for modeling the inequality constraints and an objective function such that a linear formulation can be used, independently of the dimensionality of the problem. The exact solution for the largest feasible hypercube can thereby be calculated at much lower cost than with stochastic sampling, as the problem reduces to solving a linear system of equations. The method is applied to structural design with respect to the US-NCAP frontal impact. The obtained solution is compared to numerical solutions of an identical system computed using reduced-order models and stochastic methods. This example shows the high potential of the new direct method for solution space computation.
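The linear-formulation idea can be illustrated for the special case of purely linear inequality constraints A·x ≤ b: for a cube with center c and half-width h, the worst corner under a constraint a·x ≤ b contributes a·c + h·‖a‖₁, which is linear in (c, h), so the largest inscribed cube is a linear program. The 2-D polytope below is invented for illustration, and SciPy's `linprog` stands in for a generic LP solver; this is the general principle, not the paper's specific method.

```python
import numpy as np
from scipy.optimize import linprog

# Largest axis-aligned cube inside the polytope {x : A x <= b}.
# Toy polytope: the square [-1, 1]^2 cut by x + y <= 1.
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0], [1.0, 1.0]])
b = np.array([1.0, 1.0, 1.0, 1.0, 1.0])

n = A.shape[1]
# Decision vector z = (c_1..c_n, h); maximize h  <=>  minimize -h.
c_obj = np.zeros(n + 1)
c_obj[-1] = -1.0
# Worst-corner condition per constraint: A c + ||a||_1 * h <= b.
A_ub = np.hstack([A, np.abs(A).sum(axis=1, keepdims=True)])
res = linprog(c_obj, A_ub=A_ub, b_ub=b,
              bounds=[(None, None)] * n + [(0, None)])

center, half = res.x[:n], res.x[n]
print(center, 2 * half)  # optimal center (-0.25, -0.25), side length 1.5
```

One LP solve replaces the many model evaluations that iterative feasibility sampling would need, which is the cost argument made above.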
946.
947.
It is widely acknowledged that the safe and efficient supervisory control of complex dynamic systems requires that human operators are capable of checking the state of the controlled system against given performance criteria. In addition, it is important to consider how changes in the state of the controlled system and its environment influence the control situation: the possibility of bringing about system state changes by performing control actions on the controlled system (the control possibilities), and the requirements for bringing about appropriate state changes in the controlled system (the control requirements). This paper addresses fundamental problems related to the design of human-machine systems that can track changes in control situations. Based on a theoretical analysis of control actions, a generic structure is proposed for control situations. The issues of how to specify control situations and how to derive changes in the control situation, based on a representation of the work domain, are also addressed, using examples.
Johannes Petersen
948.
A significant share of today's Internet traffic is generated by network gaming. This kind of traffic is interesting with regard to its market potential as well as its real-time requirements on the network. To account for game traffic in network dimensioning, traffic models are required that can generate a characteristic load for analytical or simulative performance evaluation of networks. In this paper, the fast-action multiplayer game Counter Strike is evaluated based on one month of Internet traffic traces, and traffic models for client and server are presented. The paper concludes with remarks on QoS metrics for an adequate assessment of performance evaluation results.
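The generic workflow behind such traffic models, fit a parametric model to measured packet inter-arrival times, then sample from it to generate load for simulation, can be sketched as follows. The data here is synthetic and the exponential model is chosen purely for illustration; the paper derives its client and server models from real Counter Strike traces.

```python
import random
import statistics

# Illustrative only: pretend 'trace' holds measured inter-arrival times (s).
random.seed(42)
trace = [random.expovariate(1 / 0.03) for _ in range(10_000)]  # ~30 ms mean

# Fit: for an exponential model the only parameter is the mean.
mean_iat = statistics.fmean(trace)

# Generate: sample synthetic inter-arrival times from the fitted model
# to drive an analytical or simulative network evaluation.
def model():
    return random.expovariate(1 / mean_iat)

synthetic = [model() for _ in range(10_000)]
print(f"fitted mean inter-arrival time: {mean_iat * 1000:.1f} ms")
```

Real game traffic is usually burstier than exponential, so published models often use empirical or heavy-tailed distributions instead; the fit-then-sample structure stays the same.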
949.
This work presents the combination and acceleration of PCR and fluorescent labelling within a disposable microfluidic chip. The utilised geometry consists of a spiral meander with 40 turns, representing a cyclic-flow PCR system. The reaction chemistry includes Cy3-conjugated primers, leading to a one-step process accelerated by cyclic-flow PCR. DNA of three different bacterial samples (Staphylococcus aureus, Escherichia coli and Pseudomonas aeruginosa) was processed and successfully amplified and labelled, with detection limits down to 10² cells per reaction. The specificity of species identification was comparable to that of separate PCR and labelling. The overall processing time was decreased from 6 to 1.5 h. We showed that a disposable polycarbonate chip, fabricated by injection moulding, is suitable for significantly accelerating DNA microarray assays. The reaction output enabled high-sensitivity bacterial identification in a short time, which is crucial for early and targeted therapy against infectious diseases.
950.
The most commonly used semiquantitative analysis of protein expression still relies on protein separation by denaturing SDS-PAGE with subsequent Western blotting and quantification of the optical densities (ODs) of bands visualized with specific antibodies. However, many questions regarding this procedure are usually ignored, although they still need answering: Does the isolation or separation procedure harm the integrity of the protein of interest or affect its modifications (e.g., phosphorylation)? Does denaturation reduce binding of the antibodies used for detection? Should denaturation be performed, or should a native gel be run? How can artificial degradation or aggregation be distinguished from biologically relevant forms? If the antibody detects multiple bands (which is not uncommon), which one(s) should be quantified, and why? Which loading control protein should be chosen, is it really “housekeeping”, and how can this be verified? Is the image acquisition system linear, and does it offer a sufficient dynamic range? How should background staining be accounted and controlled for? This article addresses these questions and raises the reader's awareness of possible Western blot alternatives, in an attempt to minimize pitfalls that may loom anywhere from protein isolation to acquisition of the final quantitative data.
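The basic arithmetic the quantification questions revolve around, dividing the target-band OD by the loading-control OD before comparing conditions, looks like this in a toy calculation (all sample names and OD values are invented):

```python
# Toy semiquantitative Western blot normalization.
# Target-band and loading-control ODs per sample (invented values).
target_od = {"control": 1.20, "treated": 2.10}
loading_od = {"control": 0.95, "treated": 1.05}

# Normalize each target band to its lane's loading control,
# then express the treated condition relative to the control.
normalized = {s: target_od[s] / loading_od[s] for s in target_od}
fold_change = normalized["treated"] / normalized["control"]
print(round(fold_change, 2))  # 1.58
```

Note that the result is only meaningful under the assumptions questioned above: the loading control is truly constant across conditions, and both signals lie in the linear range of the acquisition system.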