Search results: 4,104 matches (search time: 15 ms); items 51–60 of the result list follow.
51.
Unit and integration testing are crucial for almost any software application because each examines the product from a distinct perspective. When software is modified under resource constraints, the sharp growth in the number of test cases forces testers to adopt a test optimization strategy. One such strategy is test case prioritization (TCP). Existing works have proposed various methodologies that re-order system-level test cases to maximize either fault detection or coverage as early as possible. However, single-objective functions and the lack of diversity among the re-ordered test sequences have limited the effectiveness of these approaches. To address these gaps, and the scenario in which rapid, continuous software updates make intensive unit and integration testing fragile, this study introduces a memetics-inspired methodology for TCP. The proposed structure is first embedded with diverse parameters, and the traditional steps of the shuffled-frog-leaping algorithm (SFLA) are then followed to prioritize test cases at the unit and integration levels. On five standard test functions, a comparative analysis between established algorithms and the proposed approach shows that the latter improves both the coverage rate and the fault detection of the re-ordered test sets. Results for the mean average percentage of fault detection (APFD) confirm that the proposed approach exceeds the memetic, basic multi-walk, PSO, and optimized multi-walk algorithms by 21.7%, 13.99%, 12.24%, and 11.51%, respectively.
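The APFD metric driving that comparison is standard and easy to state in code. Below is a minimal Python sketch; the fault matrix and test ordering are invented for illustration, and it assumes every fault is detected by at least one test:

```python
def apfd(order, fault_matrix):
    """Average Percentage of Faults Detected (APFD) for one test ordering.

    order        -- test-case indices in prioritized execution order
    fault_matrix -- fault_matrix[t][f] is True if test t detects fault f
    Assumes every fault is detected by at least one test in `order`.
    """
    n = len(order)                       # number of test cases
    m = len(fault_matrix[order[0]])      # number of faults
    # 1-based position of the first test in `order` revealing each fault
    first = [next(i + 1 for i, t in enumerate(order) if fault_matrix[t][f])
             for f in range(m)]
    return 1 - sum(first) / (n * m) + 1 / (2 * n)

# Hypothetical example: 4 tests, 3 faults.
faults = [
    [True,  False, False],   # test 0 detects fault 0
    [False, True,  False],   # test 1 detects fault 1
    [False, False, True ],   # test 2 detects fault 2
    [True,  True,  False],   # test 3 detects faults 0 and 1
]
print(apfd([3, 2, 0, 1], faults))   # ~0.79; front-loading detection scores high
```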
52.
In this paper we present several new results in the theory of homogeneous multiprocessor scheduling. We start with some assumptions about the behavior of tasks, with associated precedence constraints, as processor power is applied. We assume that as more processors are applied to a task, the time taken to compute it decreases, yielding some speedup. Because of communication, synchronization, and task scheduling overhead, this speedup increases less than linearly with the number of processors applied. We also assume that the number of processors which can be assigned to a task is a continuous variable, with a view to exploiting continuous mathematics. The optimal scheduling problem is to determine the number of processors assigned to each task, and the task sequencing, to minimize the finishing time. These assumptions allow us to recast the optimal scheduling problem in a form which can be addressed by optimal control theory. Various theorems can be proven which characterize the optimal scheduling solution. Most importantly, for the special case where the speedup function of each task is p^α, where p is the amount of processing power applied to the task, we can directly solve our equations for the optimal solution. In this case, for task graphs formed from parallel and series connections, the solution can be derived by inspection. The solution can also be shown to be the shortest path from the initial to the final state, as measured by an l^{1/α} distance metric, subject to obstacle constraints imposed by the precedence constraints. This research has been funded in part by the Advanced Research Projects Agency monitored by ONR under Grant No. N00014-89-J-1489, in part by Draper Laboratory, in part by DARPA Contract No. N00014-87-K-0825, and in part by NSF Grant No. MIP-9012773. The first author is now with AT&T Bell Laboratories and the second author is with BBN Incorporated.
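A short calculation, sketched under the abstract's p^α speedup assumption for two independent parallel tasks sharing a constant total power P, shows where the l^{1/α} metric arises:

```latex
% Two parallel tasks with work w_1, w_2 share constant total power P.
\[
T_i = \frac{w_i}{p_i^{\alpha}}, \qquad p_1 + p_2 = P.
\]
% Equalizing finishing times T_1 = T_2 gives the optimal split:
\[
p_i = P \, \frac{w_i^{1/\alpha}}{w_1^{1/\alpha} + w_2^{1/\alpha}},
\qquad
T = \frac{\bigl(w_1^{1/\alpha} + w_2^{1/\alpha}\bigr)^{\alpha}}{P^{\alpha}}
  = \frac{\lVert (w_1, w_2) \rVert_{1/\alpha}}{P^{\alpha}}.
\]
```

The resulting makespan is the l^{1/α} length of the work vector scaled by P^{−α}, which matches the shortest-path interpretation stated in the abstract.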
53.
We consider buffer management of unit packets with deadlines for a multi-port device with reconfiguration overhead. The goal is to maximize the throughput of the device, i.e., the number of packets delivered by their deadline. For a single port, or when reconfiguration is free, the problem reduces to the well-known packet scheduling problem, where the celebrated earliest-deadline-first (EDF) strategy is an optimal 1-competitive algorithm. However, EDF is not 1-competitive when there is a reconfiguration overhead. We design an online algorithm that achieves a competitive ratio of 1−o(1) when the ratio between the minimum laxity of the packets and the number of ports tends to infinity. This is one of the rare cases where one can design an almost 1-competitive algorithm. One ingredient of our analysis, which may be interesting in its own right, is a perturbation theorem on EDF for the classical packet scheduling problem. Specifically, we show that a small perturbation in the release and deadline times cannot significantly degrade the optimal throughput. This implies that EDF is robust in the sense that its throughput is close to the optimum even when the deadlines are not precisely known.
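For reference, the baseline EDF strategy for the classical single-port problem is easy to sketch. The following Python is a minimal model of our own (unit packets, integer time slots, no reconfiguration overhead), so it does not capture the multi-port setting the paper addresses:

```python
import heapq

def edf_throughput(packets, horizon):
    """Earliest-deadline-first on a single port with unit packets.

    packets -- list of (release, deadline) pairs; sending takes one slot,
               and a packet may be sent in any slot t with release <= t <= deadline
    horizon -- number of integer time slots to simulate
    Returns the number of packets delivered by their deadlines.
    """
    arrivals = sorted(packets)            # by release time
    pending, i, delivered = [], 0, 0
    for t in range(horizon):
        while i < len(arrivals) and arrivals[i][0] <= t:
            heapq.heappush(pending, arrivals[i][1])   # keyed by deadline
            i += 1
        while pending and pending[0] < t:             # drop expired packets
            heapq.heappop(pending)
        if pending:                                   # send earliest deadline
            heapq.heappop(pending)
            delivered += 1
    return delivered

print(edf_throughput([(0, 1), (0, 3), (1, 1), (2, 2)], horizon=5))  # -> 4
```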
54.
Computer-aided design (CAD) is a ubiquitous tool that today's students will be expected to use proficiently for numerous engineering purposes. Taking full advantage of the features available in modern CAD programs requires that models be created in a manner that allows others to easily understand how they are organized and to alter them efficiently and robustly. The results of a class-based exercise are presented to examine the role of model attributes in model creation, alteration, and student perception. Two popular CAD programs are used for the exercise: SolidWorks and Pro|Engineer. General results from both programs are reported. Using fewer, more complex features correlates with reduced modeling time. Simple features correlate positively with the number of features retained without change, while more complex features correlate negatively with the number of new features. Student perceptions of model quality and intuitiveness correlate positively with the amount of feature reuse. Student survey data show a preference for simpler features, the naming of features, and the use of reference geometry. The results do not support prescribing a generic approach to feature complexity. Overall, properly conveying design intent correlates positively with design retention and negatively with alteration time.
55.
A new kind of molecularly imprinted polymer-modified graphite electrode was fabricated by a “grafting-to” approach incorporating the sol–gel technique, for detecting acute deficiency in the serum ascorbic acid level (SAAL), which manifests as hypovitaminosis C. The modified electrode exhibited ascorbic acid (AA) oxidation at a less positive potential (0.0 V) than earlier reported methods, yielding a limit of detection as low as 6.13 ng mL−1 (RSD = 1.2%, S/N = 3). The diffusion coefficient (1.096 × 10−5 cm2 s−1), rate constant (7.308 s−1), and Gibbs free energy change (−12.59 kJ mol−1) due to analyte adsorption were also calculated to explore the kinetics of AA oxidation. The proposed sensor was found to enhance sensitivity substantially, detecting ultra-trace levels of AA in the presence of other biologically important compounds (dopamine, uric acid, etc.) without cross-interference or matrix complications from biological fluids and pharmaceutical samples.
56.
A new technique, called "appearance clustering," is proposed for scene analysis. The key result of this approach is that scene points can be clustered according to their surface normals, even when the geometry, material, and lighting are all unknown. This is achieved by analyzing an image sequence of a scene as it is illuminated by a smoothly moving distant light source. In such a scenario, the brightness measurements at each pixel form a "continuous appearance profile." When the source path follows an unstructured trajectory (obtained, say, by smoothly hand-waving a light source), the locations of the extrema of the appearance profile provide a strong cue for the scene point's surface normal. Based on this observation, a simple transformation of the appearance profiles and a distance metric are introduced that, together, can be used with any unsupervised clustering algorithm to obtain isonormal clusters of a scene. We support our algorithm empirically with comprehensive simulations of the Torrance-Sparrow and Oren-Nayar analytic BRDFs, as well as experiments with 25 materials obtained from the MERL database of measured BRDFs. The method is also demonstrated on 45 examples from the CURET database, obtaining clusters on scenes with real textures such as artificial grass and ceramic tile, as well as anisotropic materials such as satin and velvet. Results of applying our algorithm to indoor and outdoor scenes containing a variety of complex geometry and materials are shown. As an example application, isonormal clusters are used for lighting-consistent texture transfer. Our algorithm is simple and does not require any complex lighting setup for data collection.
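As a rough illustration of the pipeline, not the authors' exact transform, one could normalize each pixel's profile, append the location of its brightness extremum, and hand the result to any off-the-shelf clustering algorithm. The helper names and the choice of k-means in this Python sketch are our own:

```python
import numpy as np
from sklearn.cluster import KMeans

def profile_signature(profile):
    """Normalize a brightness profile and append the (relative) frame index
    of its maximum -- a crude stand-in for the paper's extrema cue."""
    p = np.asarray(profile, dtype=float)
    p = (p - p.min()) / (np.ptp(p) + 1e-9)   # factor out albedo and scale
    peak = np.argmax(p) / (len(p) - 1)       # where the extremum occurs
    return np.append(p, peak)

def isonormal_clusters(profiles, k):
    """Group per-pixel appearance profiles into k isonormal clusters."""
    feats = np.stack([profile_signature(p) for p in profiles])
    return KMeans(n_clusters=k, n_init=10).fit_predict(feats)
```

Any other unsupervised clustering algorithm could be substituted for k-means, per the abstract.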
57.
Many real-world domains exhibit rich relational structure and stochasticity, motivating the development of models that combine predicate logic with probabilities. These models describe probabilistic influences between attributes of objects that are related to each other through known domain relationships. To keep these models succinct, each such influence is considered independent of the others, an assumption known as "independence of causal influences" (ICI). In this paper, we describe a language that consists of quantified conditional influence statements and captures most relational probabilistic models based on directed graphs. The influences due to different statements are combined using a set of combining rules such as Noisy-OR. We motivate and introduce multi-level combining rules, where the lower-level rules combine the influences due to different ground instances of the same statement, and the upper-level rules combine the influences due to different statements. We present algorithms and empirical results for parameter learning in the presence of such combining rules. Specifically, we derive and implement algorithms based on gradient descent and expectation maximization for different combining rules, and evaluate them on synthetic data and on a real-world task. The results demonstrate that the algorithms are able to learn both the conditional probability distributions of the influence statements and the parameters of the combining rules.
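The two-level combination is easy to illustrate. In this hedged Python sketch both levels use Noisy-OR (the paper allows different rules at each level, and the probabilities here are made up):

```python
import numpy as np

def noisy_or(probs):
    """Noisy-OR combination: 1 - prod(1 - p_i)."""
    probs = np.asarray(probs, dtype=float)
    return 1.0 - np.prod(1.0 - probs)

def two_level_combine(statement_instances):
    """Multi-level combining rule (sketch): the lower level combines the
    ground instances of each influence statement, the upper level combines
    the per-statement results. Both levels use Noisy-OR here for brevity."""
    per_statement = [noisy_or(inst) for inst in statement_instances]
    return noisy_or(per_statement)

# Two statements: the first has three ground instances, the second has two.
print(two_level_combine([[0.3, 0.5, 0.2], [0.4, 0.1]]))   # -> 0.8488
```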
58.
Given a single outdoor image, we present a method for estimating the likely illumination conditions of the scene. In particular, we compute the probability distribution over the sun position and visibility. The method relies on a combination of weak cues that can be extracted from different portions of the image: the sky, the vertical surfaces, the ground, and the convex objects in the image. While no single cue can reliably estimate illumination by itself, each one can reinforce the others to yield a more robust estimate. This is combined with a data-driven prior computed over a dataset of 6 million photos. We present quantitative results on a webcam dataset with annotated sun positions, as well as quantitative and qualitative results on consumer-grade photographs downloaded from the Internet. Based on the estimated illumination, we show how to realistically insert synthetic 3-D objects into the scene, and how to transfer appearance across images while keeping the illumination consistent.
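A generic way to realize "each cue reinforces the others" is naive-Bayes fusion over a discretized set of sun positions. The Python sketch below is our own simplification, not the paper's exact model:

```python
import numpy as np

def combine_cues(prior, cue_likelihoods):
    """Posterior over K discretized sun positions:
    posterior ∝ prior × Π_i P(cue_i | sun position)."""
    post = np.asarray(prior, dtype=float).copy()
    for lik in cue_likelihoods:
        post = post * np.asarray(lik, dtype=float)
    return post / post.sum()

# Three weak cues (e.g. sky, ground shading, cast shadows) over 4 bins.
prior = [0.25, 0.25, 0.25, 0.25]
cues = [[0.40, 0.30, 0.20, 0.10],
        [0.35, 0.35, 0.20, 0.10],
        [0.30, 0.40, 0.20, 0.10]]
print(combine_cues(prior, cues))   # the first two bins dominate
```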
59.
This paper demonstrates the use of TissueQuant, an image-analysis tool for quantifying color intensities, developed for medical research in which stained biological specimens such as tissue or antigen need to be quantified. TissueQuant lets the user choose and quantify the color of interest and its shades. Gaussian weighting functions provide a color score that quantifies how close a shade is to the user-specified reference color. We describe two medical research studies that use TissueQuant for quantification. The first evaluated the effect of a petroleum-ether extract of Cissus quadrangularis (CQ) on osteoporotic rats; the analysis results correlated well with manual evaluation (p < 0.001). The second evaluated nerve morphometry and found that, among the five nerves studied, the adipose and non-adipose tissue content was highest in the radial nerve.
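The Gaussian weighting is simple enough to sketch. In the Python below, the reference color, sigma, and the use of plain RGB distance are our assumptions; the tool's actual scoring may differ:

```python
import numpy as np

def color_score(pixels, reference, sigma=25.0):
    """Gaussian-weighted closeness of each pixel to a reference color.

    pixels    -- (N, 3) array of RGB values
    reference -- (3,) reference color chosen by the user
    sigma     -- Gaussian width; wider accepts more shades (our assumption)
    Returns scores in (0, 1]; 1.0 means an exact match.
    """
    diff = np.asarray(pixels, dtype=float) - np.asarray(reference, dtype=float)
    d2 = np.sum(diff ** 2, axis=1)              # squared RGB distance
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Hypothetical tissue-like brown reference; the second pixel is a lighter shade.
print(color_score([[120, 60, 30], [150, 90, 60]], reference=[120, 60, 30]))
```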
60.
Point location is an extremely well-studied problem both in internal-memory models and, more recently, in the external-memory model. In this paper, we present an I/O-efficient dynamic data structure for point location in general planar subdivisions. Our structure uses linear space to store a subdivision with N segments. Insertions and deletions of segments can be performed in amortized O(log_B N) I/Os, and queries can be answered in O(log_B^2 N) I/Os in the worst case. The previous best known linear-space dynamic structure also answers queries in O(log_B^2 N) I/Os, but only supports insertions in amortized O(log_B^2 N) I/Os. Our structure is also considerably simpler than previous structures.