Full-text access type
Paid full text | 598 papers |
Free | 19 papers |
Subject classification
Electrical engineering | 4 papers |
Chemical industry | 166 papers |
Metalworking | 13 papers |
Machinery and instruments | 5 papers |
Building science | 16 papers |
Mining engineering | 1 paper |
Energy and power engineering | 13 papers |
Light industry | 101 papers |
Hydraulic engineering | 1 paper |
Petroleum and natural gas | 3 papers |
Radio and electronics | 30 papers |
General industrial technology | 100 papers |
Metallurgical industry | 105 papers |
Nuclear technology | 2 papers |
Automation technology | 57 papers |
Publication year
2023 | 6 papers |
2022 | 13 papers |
2021 | 20 papers |
2020 | 12 papers |
2019 | 16 papers |
2018 | 14 papers |
2017 | 20 papers |
2016 | 15 papers |
2015 | 15 papers |
2014 | 20 papers |
2013 | 35 papers |
2012 | 37 papers |
2011 | 47 papers |
2010 | 18 papers |
2009 | 28 papers |
2008 | 25 papers |
2007 | 20 papers |
2006 | 15 papers |
2005 | 14 papers |
2004 | 10 papers |
2003 | 18 papers |
2002 | 14 papers |
2001 | 8 papers |
2000 | 2 papers |
1999 | 7 papers |
1998 | 37 papers |
1997 | 15 papers |
1996 | 20 papers |
1995 | 12 papers |
1994 | 6 papers |
1993 | 15 papers |
1992 | 7 papers |
1990 | 4 papers |
1989 | 7 papers |
1988 | 2 papers |
1987 | 5 papers |
1986 | 5 papers |
1985 | 5 papers |
1984 | 2 papers |
1982 | 4 papers |
1981 | 2 papers |
1978 | 2 papers |
1976 | 3 papers |
1973 | 1 paper |
1972 | 1 paper |
1970 | 1 paper |
1965 | 1 paper |
1964 | 1 paper |
1963 | 1 paper |
1960 | 2 papers |
Sort order: 617 results found (search time: 15 ms)
1.
Isabel Vega Emiliano Fernández Carmen Mijangos Norma D'Accorso Daniel López 《Journal of Applied Polymer Science》2008,110(2):695-700
The authors report on the viscoelastic characterization of guar hydrogels obtained through complexation reactions with borax ions. These gels are compared with hydrogels obtained from poly(vinyl alcohol) (PVA) of different degrees of hydrolysis through complexation reactions with Congo red. The effects of the degree of hydrolysis, the PVA concentration, and the Congo red concentration on the viscoelastic properties of the hydrogels are analyzed. The potential use of the PVA-based hydrogels as hydraulic fracturing liquids is discussed in relation to the commonly used fracturing liquid based on the guar-borax system. © 2008 Wiley Periodicals, Inc. J Appl Polym Sci, 2008
2.
Luis Armando Rosas Rivera Norma F. Hubele PhD Frederick P. Lawrence PhD 《Computers & Industrial Engineering》1995,29(1-4):55-58
Process capability indices (PCIs) are used in industry to assess percentages of nonconforming parts. An underlying assumption is that the output process measurements are distributed as normal random variables. When normal distributions are assumed but different distributions are present, such as skewed, heavy-tailed, and short-tailed distributions, the percentages of nonconforming parts differ significantly from what the computed PCIs indicate. Data arising from nonnormal distributions can sometimes be transformed to conform to the normality assumption, and the PCIs computed for the transformed data. In this paper, the effect of the transformation on the estimate of nonconforming parts is examined for three examples of nonnormal distributions: gamma, lognormal, and Weibull. The results of this experimental analysis suggest that data transformation can be useful for estimating an interval for Cpk values and the number of nonconforming parts.
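The transform-then-compute approach can be sketched in a few lines. The data, the specification limits, and the choice of a Box-Cox transformation below are illustrative assumptions, not the paper's exact procedure; the key point is that the specification limits must be transformed along with the data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.lognormal(mean=0.0, sigma=0.4, size=500)  # skewed, nonnormal process data

def cpk(x, lsl, usl):
    """Classical Cpk estimate, valid only under the normality assumption."""
    mu, sigma = x.mean(), x.std(ddof=1)
    return min(usl - mu, mu - lsl) / (3 * sigma)

lsl, usl = 0.3, 3.0          # hypothetical specification limits
naive = cpk(data, lsl, usl)  # misleading when the data are skewed

# Box-Cox transform the data, apply the same transform to the spec limits,
# and recompute Cpk on the (approximately normal) transformed scale.
transformed, lam = stats.boxcox(data)
t_lsl = stats.boxcox(np.array([lsl]), lmbda=lam)[0]
t_usl = stats.boxcox(np.array([usl]), lmbda=lam)[0]
adjusted = cpk(transformed, t_lsl, t_usl)
print(f"naive Cpk: {naive:.3f}, transformed-scale Cpk: {adjusted:.3f}")
```

For strongly skewed data the two estimates can differ substantially, which is exactly the discrepancy the paper quantifies for gamma, lognormal, and Weibull examples.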
3.
Juan Francisco Gómez-Lopera José Martínez-Aroza Aureliano M. Robles-Pérez Ramón Román-Roldán 《Journal of Mathematical Imaging and Vision》2000,13(1):35-56
This work constitutes a theoretical study of the edge-detection method based on the Jensen-Shannon divergence, as proposed by the authors. The overall aim is to establish formally the suitability of the procedure for edge detection in digital images, as a step prior to segmentation. Specifically, the analysis covers not only the properties of the divergence used, but also the method's sensitivity to spatial variation and the detection-error risk associated with the operating conditions, due to the randomness of the spatial configuration of the pixels. Although the paper deals with the procedure based on the Jensen-Shannon divergence, some of the problems also apply to other methods based on local detection with a sliding window, and part of the study focuses on noisy and textured images.
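As a rough illustration of the sliding-window idea (not the authors' exact detector), one can compute the Jensen-Shannon divergence between the gray-level histograms of the two halves of a window centered at a candidate edge; the window size and toy image row below are arbitrary:

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions (base-2 logs)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0          # 0 * log(0) terms contribute nothing
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def edge_response(row, pos, half=8, levels=256):
    """JS divergence between gray-level histograms of the two window halves."""
    left = row[pos - half:pos]
    right = row[pos:pos + half]
    hp = np.bincount(left, minlength=levels) / half
    hq = np.bincount(right, minlength=levels) / half
    return js_divergence(hp, hq)

row = np.array([10] * 16 + [200] * 16)  # a sharp step edge at index 16
print(edge_response(row, 16))           # high response at the edge
print(edge_response(row, 8))            # zero response inside a flat region
```

The divergence is bounded (it reaches 1 bit when the two half-histograms have disjoint support), which is one of the properties that makes it convenient for thresholding.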
4.
Paola C. Bello-Medina Karina Corona-Cervantes Norma Gabriela Zavala Torres Antonio González Marcel Pérez-Morales Diego A. González-Franco Astrid Gómez Jaime García-Mena Sofía Díaz-Cintra Gustavo Pacheco-López 《International Journal of Molecular Sciences》2022,23(15)
Alzheimer’s disease (AD) is a multifactorial pathology characterized by β-amyloid (Aβ) deposits, Tau hyperphosphorylation, a neuroinflammatory response, and cognitive deficit. Changes in the bacterial gut microbiota (BGM) have been reported as a possible etiological factor of AD. In 3xTg offspring (F1), we assessed the effect of BGM dysbiosis, induced in mothers (F0) during gestation and in F1 from lactation up to the age of 5 months, on Aβ and Tau levels in the hippocampus, as well as on spatial memory at the early symptomatic stage of AD. We found that BGM dysbiosis induced by antibiotic (Abx) treatment in F0 was vertically transferred to their F1 3xTg mice, as observed on postnatal days (PD) 30 and 150. On PD 150, we observed a delay in spatial memory impairment and Aβ deposits, but not in Tau and pTau protein in the hippocampus, at the early symptomatic stage of AD. These effects correlated with the relative abundance of bacteria and alpha diversity, and were specific to particular bacterial consortia. Our results suggest that this specific BGM could reduce neuroinflammatory responses related to cerebral amyloidosis and cognitive deficit, and could activate metabolic pathways associated with the biosynthesis of molecules that trigger or protect against AD.
5.
6.
Competitive routing in multiuser communication networks (cited by: 1; self-citations: 0; other citations: 1)
The authors consider a communication network shared by several selfish users. Each user seeks to optimize its own performance by controlling the routing of its given flow demand, giving rise to a noncooperative game. They investigate the Nash equilibrium of such systems. For a two-node multiple-links system, uniqueness of the Nash equilibrium is proven under reasonable convexity conditions. It is shown that this Nash equilibrium point possesses interesting monotonicity properties. For general networks, these convexity conditions are not sufficient for guaranteeing uniqueness, and a counterexample is presented. Nonetheless, uniqueness of the Nash equilibrium for general topologies is established under various assumptions.
7.
Bar-David I. Plotnik E. Rom R. 《IEEE Transactions on Information Theory》1993,39(5):1671-1675
Consider M-Choose-T communications: T users or fewer, out of M potential users, are chosen at random to simultaneously transmit binary data over a common channel. A method for constructing codes that achieve error-free M-Choose-T communication over the noiseless adder channel (AC), at a nominal rate of 1/T bits per channel symbol per active user, is described, and an efficient decoding procedure is presented. The use of such codes is referred to as forward collision resolution (FCR), as it enables correct decoding of collided messages without retransmissions. For any given T, a code is available that yields a stable throughput arbitrarily close to 1 message/slot. Furthermore, if the occurrence of collisions is made known to the transmitters, such a throughput can be maintained for arbitrary T, T ⩽ M, as well. If such feedback is not available and T is random, the probability of an unresolved collision is significantly smaller than the probability of a collision in an uncoded system, at comparable message-arrival and information rates.
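As a toy illustration of the underlying idea (not the paper's construction), consider T = 2 users on the noiseless adder channel, each sending one data bit per two channel symbols, i.e. rate 1/T. The two codebooks below are chosen so that every pairwise codeword sum is distinct, so a "collision" is decoded directly, with no retransmission:

```python
from itertools import product

# Toy uniquely decodable code pair for the 2-user noiseless adder channel.
# Each user maps one data bit to a length-2 binary codeword (rate 1/2 = 1/T).
# These particular codebooks are an illustrative choice, not from the paper.
code_a = {0: (0, 0), 1: (1, 1)}
code_b = {0: (0, 1), 1: (1, 0)}

def adder_channel(x, y):
    """Noiseless adder channel: outputs the symbol-wise integer sum."""
    return tuple(a + b for a, b in zip(x, y))

# Build the decoder by enumerating all sums; distinctness of the sums is what
# makes decoding of collided transmissions error-free.
decoder = {}
for (bit_a, cw_a), (bit_b, cw_b) in product(code_a.items(), code_b.items()):
    s = adder_channel(cw_a, cw_b)
    assert s not in decoder  # verifies unique decodability
    decoder[s] = (bit_a, bit_b)

received = adder_channel(code_a[1], code_b[0])
print(decoder[received])     # both users' bits recovered from the sum
```

The paper's contribution is constructing such codes systematically for general M and T, with an efficient decoder; this sketch only demonstrates why a sum of codewords can carry both messages at once.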
8.
Román Mozuelos Yolanda Lechuga Mar Martínez Salvador Bracho 《Journal of Electronic Testing》2011,27(2):177-192
This paper presents a test method based on the analysis of the dynamic power supply current, both quiescent and transient, of the circuit under test. In an off-chip measurement, the global interconnect impedance associated with the chip package and the test equipment, and also the chip input/output cells, complicates the extraction of the information carried by the current waveform of the circuit under test. Thus, the supply current is measured on-chip by a built-in current sensor integrated in the die itself. To avoid an effective reduction of the supply voltage, the measurement is performed in parallel by replicating the current that flows through selected branches of the analog circuit. With the aim of reducing test equipment requirements, the built-in current sensor output generates digital-level pulses whose width is related to the amplitude and duration of the circuit's current transients. In this way, a defective circuit is exposed by comparing the digital signature of the circuit under test with the expected signature of the fault-free circuit. A fault evaluation has been carried out to check the efficiency of the proposed test method, using a fault model that considers catastrophic and parametric faults at the transistor level. Two benchmark circuits have been fabricated to experimentally verify defect detection by the built-in current sensor: one is an operational amplifier; the other is a structure of switched current cells that belongs to an analog-to-digital converter.
9.
José Martínez-Aroza Ramón Román-Roldán 《Multidimensional Systems and Signal Processing》1995,6(1):7-35
A multiresolution analysis of digital gray-level images is presented. A gray-level multi-scale framework is determined from two main assumptions: the gray scale is binary at the finest spatial resolution, and the gray levels of composed regions are obtained additively. In order to interrelate the gray-level histograms of the same image at different resolutions, probabilistic linear models are developed, which are then applied for estimation. Linear-optimization theory is used as a way of constructing such models. A general procedure for image processing is sketched, based on gray-level estimation. A versatile algorithm for nonlinear filtering is derived. Some examples of prospective applications are given. This work was partially supported by grant TIC91-646 from the DGYCIT of the Spanish Government.
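The two stated assumptions, a binary gray scale at the finest resolution and additive gray levels under aggregation, can be mimicked with a toy example (image size, seed, and 2x2 block aggregation are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
binary = rng.integers(0, 2, size=(8, 8))  # binary image at the finest resolution

def coarsen(img):
    """Halve the resolution by summing 2x2 blocks (additive gray levels)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

level1 = coarsen(binary)   # 4x4 image, gray levels 0..4
level2 = coarsen(level1)   # 2x2 image, gray levels 0..16
print(np.bincount(level2.ravel(), minlength=17))  # coarse-scale histogram
```

Because aggregation is additive, the total "mass" of the image is preserved across scales, and the histogram at a coarse scale is determined probabilistically by the finer ones, which is the relationship the paper's linear models capture.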
10.
Analysis of discarding policies in high-speed networks (cited by: 4; self-citations: 0; other citations: 4)
Networked applications generate messages that are segmented into smaller, fixed- or variable-size packets before they are sent through the network. In high-speed networks, acknowledging individual packets is impractical, so when congestion builds up and packets have to be dropped, entire messages are lost. For a message to be useful, all packets comprising it must arrive successfully at the destination. The problem is therefore which packets to discard so that as many complete messages as possible are delivered, and so that congestion is alleviated or avoided altogether. Selective discarding policies, as a means of congestion avoidance, are studied and compared to nondiscarding policies. The partial message discard policy discards the packets in the tails of corrupted messages. An improvement to this policy is early message discard, which drops entire messages and not just message tails. A common performance measure of network elements is the effective throughput, which measures the utilization of the network links but ignores the application altogether. We adopt a new performance measure, goodput, which reflects the utilization of the network from the application's point of view and thus better describes network behavior. We develop and analyze a model for systems which employ discarding policies. The analysis shows a remarkable performance improvement when any message-based discarding policy is applied, and that the early message discard policy performs better than the others, especially under high load. We compute the optimal parameter setting for maximum goodput at different input loads, and investigate the performance sensitivity to these parameters.
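A back-of-the-envelope simulation, with invented message sizes and drop rates, illustrates why message-based discarding improves goodput: random packet drops corrupt many messages (whose surviving packets are then wasted), while early message discard concentrates the same loss on a few whole messages:

```python
import random

random.seed(1)

PACKETS_PER_MSG = 10
N_MSGS = 10_000
DROP_RATE = 0.05  # fraction of traffic the congested switch must shed (illustrative)

# Policy 1: drop individual packets at random. A message survives only if
# every one of its packets survives, so partial deliveries are wasted work.
delivered_random = sum(
    all(random.random() > DROP_RATE for _ in range(PACKETS_PER_MSG))
    for _ in range(N_MSGS)
)

# Policy 2: early message discard. Shed a comparable load by dropping whole
# messages, so every message that is admitted arrives complete.
delivered_emd = sum(random.random() > DROP_RATE for _ in range(N_MSGS))

# Goodput: fraction of offered packets that belong to completely delivered messages.
goodput_random = delivered_random * PACKETS_PER_MSG / (N_MSGS * PACKETS_PER_MSG)
goodput_emd = delivered_emd * PACKETS_PER_MSG / (N_MSGS * PACKETS_PER_MSG)
print(f"random packet drop goodput:  {goodput_random:.3f}")
print(f"early message discard goodput: {goodput_emd:.3f}")
```

With 10-packet messages and a 5% drop rate, random dropping lets only about (0.95)^10 ≈ 60% of messages through intact, versus about 95% under whole-message discard; the gap widens with longer messages and higher load, matching the paper's qualitative conclusion.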