Full-text access type
Paid full text | 680 articles |
Free full text | 34 articles |
Subject classification
Electrical engineering | 9 articles |
Chemical industry | 143 articles |
Metalworking | 1 article |
Machinery and instruments | 14 articles |
Building science | 60 articles |
Mining engineering | 1 article |
Energy and power | 20 articles |
Light industry | 101 articles |
Hydraulic engineering | 25 articles |
Petroleum and natural gas | 2 articles |
Radio electronics | 50 articles |
General industrial technology | 97 articles |
Metallurgy | 17 articles |
Atomic energy technology | 3 articles |
Automation technology | 171 articles |
Publication year
2023 | 2 articles |
2022 | 7 articles |
2021 | 20 articles |
2020 | 18 articles |
2019 | 18 articles |
2018 | 19 articles |
2017 | 29 articles |
2016 | 28 articles |
2015 | 14 articles |
2014 | 39 articles |
2013 | 76 articles |
2012 | 44 articles |
2011 | 47 articles |
2010 | 39 articles |
2009 | 28 articles |
2008 | 40 articles |
2007 | 26 articles |
2006 | 24 articles |
2005 | 22 articles |
2004 | 17 articles |
2003 | 10 articles |
2002 | 9 articles |
2001 | 6 articles |
2000 | 7 articles |
1999 | 6 articles |
1998 | 9 articles |
1997 | 6 articles |
1996 | 7 articles |
1995 | 6 articles |
1994 | 8 articles |
1993 | 6 articles |
1992 | 5 articles |
1991 | 5 articles |
1990 | 4 articles |
1989 | 2 articles |
1988 | 7 articles |
1987 | 2 articles |
1986 | 2 articles |
1985 | 8 articles |
1984 | 6 articles |
1983 | 5 articles |
1982 | 5 articles |
1981 | 4 articles |
1980 | 4 articles |
1979 | 4 articles |
1978 | 2 articles |
1977 | 4 articles |
1975 | 2 articles |
1974 | 3 articles |
1973 | 2 articles |
714 results found (search time: 0 ms)
71.
Aguor EN, Arslan F, van de Kolk CW, Nederhoff MG, Doevendans PA, van Echteld CJ, Pasterkamp G, Strijkers GJ. Magma (New York, N.Y.), 2012, 25(5): 369-379
Object
Imaging of myocardial infarct composition is essential to assess the efficacy of emerging therapeutics. T2* mapping has the potential to image myocardial hemorrhage and fibrosis by virtue of their short T2*. We aimed to quantify T2* in acute and chronic myocardial ischemia/reperfusion (I/R) injury in mice.
Materials and methods
I/R injury was induced in C57BL/6 mice (n = 9). Sham-operated mice (n = 8) served as controls. MRI was performed at baseline and at 1, 7, and 28 days after surgery. MRI at 9.4 T consisted of cine imaging, T2* mapping, and late gadolinium enhancement (LGE). Mice (n = 6) were histologically assessed for hemorrhage and collagen in the fibrotic scar.
Results
Baseline T2* values were 17.1 ± 2.0 ms. At day 1, LGE displayed homogeneous infarct enhancement. T2* in the infarct (12.0 ± 1.1 ms) and in remote myocardium (13.9 ± 0.8 ms) was lower than at baseline. On days 7 and 28, LGE was heterogeneous. T2* in the infarct decreased to 7.9 ± 0.7 and 6.4 ± 0.7 ms, whereas T2* values in the remote myocardium were 14.2 ± 1.1 and 15.6 ± 1.0 ms. Histology revealed deposition of iron and collagen in parallel with the decreased T2*.
Conclusion
T2* values are dynamic during infarct development and decrease significantly during scar maturation. In the acute phase, T2* values in infarcted myocardium differ significantly from those in the chronic phase. T2* mapping was able to confirm the presence of a chronic infarction in cases where LGE was inconclusive. Hence, T2* may be used to discriminate between acute and chronic infarctions.
72.
Wim Kellens, Wouter Vanneuville, Els Verfaillie, Ellen Meire, Pieter Deckers, Philippe De Maeyer. Water Resources Management, 2013, 27(10): 3585-3606
This paper presents the state of the art of flood risk management in Flanders, a low-lying region in the northern part of Belgium which is vulnerable to flooding. Possible flood hazard sources are not only the many rivers which pass through the Flemish inland, but also the North Sea, which is sensitive to the predicted sea level rise and which can affect large parts of the Flemish coastal area. Due to the expected increase in flood risks in the 21st century, the Flemish government has changed its flood management strategy from a flood control approach to a risk-based approach. Instead of focusing on protection against a certain water level, the objective now is to assure protection against the consequences of a flood, while considering its probability. In the first part, attention is given to the reasoning and functioning of the risk-based approach. Recent improvements to the approach are discussed, as well as the GIS implementation of the entire model. The functioning of the approach is subsequently demonstrated in two case studies. The second part of the paper discusses future challenges for flood risk management in Flanders. The driving force behind these challenges is the European Directive on the assessment and management of flood risks, which entered into force in 2007. The Flemish implementation of the directive is discussed and situated in the European landscape. Finally, attention is given to the communication of flood risks to the general public, since the "availability" of flood risk management plans is among the requirements of the EU Floods Directive.
73.
Metal uptake by young trees from dredged brackish sediment: limitations and possibilities for phytoextraction and phytostabilisation (Total citations: 8; self-citations: 0; cited by others: 8)
Mertens J, Vervaeke P, De Schrijver A, Luyssaert S. The Science of the Total Environment, 2004, 326(1-3): 209-215
Five tree species (Acer pseudoplatanus L., Alnus glutinosa (L.) Gaertn., Fraxinus excelsior L., Populus alba L. and Robinia pseudoacacia L.) were planted on a mound constructed of dredged sediment. The sediment originated from a brackish river mouth and was slightly polluted with heavy metals. This preliminary study evaluated the use of trees for site reclamation by means of phytoextraction of metals or phytostabilisation. Although the brackish nature of the sediment caused slight salt damage, overall survival of the planted trees was satisfactory. Robinia and white poplar had the highest growth rates. Ash, maple and alder had the highest survival rates (>90%) but showed stunted growth. Ash, alder, maple and Robinia contained normal concentrations of Cd, Cu, Pb and Zn in their foliage. As a consequence, these species reduce the risk of metal dispersal and are therefore suitable species for phytostabilisation under the given conditions. White poplar accumulated high concentrations of Cd (8.0 mg kg⁻¹) and Zn (465 mg kg⁻¹) in its leaves and might therefore cause a risk of Cd and Zn input into the ecosystem through autumn litter fall. This species is thus unsuitable for phytostabilisation. Despite elevated metal concentrations in the leaves, phytoextraction of heavy metals from the soil by harvesting stem and/or leaf biomass of white poplar would not be a realistic option, because it would require an excessive amount of time to be effective.
74.
We study the design of two-level experiments with N runs and n factors large enough to estimate the interaction model, which contains all the main effects and all the two-factor interactions. Yet, an effect hierarchy assumption suggests that main effect estimation should be given more prominence than the estimation of two-factor interactions. Orthogonal arrays (OAs) favor main effect estimation. However, complete enumeration becomes infeasible for cases relevant to practitioners. We develop a partial enumeration procedure for these cases and we establish upper bounds on the D-efficiency for the interaction model based on arrays that have not been generated by the partial enumeration. We also propose an optimal design procedure that favors main effect estimation. Designs created with this procedure have smaller D-efficiencies for the interaction model than D-optimal designs, but standard errors for the main effects in this model are improved. Generated OAs for 7-10 factors and 32-72 runs are smaller or have a higher D-efficiency than the smallest OAs from the literature. Designs obtained with the new optimal design procedure or strength-3 OAs (which have main effects that are not correlated with two-factor interactions) are recommended if main effects unbiased by possible two-factor interactions are of primary interest. D-optimal designs are recommended if interactions are of primary interest. Supplementary materials for this article are available online.
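For reference, the D-efficiency criterion discussed in this abstract can be computed directly from a design's model matrix. The sketch below evaluates a ±1-coded two-level design against the interaction model; the function names are illustrative, and the paper's enumeration and bounding procedures are not reproduced here.

```python
import itertools
import numpy as np

def interaction_model_matrix(design):
    """Model matrix with intercept, all main effects, and all two-factor interactions."""
    N, n = design.shape
    cols = [np.ones(N)]
    cols += [design[:, i] for i in range(n)]
    cols += [design[:, i] * design[:, j] for i in range(n) for j in range(i + 1, n)]
    return np.column_stack(cols)

def d_efficiency(design):
    """D-efficiency of a +/-1 coded N-run design: |X'X|^(1/p) / N."""
    X = interaction_model_matrix(design)
    N, p = X.shape
    det = np.linalg.det(X.T @ X)
    return max(det, 0.0) ** (1.0 / p) / N

# Full 2^4 factorial: all model-matrix columns are mutually orthogonal,
# so the design is D-optimal for the interaction model (D-efficiency = 1).
full = np.array(list(itertools.product([-1, 1], repeat=4)), dtype=float)
print(round(d_efficiency(full), 6))  # → 1.0
```

Fractional designs with fewer runs trade away some of this efficiency, which is exactly the trade-off between run size and estimation quality that the partial enumeration explores.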
75.
We discuss the potential benefits, requirements, and implementation challenges of a security-by-design approach in which an integrated development environment plugin assists software developers to write code that complies with secure coding guidelines. We discuss how such a plugin can enable a company's policy-setting security experts and developers to pass their knowledge on to each other more efficiently, and let developers more effectively put that knowledge into practice. This is achieved by letting the team members develop customized rule sets that formalize coding guidelines and by letting the plugin check the compliance of code being written against those rule sets in real time, similar to an as-you-type spell checker. Upon detecting violations, the plugin suggests options to quickly fix them and offers additional information for the developer. We share our experience with proof-of-concept designs and implementations rolled out in multiple companies, and present some future research and development directions.
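The as-you-type compliance checking described in this abstract can be sketched in miniature as a rule set of patterns with attached quick-fix hints. The rules, names, and line-oriented matching strategy below are hypothetical illustrations, not the paper's actual rule language.

```python
import re
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    pattern: str      # regex describing a guideline violation
    suggestion: str   # quick-fix hint shown to the developer

# Hypothetical customized rule set formalizing two secure-coding guidelines.
RULES = [
    Rule("no-eval", r"\beval\s*\(", "Avoid eval(); parse input with ast.literal_eval instead."),
    Rule("no-md5", r"\bmd5\b", "MD5 is broken for security purposes; use SHA-256."),
]

def check_compliance(code):
    """Check code line by line against the rule set, as an as-you-type checker would."""
    violations = []
    for lineno, line in enumerate(code.splitlines(), start=1):
        for rule in RULES:
            if re.search(rule.pattern, line):
                violations.append((lineno, rule.name, rule.suggestion))
    return violations

snippet = "import hashlib\ndigest = hashlib.md5(data).hexdigest()\n"
for lineno, name, hint in check_compliance(snippet):
    print(f"line {lineno}: {name}: {hint}")
```

A real plugin would hook such a check into the editor's change events and rely on proper static analysis rather than regular expressions over single lines, but the division of labor is the same: experts author the rule set, developers get immediate feedback.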
76.
Screw piles: construction methods, bearing capacity and numerical modeling. In recent years, screw-piling technology has been developed to overcome the shortcomings of non-displacement piles (e.g. bored or continuous flight auger piles), whose installation is always accompanied by soil disturbance that negatively affects pile bearing capacity. Screw piles, by contrast, can be considered more or less full-displacement piles, comparable with conventional driven displacement piles. Thanks to vibration-free installation and minimal noise disturbance of the environment, they can be applied in urban areas where driven piles are not appropriate. Full-displacement screw piles therefore offer meaningful advantages in terms of environmental engineering. This relatively new piling technology is used successfully as an efficient foundation system that fulfils both stability and serviceability requirements. The use of standard compact machines with high productivity simplifies site operations and contributes to the economical performance of the system. A general overview of the pile system (installation techniques, process, and bearing capacity) is described and discussed. Initial numerical analyses, calibrated against pile load tests to check the validity of the numerical model, are also presented.
77.
Pieter Samyn, Ahmed Barhoum, Thomas Öhlund, Alain Dufresne. Journal of Materials Science, 2018, 53(1): 146-184
The introduction of nanoparticles (NPs) and nanostructured materials (NSMs) in papermaking originally emerged from the perspective of improving processing operations and reducing material consumption. However, a very broad range of nanomaterials (NMs) can be incorporated into the paper structure, allowing the creation of paper products with novel properties. This review is interdisciplinary in nature, addressing the emerging area of nanotechnology in papermaking with a focus on resources, chemical synthesis and processing, colloidal properties, and deposition methods. An overview of different NMs used in papermaking, together with their intrinsic properties and a link to possible applications, is presented from a chemical point of view. After a brief introduction on NM classification and papermaking, their role as additives or pigments in the paper structure is described. The different compositions and morphologies of NMs and NSMs are covered, based on wood components and inorganic, organic, carbon-based, and composite NPs. In a first approach, nanopaper substrates are made from fibrillary NPs, including cellulose-based or carbon-based NMs. In a second approach, the NPs can be added to a regular wood pulp as nanofillers or used in coating compositions as nanopigments. The most important processing steps for NMs in papermaking are illustrated, including the internal filling of fiber lumen, layer-by-layer (LbL) deposition, and fiber wall modification, with important advances in the field on the in situ deposition of NPs on the paper fibers. Usually, the manufacture of products with advanced functionality is associated with complex processes and hazardous materials. A key to success is in understanding how the NMs, cellulose matrix, functional additives, and processes all interact to provide the intended paper functionality while reducing materials waste and keeping the processes simple and energy efficient.
78.
Jochem Baselmans, Stephen Yates, Pascale Diener, Pieter de Visser. Journal of Low Temperature Physics, 2012, 167(3-4): 360-366
The next generation of far-infrared radiation detectors aims to reach photon-noise-limited performance in space-based observatories such as SPICA and BLISS. These detectors operate at astronomical-signal loading powers of a few attowatts (10⁻¹⁸ W) or less, corresponding to a sensitivity, expressed in noise equivalent power, as low as NEP = 2×10⁻²⁰ W/√Hz. We have developed a cryogenic test setup for microwave kinetic inductance detectors (MKIDs) that aims to reach these ultra-low background levels. Stray light is blocked by a box-in-a-box design, with the sample holder placed inside another closed box. Microwave signals for the MKID readout enter the outer box through custom-made coax cable filters. The stray-light loading per pixel is estimated to be less than 60×10⁻¹⁸ W during nominal operation, a number limited by the intrinsic sensitivity of the MKIDs used to validate the system.
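The quoted sensitivity is consistent with the standard photon-noise (shot-noise) limit, NEP = √(2hνP) for absorbed power P at photon frequency ν. The loading power and frequency used in the quick check below are illustrative assumptions, not values from the paper:

```python
import math

H = 6.62607015e-34  # Planck constant, J*s

def photon_noise_nep(power_w, freq_hz):
    """Photon-noise-limited NEP in W/sqrt(Hz), shot-noise term only (bunching neglected)."""
    return math.sqrt(2.0 * H * freq_hz * power_w)

# Illustrative numbers: 1 aW of loading at a photon frequency of 300 GHz.
nep = photon_noise_nep(1e-18, 300e9)
print(f"{nep:.2e} W/sqrt(Hz)")  # order of magnitude of the 2e-20 quoted above
```

At attowatt loadings the photon-noise floor thus sits around 10⁻²⁰ W/√Hz, which is why the setup must suppress stray light so aggressively for the detector noise, rather than the background, to be measurable.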
79.
Jerico Moeyersons, Pieter-Jan Maenhaut, Filip De Turck, Bruno Volckaert. International Journal of Network Management, 2020, 30(2)
Software-defined networking (SDN) is a network paradigm that separates the data plane from the control plane, letting one or more centralized controllers supervise the behaviour of the entire network. Different types of SDN controller software exist, and research dealing with the difficulties of consistently integrating these different controller types has mostly been declared future work. In this paper, the Domino framework is proposed: a pluggable SDN framework for managing heterogeneous SDN networks. In contrast to related work, the proposed framework enables research into SDN networks controlled by different types of SDN controllers by attempting to standardize their northbound APIs. Domino implements a microservice plugin architecture in which users can link different SDN networks to a processing algorithm. Such an algorithm allows for, e.g., adapting flows by building a pipeline of plugins that either invoke other SDN operations or generic data-processing algorithms. The Domino framework is evaluated through a proof-of-concept implementation tested against the initial requirements. It achieves modifiability and interoperability with an average successful-exchange ratio of 99.99%. The performance requirements are met for the frequently used commands, with an average response time of 0.26 seconds, and the framework can handle at least 72 plugins simultaneously, depending on the available amount of RAM. The framework is further evaluated by implementing a shortest-path routing algorithm between heterogeneous SDN networks.
80.
Most simulations of colloidal suspensions treat the solvent implicitly or as a continuum. As particle size decreases to the nanometer scale, however, this approximation fails and the solvent must be treated explicitly. Due to the large number of smaller solvent particles, such simulations are computationally challenging. Additionally, as the ratio of nanoparticle size to solvent size increases, commonly used molecular dynamics algorithms for neighbor finding and parallel communication become inefficient. Here we present modified algorithms that enable fast single-processor performance and reasonable parallel scalability for mixtures with a wide range of particle-size ratios. The methods developed are applicable to any system with widely varying force distance cutoffs, independent of particle sizes and of the interaction potential. As a demonstration of the new algorithms' effectiveness, we present results for the pair correlation function and diffusion constant of mixtures in which colloidal particles interact via integrated potentials. In these systems, with nanoparticles 20 times larger than the surrounding solvent particles, our parallel molecular dynamics code runs more than 100 times faster using the new algorithms.
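The core difficulty this abstract addresses, interaction ranges that differ strongly between pairs of particle types, can be illustrated with a minimal neighbor-list sketch that uses a distinct cutoff per type pair. This is an O(N²) toy version under assumed data layouts; production codes replace the double loop with size-specific cell lists and stencils, which is where the reported speedups come from.

```python
import itertools
import math

def build_neighbor_lists(positions, types, cutoffs):
    """Neighbor lists with a distinct distance cutoff per pair of particle types.

    cutoffs maps a frozenset of two type labels to the cutoff for that pair, so
    long-ranged colloid-colloid pairs and short-ranged solvent-solvent pairs
    each get an appropriate interaction range instead of one global cutoff.
    """
    neighbors = {i: [] for i in range(len(positions))}
    for i, j in itertools.combinations(range(len(positions)), 2):
        rc = cutoffs[frozenset((types[i], types[j]))]
        if math.dist(positions[i], positions[j]) <= rc:
            neighbors[i].append(j)
            neighbors[j].append(i)
    return neighbors

# Toy system: one large colloid at the origin and two small solvent particles.
pos = [(0.0, 0.0, 0.0), (3.0, 0.0, 0.0), (0.0, 1.2, 0.0)]
typ = ["colloid", "solvent", "solvent"]
rc = {
    frozenset(("colloid", "colloid")): 10.0,
    frozenset(("colloid", "solvent")): 5.0,
    frozenset(("solvent", "solvent")): 1.0,  # solvent pairs are short-ranged
}
print(build_neighbor_lists(pos, typ, rc))  # → {0: [1, 2], 1: [0], 2: [0]}
```

With a single global cutoff sized for the colloid-colloid range, every solvent particle would scan enormous neighborhoods; per-pair cutoffs keep the solvent-solvent search cheap, which is the inefficiency the modified algorithms are designed to remove.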