838 results found.
1.
Principal component regression (PCR), partial least squares (PLS), stepwise ordinary least squares regression (OLS), and back-propagation artificial neural network (BP-ANN) models are applied here to determine the propylene concentration of a set of 83 production samples of ethylene–propylene copolymers from their infrared spectra. The set of available samples was split into (a) a training set, for model calculation; (b) a test set, for selecting the correct number of latent variables in PCR and PLS and the end point of the training phase of BP-ANN; and (c) a production set, for evaluating the predictive ability of the models. The predictive ability of the models is thus evaluated by genuine predictions. The model obtained by stepwise OLS turned out to be the best one, both in fitting and in prediction. A study of the minimum number of samples to be included in the training set showed that at least 52 experiments are necessary to build a reliable and predictive calibration model. It can be concluded that FTIR spectroscopy and OLS can be properly employed for monitoring the synthesis or the final product of ethylene–propylene copolymers, by predicting the concentration of propylene directly along the process line. © 2008 Wiley Periodicals, Inc. J Appl Polym Sci, 2008
2.
A rate-1/n binary generic convolutional encoder is a shift-register circuit whose inputs are information bits and whose outputs are blocks of n bits generated as linear combinations of the appropriate shift-register contents. The output of a convolutional encoder can be decoded with the well-known Viterbi algorithm. The communication pattern of the Viterbi algorithm is given by a graph, called the trellis, associated with the state diagram of the corresponding encoder. In this paper we present a methodology that permits the efficient mapping of the Viterbi algorithm onto a column of an arbitrary number of processors. This is done by representing the data flow with mathematical operators that have an immediate hardware projection. By studying the data flow of feed-forward and feedback encoders, a single operator string has been obtained that represents a generic encoder. The formal model developed is employed to partition the computations among an arbitrary number of processors in such a way that the data are recirculated, optimizing the use of the processors and the communications. As a result, we obtain a highly regular and modular architecture suitable for VLSI implementation.
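A minimal illustration of the trellis-based decoding the abstract refers to. The (7, 5) octal generators and hard-decision Hamming metric are common textbook choices, not necessarily the paper's; the multiprocessor mapping itself is not shown.

```python
# Rate-1/2 convolutional code, generators (7, 5) octal, constraint length 3,
# decoded by a textbook Viterbi search over the 4-state trellis.
G = [0b111, 0b101]   # generator polynomials
K = 3                # constraint length -> 2**(K-1) = 4 trellis states

def encode(bits):
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)       # shift in the new bit
        out += [bin(state & g).count("1") % 2 for g in G]  # parity per generator
    return out

def viterbi(received):
    n_states = 1 << (K - 1)
    metric = [0] + [float("inf")] * (n_states - 1)  # start in the all-zero state
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(received), 2):
        block = received[i:i + 2]
        new_metric = [float("inf")] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            for b in (0, 1):
                full = ((s << 1) | b) & ((1 << K) - 1)     # full register contents
                ns = full & (n_states - 1)                 # next trellis state
                expected = [bin(full & g).count("1") % 2 for g in G]
                m = metric[s] + sum(x != y for x, y in zip(block, expected))
                if m < new_metric[ns]:                     # keep the survivor path
                    new_metric[ns], new_paths[ns] = m, paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[metric.index(min(metric))]
```

For example, flipping one bit of `encode([1, 0, 1, 1, 0, 0])` still decodes to the original message, since this code's free distance (5) allows single-error correction.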
3.
The kinetics of alcoholic fermentation by a strain of Zymomonas mobilis isolated from sugarcane juice has been studied with the objective of determining the constants of a non-structured mathematical model that represents the fermentation process. Assays in batch and in continuous culture were carried out with different initial concentrations of glucose, and the final concentrations of glucose, ethanol, and biomass were determined. The following kinetic parameters were obtained: μmax, 0.5 h⁻¹; Ks, 4.64 g dm⁻³; Pmax, 106 g dm⁻³; Yx/s, 0.0265 g g⁻¹; m, 1.4 g g⁻¹ h⁻¹; α, 17.38 g g⁻¹; β, 0.69 g g⁻¹ h⁻¹.
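A short sketch of how such an unstructured model can be evaluated with the reported parameters. The Monod form with linear ethanol inhibition and the Luedeking–Piret production law are assumed model structures commonly used for Zymomonas mobilis; the paper's exact equations may differ.

```python
# Assumed unstructured model built from the abstract's fitted parameters.
MU_MAX, KS = 0.5, 4.64     # h^-1, g dm^-3
P_MAX = 106.0              # g dm^-3 (ethanol level that halts growth)
ALPHA, BETA = 17.38, 0.69  # g g^-1, g g^-1 h^-1 (Luedeking-Piret)

def specific_growth_rate(S, P):
    """Monod growth on glucose S with linear inhibition by ethanol P."""
    return MU_MAX * S / (KS + S) * max(0.0, 1.0 - P / P_MAX)

def specific_production_rate(S, P):
    """Luedeking-Piret: growth-associated (alpha) plus non-growth (beta) terms."""
    return ALPHA * specific_growth_rate(S, P) + BETA

# At high glucose and no ethanol, growth approaches mu_max.
print(specific_growth_rate(100.0, 0.0))
```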
4.
This paper aims to establish appropriate guidelines for the mineral content of potable water produced by desalination. The inadequacy of such specifications has led to a number of conflicting interpretations of contract documents, or to a product which was not optimal from an economic or consumer point of view. An optimal choice under both these constraints can only be made through the evaluation and comparison of feasible alternatives of plant and mineral combinations.

The optimal ranges of TDS and the most suitable ionic content of water produced by desalination are indicated in terms of the available general criteria and standards. The composition of the raw product water from a desalination plant depends on the type of plant, and in some plants (e.g. RO plants) it is difficult to achieve an initial composition which can be remineralized to produce optimal blends. The possible courses of action to make these waters more pleasant to drink are discussed and their economic ramifications are explored.

To complement the conclusions reached above, a series of taste tests was carried out. Different compositions of remineralized distilled water (i.e. with acidification by CO2 and addition of CaCO3, with other salts to produce hardness, and with different quantities of seawater) were compared with tap water and mineral water by a sample of about 200 people in Sydney, Australia. The testers' reactions are analysed and related to the optimal remineralizations discussed above.
5.

Due to the increasing size and complexity of computer systems, reducing the overhead of fault-tolerance techniques has become important in recent years. One such technique is checkpointing, which saves a snapshot of the information computed up to a specific moment, suspending the execution of the application and consuming I/O resources and network bandwidth. Characterizing the files generated when checkpointing a parallel application is useful for determining the resources consumed and their impact on the I/O system. It is also important to characterize the application that performs the checkpoints, and one such characteristic is whether the application does I/O. In this paper, we present a model of checkpoint behavior for parallel applications that perform I/O; it depends on the application and on other factors such as the number of processes, the mapping of processes, and the type of I/O used. These characteristics also influence scalability, the resources consumed, and their impact on the I/O system. Our model describes the behavior of the checkpoint size based on the characteristics of the system and the type (or model) of I/O used, such as the number of I/O aggregator processes, the buffer size used by the two-phase I/O optimization technique, and the components of collective file-I/O operations. The BT benchmark and FLASH I/O are analyzed under different configurations of aggregator processes and buffer size to explain our approach. The model can be useful when selecting which checkpoint configuration is most appropriate for the application's characteristics and the resources available. Thus, the user can know how much storage space the checkpoint consumes and how much the application consumes, in order to establish policies that help improve the distribution of resources.

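As a rough illustration of the kind of model described above, the sketch below estimates total checkpoint size from the process count, the number of I/O aggregators, and the two-phase-I/O buffer size. The linear form and all parameter names are illustrative assumptions, not the paper's fitted model.

```python
# Hypothetical checkpoint-size estimate: per-rank application state plus the
# staging buffers held by the two-phase collective-I/O aggregators.
def checkpoint_size_mib(data_per_process_mib, n_processes, n_aggregators, buffer_mib):
    """Total checkpoint footprint in MiB under the assumed linear model."""
    ranks_state = n_processes * data_per_process_mib   # snapshot of every rank
    staging = n_aggregators * buffer_mib               # two-phase I/O buffers
    return ranks_state + staging

# 64 ranks of 64 MiB each, 8 aggregators with 16 MiB buffers.
print(checkpoint_size_mib(64, 64, 8, 16))
```

A user could compare such estimates across aggregator/buffer configurations before committing storage, which is the kind of policy decision the abstract mentions.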
6.

Context

In recent years, many usability evaluation methods (UEMs) have been employed to evaluate Web applications. However, many of these applications still do not meet most customers' usability expectations, and many companies have folded as a result of not considering Web usability issues. No studies currently exist regarding either the use of usability evaluation methods for the Web or the benefits they bring.

Objective

The objective of this paper is to summarize the current knowledge that is available as regards the usability evaluation methods (UEMs) that have been employed to evaluate Web applications over the last 14 years.

Method

A systematic mapping study was performed to assess the UEMs that have been used by researchers to evaluate Web applications and their relation to the Web development process. Systematic mapping studies are useful for categorizing and summarizing the existing information concerning a research question in an unbiased manner.

Results

The results show that around 39% of the papers reviewed reported the use of evaluation methods that had been specifically crafted for the Web. The results also show that the most widely used type of method was User Testing. The results identify several research gaps, such as the fact that around 90% of the studies applied evaluations during the implementation phase of Web application development, which is the most costly phase in which to perform changes. A list of the UEMs found is also provided in order to guide novice usability practitioners.

Conclusions

From an initial set of 2703 papers, a total of 206 research papers were selected for the mapping study. The results obtained allowed us to reach conclusions concerning the state-of-the-art of UEMs for evaluating Web applications. This allowed us to identify several research gaps, which subsequently provided us with a framework in which new research activities can be more appropriately positioned, and from which useful information for novice usability practitioners can be extracted.
7.
Duchenne muscular dystrophy (DMD) is a rare genetic disease leading to progressive muscle wasting, respiratory failure, and cardiomyopathy. Although muscle fibrosis represents a DMD hallmark, the organisation of the extracellular matrix and the molecular changes in its turnover are still not fully understood. To define the architectural changes over time in muscle fibrosis, we used an mdx mouse model of DMD and analysed collagen and glycosaminoglycans/proteoglycans content in skeletal muscle sections at different time points during disease progression and in comparison with age-matched controls. Collagen significantly increased particularly in the diaphragm, quadriceps, and gastrocnemius in adult mdx, with fibrosis significantly correlating with muscle degeneration. We also analysed collagen turnover pathways underlying fibrosis development in cultured primary quadriceps-derived fibroblasts. Collagen secretion and matrix metalloproteinases (MMPs) remained unaffected in both young and adult mdx compared to wt fibroblasts, whereas collagen cross-linking and tissue inhibitors of MMP (TIMP) expression significantly increased. We conclude that, in the DMD model we used, fibrosis mostly affects diaphragm and quadriceps with a higher collagen cross-linking and inhibition of MMPs that contribute differently to progressive collagen accumulation during fibrotic remodelling. This study offers a comprehensive histological and molecular characterisation of DMD-associated muscle fibrosis; it may thus provide new targets for tailored therapeutic interventions.
8.
Background: Clinical diagnosis of Alzheimer's disease (AD) increasingly incorporates CSF biomarkers. However, due to the intrinsic variability of the immunodetection techniques used to measure these biomarkers, establishing in-house cutoffs defining the positivity/negativity of CSF biomarkers is recommended. Moreover, the currently published cutoffs are usually derived from cross-sectional datasets, providing no evidence of their intrinsic prognostic value when applied to real-world memory-clinic cases. Methods: We quantified CSF Aβ1-42, Aβ1-40, t-Tau, and p181Tau with the standard INNOTEST® ELISA and the Lumipulse G® chemiluminescence enzyme immunoassay (CLEIA) performed on the automated Lumipulse G600II. Determination of cutoffs included patients clinically diagnosed with probable Alzheimer's disease (AD, n = 37) and subjective cognitive decline subjects (SCD, n = 45) who were cognitively stable for 3 years and showed no evidence of brain amyloidosis on 18F-Florbetaben-labeled positron emission tomography (FBB-PET). To compare both methods, a subset of samples for Aβ1-42 (n = 519), t-Tau (n = 399), p181Tau (n = 77), and Aβ1-40 (n = 44) was analyzed. Kappa agreement of single biomarkers and the Aβ1-42/Aβ1-40 ratio was evaluated in an independent group of mild cognitive impairment (MCI) and dementia patients (n = 68). Next, the established cutoffs were applied to a large real-world cohort of MCI subjects with follow-up data available (n = 647). Results: Cutoff values of Aβ1-42 and t-Tau were higher for CLEIA than for ELISA and similar for p181Tau. Spearman coefficients ranged between 0.81 for Aβ1-40 and 0.96 for p181Tau. Passing–Bablok analysis showed a systematic and proportional difference for all biomarkers, but only a systematic one for Aβ1-40. Bland–Altman analysis showed an average difference between methods in favor of CLEIA. Kappa agreement for single biomarkers was good but lower for the Aβ1-42/Aβ1-40 ratio. Using the calculated cutoffs, we were able to stratify MCI subjects into four AT(N) categories. Kaplan–Meier analyses of the AT(N) categories demonstrated gradual and differential dementia conversion rates (p = 9.815 × 10⁻²⁷). Multivariate Cox proportional hazards models corroborated these findings, demonstrating that the proposed AT(N) classifier has prognostic value. AT(N) categories are only modestly influenced by other known factors associated with disease progression. Conclusions: We established CLEIA and ELISA internal cutoffs to discriminate AD patients from amyloid-negative SCD individuals. The results obtained by the two methods are not interchangeable but show good agreement. CLEIA is a good and faster alternative to manual ELISA for providing the AT(N) classification of our patients. AT(N) categories have an impact on disease progression. AT(N) classifiers increase the certainty of the MCI prognosis, which can be instrumental in managing real-world MCI subjects.
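The kappa agreement reported in the Results can be computed as in this sketch; the labels are made up and merely stand in for the ELISA/CLEIA positivity calls of the study's 68 patients.

```python
# Cohen's kappa between two raters' binary calls (1 = biomarker-positive),
# with invented labels in place of the study's ELISA and CLEIA results.
from sklearn.metrics import cohen_kappa_score

elisa = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]
cleia = [1, 1, 0, 0, 1, 0, 1, 1, 0, 1]  # one discordant call

print(cohen_kappa_score(elisa, cleia))  # 0.8 for these labels
```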
9.
In pretreatment tumor samples of EGFR-mutated non-small cell lung cancer (NSCLC) patients, the EGFR-Thr790Met mutation has been detected at a variable prevalence by different ultrasensitive assays, with controversial prognostic value. Furthermore, its detection in liquid biopsy (LB) samples remains challenging, hampered by the shortage of circulating tumor DNA (ctDNA). Here, we describe the technical validation and clinical implications of a real-time PCR with peptide nucleic acid (PNA-Clamp) and digital droplet PCR (ddPCR) for EGFR-Thr790Met detection in diagnostic FFPE samples and in LB. The limit of blank (LOB) and limit of detection (LOD) were established by analyzing negative and low variant allele frequency (VAF) FFPE and LB specimens. In a cohort of 78 FFPE samples, both techniques showed an overall agreement (OA) of 94.20%. EGFR-Thr790Met was detected in 26.47% of cases and was associated with better progression-free survival (PFS) (16.83 ± 7.76 vs. 11.47 ± 1.83 months; p = 0.047). In LB, ddPCR was implemented in routine diagnostics under UNE-EN ISO 15189:2013 accreditation, increasing the detection rate from 32.43% with conventional methods to 45.95%. During follow-up, ddPCR detected EGFR-Thr790Met up to 7 months before radiological progression. Extensively validated ultrasensitive assays might decipher the utility of pretreatment EGFR-Thr790Met testing and improve its detection rate in LB studies, even anticipating radiological progression.
10.
Wireless Networks - This paper presents a structural equation model that relates knowledge coordination with access to information in the process of implementing Six Sigma and their impact on the...