Article Search
Articles by access type:
Subscription full text: 2,929
Free: 25
Domestic free: 1
Articles by discipline:
Electrical engineering: 7
General: 1
Chemical industry: 97
Metalworking: 7
Machinery and instruments: 8
Building science: 34
Mining engineering: 4
Energy and power: 14
Light industry: 59
Water conservancy engineering: 6
Petroleum and natural gas: 7
Radio and electronics: 75
General industrial technology: 113
Metallurgical industry: 2,386
Nuclear technology: 10
Automation technology: 127
Articles by publication year:
2023: 10
2022: 9
2021: 13
2020: 7
2019: 12
2018: 21
2017: 11
2016: 7
2015: 18
2014: 14
2013: 39
2012: 18
2011: 22
2010: 26
2009: 26
2008: 23
2007: 28
2006: 20
2005: 15
2004: 18
2003: 22
2002: 15
2001: 12
2000: 11
1999: 91
1998: 684
1997: 419
1996: 253
1995: 156
1994: 137
1993: 138
1992: 40
1991: 50
1990: 44
1989: 48
1988: 38
1987: 36
1986: 27
1985: 32
1984: 9
1983: 13
1982: 20
1981: 15
1980: 25
1979: 7
1978: 16
1977: 70
1976: 120
1974: 6
1966: 6
2,955 query results in total; search time 15 ms.
51.
Tests comparing image sets can play a critical role in PET research, providing a yes-no answer to the question "Are two image sets different?" The statistical goal is to determine how often observed differences would occur by chance alone. We examined randomization methods to provide several omnibus tests for PET images and compared these tests with two currently used methods. In the first series of analyses, normally distributed image data were simulated fulfilling the requirements of standard statistical tests. These analyses generated power estimates and compared the various test statistics under optimal conditions. Varying whether the standard deviations were local or pooled estimates provided an assessment of a distinguishing feature between the SPM and Montreal methods. In a second series of analyses, we more closely simulated current PET acquisition and analysis techniques. Finally, PET images from normal subjects were used as an example of randomization. Randomization proved to be a highly flexible and powerful statistical procedure. Furthermore, the randomization test does not require the extensive and unrealistic statistical assumptions made by standard procedures currently in use.
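The abstract does not spell out its test statistics, but the core idea of a randomization (permutation) omnibus test is easy to illustrate. The sketch below is a minimal illustration rather than the authors' procedure: it permutes group labels between two simulated image sets and uses the maximum absolute pooled-variance t value across voxels as the omnibus statistic; the array shapes and parameter values are assumptions.

```python
import numpy as np

def randomization_test(set_a, set_b, n_perm=1000, seed=0):
    """Omnibus randomization test: set_a, set_b have shape (n_subjects, n_voxels)."""
    rng = np.random.default_rng(seed)
    data = np.vstack([set_a, set_b])
    n_a = set_a.shape[0]

    def max_t(labels):
        a, b = data[labels], data[~labels]
        # Pooled-variance two-sample t statistic at every voxel
        sp = np.sqrt(((a.var(0, ddof=1) * (len(a) - 1) +
                       b.var(0, ddof=1) * (len(b) - 1)) /
                      (len(a) + len(b) - 2)) *
                     (1 / len(a) + 1 / len(b)))
        return np.max(np.abs((a.mean(0) - b.mean(0)) / sp))

    labels = np.array([True] * n_a + [False] * set_b.shape[0])
    observed = max_t(labels)

    # Re-label images at random and recompute the omnibus statistic
    count = 0
    for _ in range(n_perm):
        count += max_t(rng.permutation(labels)) >= observed
    return (count + 1) / (n_perm + 1)

# Example: two simulated "image" sets, 8 subjects x 1000 voxels each
a = np.random.default_rng(1).normal(size=(8, 1000))
b = np.random.default_rng(2).normal(size=(8, 1000)) + 0.2
print(randomization_test(a, b))
```

Because the null distribution comes from the relabellings themselves, no distributional assumptions beyond exchangeability are needed, which is the flexibility the abstract emphasizes.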
52.
53.
OBJECTIVES: We investigated whether context or different speech rates could improve older adults' performance on identification of synthetically generated words. BACKGROUND: Synthetic speech systems can potentially improve the daily functioning of older adults. However, research must determine whether older adults can effectively use current text-to-speech technologies, which few studies have examined. Older adults' sensory and cognitive declines may cause difficulties in identifying words in synthetic speech. METHODS: Ninety-six participants (young, middle-aged, and older adults) identified auditory monosyllabic words (half natural, half synthetic) presented in isolation or at the ends of sentences. Participants heard speech at either normal or slower rates. RESULTS: We found an interaction of age, context, and voice type, and slower speech rates worsened performance for all groups. Contrasts revealed that context reduced age differences, though only for natural speech. Hearing acuity was highly correlated with age and fully accounted for the interaction. CONCLUSIONS: Context improves performance for everyone in natural speech. However, whereas context improves performance for synthetic speech, it does not differentially reduce the age impairment for older adults. A slower speech rate generally impairs performance compared with the normal rate. APPLICATIONS: Systems using synthetic speech should avoid presenting words in isolation, and rich contextual support should be consistently adopted. Synthetic speech fidelity must be improved significantly before becoming truly useful for older adult populations.
54.
The MR studies of three histologically proven spinal neurilemmomas and neurofibromas were reviewed retrospectively. There were two benign neurilemmomas (schwannomas) and one neurofibroma. The common characteristic of these cases was a central low intensity focus ("dot") seen on postcontrast T1-weighted imaging. The low intensity foci corresponded histologically to a congeries of changes including edema, microcysts, foam cells, hyalinization of blood vessels, old hemorrhage, and dystrophic calcification.
55.
A healthy adolescent boy was treated on two occasions for an overdose of chlorpropamide (Diabinese). Glucose therapy alone was not sufficient to control the hypoglycemia, but the administration of glucose plus diazoxide raised the blood sugar to supranormal levels. A bolus of intravenous glucagon briefly raised the blood sugar level to within normal limits and increased the blood ketones, but also augmented insulin secretion. An overdose of sulfonylurea may cause prolonged and fatal hypoglycemia. Rational therapy, both in diabetic and normal persons, is glucose plus an "insulin antagonist." The administration of diazoxide was effective in our patient, substantially reducing the plasma insulin level; this agent may be the "insulin antagonist" of choice for use in sulfonylurea-induced hypoglycemia.
56.
Symbolic connectionism in natural language disambiguation (total citations: 1; self-citations: 0; citations by others: 1)
Natural language understanding involves the simultaneous consideration of a large number of different sources of information. Traditional methods employed in language analysis have focused on developing powerful formalisms to represent syntactic or semantic structures, along with rules for transforming language into these formalisms. However, they make use of only small subsets of knowledge. This article describes how to use the whole range of information through a neurosymbolic architecture that hybridizes a symbolic network with subsymbol vectors generated from a connectionist network. Besides initializing the symbolic network with prior knowledge, the subsymbol vectors are used to enhance the system's capability in disambiguation and provide flexibility in sentence understanding. The model captures a diversity of information, including word associations, syntactic restrictions, case-role expectations, semantic rules, and context. It attains highly interactive processing by representing knowledge in an associative network on which actual semantic inferences are performed. An integrated use of previously analyzed sentences in understanding is another important feature of our model. The model dynamically selects one hypothesis among multiple hypotheses. This notion is supported by three simulations, which show that the degree of disambiguation relies both on the amount of linguistic rules and on the semantic-associative information available to support the inference processes in natural language understanding. Unlike many similar systems, our hybrid system is more sophisticated in tackling language disambiguation problems by using linguistic clues from disparate sources as well as modeling context effects in the sentence analysis. It is potentially more powerful than systems relying on a single processing paradigm.
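The abstract does not give the scoring function that combines the symbolic and subsymbolic evidence. The toy sketch below shows one plausible way such a hybrid could weigh rule-based (case-role) constraints against associative similarity from subsymbol vectors; the vectors, rules, and weights are hypothetical placeholders, not the paper's model.

```python
import numpy as np

# Hypothetical subsymbol vectors (stand-ins for vectors learned by a
# connectionist network; values here are hand-picked for illustration).
VEC = {
    "bank_river":   np.array([0.9, 0.1, 0.0]),
    "bank_finance": np.array([0.1, 0.9, 0.2]),
    "water":        np.array([0.8, 0.0, 0.1]),
    "loan":         np.array([0.0, 0.9, 0.3]),
}

# Symbolic constraints: simple case-role style rules that award a bonus
# when their condition holds in the sentence context.
RULES = [
    lambda sense, ctx: 1.0 if sense == "bank_finance" and "loan" in ctx else 0.0,
    lambda sense, ctx: 1.0 if sense == "bank_river" and "water" in ctx else 0.0,
]

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def disambiguate(senses, context, w_symbolic=1.0, w_subsymbolic=1.0):
    """Pick the sense whose combined symbolic + subsymbolic evidence is largest."""
    scores = {}
    for s in senses:
        rule_score = sum(r(s, context) for r in RULES)
        assoc_score = max(cosine(VEC[s], VEC[c]) for c in context if c in VEC)
        scores[s] = w_symbolic * rule_score + w_subsymbolic * assoc_score
    return max(scores, key=scores.get), scores

print(disambiguate(["bank_river", "bank_finance"], ["water"]))
```

The weights mirror the idea that symbolic rules and associative evidence can each dominate when the other source is weak, which is the disambiguation behavior the simulations in the article explore.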
57.
Speculative multithreading (SpMT) promises to be an effective mechanism for parallelizing nonnumeric programs, which tend to have irregular and pointer-intensive data structures and complex flows of control. Proper thread formation is crucial for obtaining good speedup in an SpMT system. This paper presents a compiler framework for partitioning a sequential program into multiple threads for parallel execution in an SpMT system. This framework is very general and supports speculative threads, nonspeculative threads, loop-centric threads, and out-of-order thread spawning. It is therefore useful for compiling for a wide variety of SpMT architectures. For effective partitioning of programs, the compiler uses profiling, interprocedural pointer analysis, data dependence information, and control dependence information. The compiler is implemented on the SUIF-MachSUIF platform. A simulation-based evaluation of the generated threads shows that the use of nonspeculative threads and nonloop speculative threads provides a significant increase in speedup for nonnumeric programs.
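The paper's partitioning heuristics draw on profiling, interprocedural pointer analysis, and data/control dependence information, none of which is detailed in the abstract. As a loose illustration of dependence-aware thread formation, the sketch below greedily cuts a profiled basic-block sequence into candidate threads, starting a new thread once the current one is large enough and the next block would introduce few cross-thread data dependences; all thresholds and names are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Block:
    name: str
    insns: int                                # profiled dynamic instruction count
    deps: set = field(default_factory=set)    # names of blocks this one depends on

def partition(blocks, min_size=30, max_size=200, max_cross_deps=2):
    """Greedy thread formation over a linear sequence of profiled blocks."""
    threads, current, members, size = [], [], set(), 0
    for b in blocks:
        cross = len(b.deps - members)          # dependences reaching back into earlier threads
        if current and size >= min_size and (cross <= max_cross_deps or size >= max_size):
            threads.append(current)            # cut here: start a new candidate thread
            current, members, size = [], set(), 0
        current.append(b)
        members.add(b.name)
        size += b.insns
    if current:
        threads.append(current)
    return threads

cfg = [Block("b0", 40), Block("b1", 25, {"b0"}), Block("b2", 60, {"b0"}),
       Block("b3", 80, {"b2"}), Block("b4", 35, {"b1", "b3"})]
for i, t in enumerate(partition(cfg)):
    print(f"thread {i}: {[b.name for b in t]}")
```

A real SpMT compiler would additionally weigh control-dependence boundaries and misspeculation cost, but the size-versus-dependence trade-off above is the essence of why proper thread formation matters for speedup.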
58.
While computable general equilibrium (CGE) models are a well-established tool in economic analyses, it is often difficult to disentangle the effects of policies of interest from those of the assumptions made regarding the underlying calibration data and model parameters. To characterize the behavior of a CGE model of carbon output with respect to two of these assumptions, we perform a large-scale Monte Carlo experiment to examine its sensitivity to base-year calibration data and elasticity of substitution parameters in the absence of a policy change. By examining a variety of output variables at different levels of economic and geographic aggregation, we assess how these forms of uncertainty impact the conclusions that can be drawn from the model simulations. We find greater sensitivity to uncertainty in the elasticity of substitution parameters than to uncertainty in the base-year data as the projection period increases. While many model simulations were conducted to generate large output samples, we find that few are required to capture the mean model response of the variables tested. However, characterizing standard errors and empirical probability distribution functions is not possible without a large number of simulations.
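A miniature version of such a Monte Carlo sensitivity experiment can be sketched with a stand-in projection function (the actual CGE model is far richer). The code below samples hypothetical distributions for an elasticity-of-substitution parameter and a base-year output level, propagates them through a toy projection, and contrasts how quickly the mean stabilizes with how many draws the tail percentiles need; every distribution and constant here is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_carbon_projection(elasticity, base_output, years=20, growth=0.02):
    """Stand-in for a CGE carbon-output projection (NOT the paper's model).

    Output grows from the base-year level; the hypothetical elasticity of
    substitution damps growth as fuels are substituted away over time."""
    return base_output * (1 + growth / (1 + elasticity)) ** years

n_sims = 5000
# Uncertain inputs: elasticity parameter and base-year calibration level
elasticities = rng.lognormal(mean=0.0, sigma=0.3, size=n_sims)
base_outputs = rng.normal(loc=100.0, scale=5.0, size=n_sims)

draws = toy_carbon_projection(elasticities, base_outputs)

print("mean projection:", draws.mean())
print("std. error of mean:", draws.std(ddof=1) / np.sqrt(n_sims))
# The mean stabilises with relatively few draws...
print("mean from first 100 draws:", draws[:100].mean())
# ...but the empirical distribution's tails need many more to pin down
print("5th/95th percentiles:", np.percentile(draws, [5, 95]))
```

Lengthening `years` in this toy setup also makes the elasticity term dominate the spread of `draws`, mirroring the abstract's finding that parameter uncertainty matters more than base-year uncertainty as the projection period grows.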
59.
This paper introduces DAFMAC (Decode And Forward MAC), a scalable opportunistic cooperative retransmission enhancement for the IEEE 802.11 MAC protocol which operates without the need for additional explicit control signalling. Distributed opportunistic retransmission algorithms rely on selecting a single suitable relay without direct arbitration between nodes. Simulations show that DAFMAC offers a significant improvement in fairness for both throughput and jitter, giving multiple parallel data flows a more equal opportunity to utilise the channel. DAFMAC cooperative retransmissions are shown to reduce node energy consumption for a given throughput. Further, the DAFMAC relay selection algorithm is shown to scale very well in terms of complexity and memory requirements in comparison to other cooperative retransmission schemes.
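The abstract does not specify DAFMAC's relay-selection metric or timing rules. The sketch below only illustrates the general pattern behind signalling-free opportunistic relaying: each node that decoded the failed frame maps a local link-quality estimate to a contention backoff, and the earliest transmitter implicitly becomes the relay while the others defer; the metric and slot counts are hypothetical.

```python
import random

def relay_backoff(link_quality, max_slots=16):
    """Map a hypothetical link-quality estimate in [0, 1] to a contention slot.

    Better relays pick earlier slots, so the strongest candidate tends to
    transmit first and the others, overhearing it, cancel their own attempt."""
    return int((1.0 - link_quality) * (max_slots - 1)) + random.randint(0, 1)

def elect_relay(candidates):
    """candidates: node -> link-quality estimate, for nodes that decoded the frame."""
    slots = {node: relay_backoff(q) for node, q in candidates.items()}
    return min(slots, key=slots.get)           # earliest slot wins the channel

overhearers = {"A": 0.92, "B": 0.55, "C": 0.70}
print("selected relay:", elect_relay(overhearers))
```

Because each candidate decides from purely local information, no extra control frames are exchanged, which is the property the abstract highlights; the small random term is one common way to break ties between similar candidates.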
60.
A first-order Markov model of program behaviour is developed from FORTRAN program instruction data. The program model is evaluated by using it to generate page references as input to a simple virtual memory operating system (VMOS) simulation model. The actual trace data are also used to drive the VMOS model. In both cases the fault probability is obtained for different replacement rules, memory sizes, and page sizes. A comparison of fault probabilities is used to determine the effectiveness of the Markov program model.
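A compact version of the described pipeline, fitting a first-order Markov chain to a page-reference trace, generating synthetic references from it, and comparing fault probabilities under a replacement rule, can be sketched as below. The trace here is random stand-in data and LRU is used as the example replacement rule; the original study used FORTRAN-derived traces and several rules, memory sizes, and page sizes.

```python
import numpy as np
from collections import OrderedDict

def fit_markov(trace, n_pages):
    """Estimate a first-order Markov transition matrix from a page-reference trace."""
    counts = np.ones((n_pages, n_pages))          # add-one smoothing
    for a, b in zip(trace, trace[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def generate(P, start, length, rng):
    """Generate a synthetic reference string by walking the Markov chain."""
    refs, page = [start], start
    for _ in range(length - 1):
        page = rng.choice(len(P), p=P[page])
        refs.append(page)
    return refs

def lru_fault_rate(refs, frames):
    """Fault probability for an LRU-managed memory of `frames` page frames."""
    mem, faults = OrderedDict(), 0
    for p in refs:
        if p in mem:
            mem.move_to_end(p)
        else:
            faults += 1
            if len(mem) >= frames:
                mem.popitem(last=False)
            mem[p] = True
    return faults / len(refs)

rng = np.random.default_rng(0)
# Hypothetical trace standing in for the FORTRAN instruction-derived data
trace = list(rng.integers(0, 8, size=5000))
P = fit_markov(trace, 8)
synthetic = generate(P, trace[0], 5000, rng)
for frames in (2, 4, 6):
    print(frames, "frames: actual",
          round(lru_fault_rate(trace, frames), 3), "vs model",
          round(lru_fault_rate(synthetic, frames), 3))
```

Comparing the two fault-rate columns across frame counts is the same effectiveness check the abstract describes: a good program model should reproduce the fault probabilities obtained from the real trace.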