171.
This paper proposes a scenario-based two-stage stochastic programming model with recourse for master production scheduling under demand uncertainty. We integrate the model into a hierarchical production planning and control system of the kind common in industrial practice. To mitigate the problem of disaggregating the master production schedule, we use a relatively low aggregation level (compared to other work on stochastic programming for production planning). Consequently, we must consider many more scenarios to model demand uncertainty. Additionally, we modify standard modelling approaches for stochastic programming, because they lead to many infeasible problems due to rolling planning horizons and interdependencies between master production scheduling and successive planning levels. To evaluate the performance of the proposed models, we generate a customer order arrival process, execute production planning in a rolling horizon environment, and simulate the realisation of the planning results. In our experiments, the tardiness of customer orders can be nearly eliminated by the proposed stochastic programming model, at the cost of increased inventory levels and additional capacity usage.
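The recourse structure of such a model can be illustrated with a deliberately tiny sketch (hypothetical costs and demand scenarios, enumeration in place of an LP solver): the first-stage decision is a single production quantity, and the second stage pays holding or backlog costs once a demand scenario is realised.

```python
# Minimal sketch of scenario-based two-stage planning with recourse.
# Hypothetical data; the paper's actual model is a much larger LP
# embedded in a hierarchical planning system.

def expected_cost(x, scenarios, hold=1.0, backlog=10.0):
    """First stage: produce x units. Second stage (recourse): pay a
    holding cost for leftover stock or a backlog cost for unmet demand,
    weighted by scenario probability."""
    return sum(p * (hold * max(0, x - d) + backlog * max(0, d - x))
               for d, p in scenarios)

def best_plan(scenarios, candidates):
    """Enumerate candidate master-schedule quantities and keep the one
    with minimum expected recourse cost."""
    return min(candidates, key=lambda x: expected_cost(x, scenarios))

# Three demand scenarios as (demand, probability) pairs.
scenarios = [(80, 0.3), (100, 0.5), (120, 0.2)]
plan = best_plan(scenarios, range(0, 151))  # hedges toward high demand
```

With backlog cost ten times the holding cost, the optimal plan covers the highest demand scenario, mirroring the paper's observation that tardiness is bought down with extra inventory.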
173.
This article studied the effects of low-velocity impact on failure stresses and stiffness using a pendulum test. The specimens were of variable depth (20, 30, and 40 mm), with a width of 50 mm, a length of 650 mm, and a span length of 480 mm. The smallest specimen depth was similar to the specimen sizes tested in the literature used to create the duration-of-load curve, while the largest specimens are considered structural size. The impact was predicted using a numerical approach with Euler–Bernoulli as well as Timoshenko beam theory, combined with a plastic contact law. The models were validated for impact from a low release angle (where the beam remained elastic), but could use improvement in the force prediction at high incidence velocities. The measured force signals were used as forcing functions to obtain the dynamic failure stresses for all of the evaluated specimens, and the Timoshenko–Goens–Hearmon method was used to derive the dynamic modulus of elasticity E. The resulting strain rates ranged from 9.11 × 10⁻⁵ s⁻¹ for the quasi-static specimens up to 25 s⁻¹ for the greatest incidence velocity. The results from this study suggest different duration-of-load factors than the Madison Curve, influencing the design of structures subjected to dynamic loading.
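A minimal sketch of the frequency-based stiffness estimate is given below, using Euler–Bernoulli theory only; the Timoshenko–Goens–Hearmon method used in the study additionally corrects for shear deformation and rotary inertia, which matters for deep beams. All numbers here are illustrative assumptions, not the paper's data.

```python
import math

def dynamic_E(f1, L, rho, A, I, lam1=4.7300):
    """Estimate the dynamic modulus E (Pa) from the first free-free
    flexural frequency f1 (Hz) of a prismatic beam, Euler-Bernoulli
    theory:  f1 = (lam1**2 / (2*pi*L**2)) * sqrt(E*I / (rho*A)),
    solved for E.  lam1 = 4.730 is the first free-free eigenvalue.
    Shear and rotary-inertia corrections are deliberately omitted."""
    return (2 * math.pi * f1 * L**2 / lam1**2) ** 2 * rho * A / I
```

For a 650 mm beam of 50 mm × 20 mm cross-section this inverts the measured fundamental frequency directly into a stiffness estimate.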
174.
BACKGROUND: Calculations based on the LQ model have focused on the possible radiobiological equivalence between conventional continuous low dose rate irradiation (CLDR) and superfractionated irradiation (PDR = pulsed dose rate), provided that the same total dose is prescribed in the same overall time as with the low dose rate. A clinically usable fractionation scheme for brachytherapy was recommended by Brenner and Hall and is intended to replace classical CLDR brachytherapy with line sources by an afterloading technique using a stepping source. The hypothesis that LDR equivalency can be achieved by superfractionation was tested by means of in vitro experiments on V79 cells in monolayer and spheroid cultures as well as on HeLa monolayers. MATERIALS AND METHODS: Simulating the clinical situation in PDR brachytherapy, fractionation experiments were carried out in the dose rate gradient of afterloading sources. Different dose levels were produced with the same number of fractions in the same overall incubation time. The fractionation schedules compared with a CLDR reference curve were: 40 x 0.47 Gy, 20 x 0.94 Gy, 10 x 1.88 Gy, 5 x 3.76 Gy, 2 x 9.4 Gy given over a period of 20 h, and 1 x 18.8 Gy as a "single dose" exposure. The influence of the dose rate in the pulse on cell survival and on cell cycle distribution under superfractionation was examined on V79 cells by flow cytometry. RESULTS: V79 spheroids, as a model for a slowly growing tumor, reacted according to the radiobiological calculations: CLDR equivalency was achieved with increasing fractionation. Rapidly growing V79 monolayer cells showed an inverse fractionation effect: superfractionated irradiation with pulses of 0.94 Gy/h or 0.47 Gy/0.5 h was significantly more effective than CLDR irradiation. This inverse fractionation effect in log-phase V79 cells could be attributed to the accumulation of cycling cells in the radiosensitive G2/M phase (G2 block) during protracted exposure, which was drastically more pronounced for the pulsed scheme. HeLa cells were rather insensitive to changes of fractionation: superfractionation as well as hypofractionation yielded CLDR-equivalent survival curves. CONCLUSIONS: The fractionation scheme derived from the PDR theory to achieve CLDR-equivalent effects is valid for many cell lines, but not for all. Proliferation- and dose rate-dependent cell cycle effects modify predictions derived from the sublethal damage recovery model and can influence acute irradiation effects significantly. Dose rate sensitivity and rapid proliferation favour cell cycle effects and, applied to the clinical situation, substantiate the possibility of a higher effectiveness of pulsed irradiation on rapidly growing tumors.
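The LQ-model bookkeeping behind such schedule comparisons can be sketched as follows (hypothetical alpha and beta values, complete inter-pulse repair assumed; this is exactly the idealisation that the cell-cycle effects reported above can violate):

```python
import math

def surviving_fraction(n, d, alpha, beta):
    """Linear-quadratic survival after n well-separated fractions of
    dose d (Gy), assuming complete sublethal-damage repair between
    pulses and no cell-cycle or dose-rate effects:
        S = exp(-n * (alpha*d + beta*d**2))."""
    return math.exp(-n * (alpha * d + beta * d**2))

# The paper's schedules, all delivering the same 18.8 Gy total dose:
schedules = [(40, 0.47), (20, 0.94), (10, 1.88),
             (5, 3.76), (2, 9.4), (1, 18.8)]
```

At fixed total dose the alpha term is constant, so the model predicts survival to rise monotonically as fraction size shrinks; the inverse fractionation effect seen in log-phase V79 cells is precisely a departure from that prediction.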
175.
PURPOSE: The effect of glaucoma surgery can simply be measured in terms of intraocular pressure (IOP) reduction. However, IOP reduction is not the final goal of glaucoma surgery; the goal is long-term visual field preservation. Visual field preservation, on the other hand, can only be judged many years after surgery. Therefore, a formula used to define the success of a pressure-lowering operation soon after surgery must be based on IOP reduction, but the correlation between "successful IOP reduction" and "long-term visual field preservation" should be as high as possible. METHODS: In an attempt to find such a formula, we examined the long-term course with reference to both IOP and visual field in 108 patients (mean follow-up 7.9 years). Several criteria were tested for their ability to predict long-term visual field preservation. RESULTS: The best correlation was obtained by a combined criterion specifying both an absolute upper limit and a relative IOP decrease (i.e., postoperative IOP < 0.8 × preoperative IOP and postoperative IOP < 21 mmHg). CONCLUSION: We recommend use of this criterion whenever the effect of a pressure-lowering operation has to be estimated shortly after surgery.
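The recommended combined criterion is simple enough to state as a predicate (a direct transcription of the formula in the abstract, not the authors' code):

```python
def surgery_success(iop_pre, iop_post):
    """Combined success criterion from the study: a relative reduction
    of more than 20% AND an absolute postoperative IOP below 21 mmHg.
    Pressures are in mmHg."""
    return iop_post < 0.8 * iop_pre and iop_post < 21.0
```

Both conditions are needed: a large relative drop from a very high baseline can still leave the eye above 21 mmHg, and a small drop can land below 21 mmHg without the required 20% reduction.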
176.
This paper presents an assumption/commitment specification technique and a refinement calculus for networks of agents communicating asynchronously via unbounded FIFO channels in the tradition of Kahn.
  • We define two types of assumption/commitment specifications, namely simple and general specifications.
  • It is shown that semantically, any deterministic agent can be uniquely characterized by a simple specification, and any nondeterministic agent can be uniquely characterized by a general specification.
  • We define two sets of refinement rules, one for simple specifications and one for general specifications. The rules are Hoare-logic inspired. In particular the feedback rules employ invariants in the style of a traditional while-rule.
  • Both sets of rules have been proved to be sound and also (semantically) relatively complete.
  • Conversion rules allow the two logics to be combined. This means that general specifications and the rules for general specifications have to be introduced only at the point in a system development where they are really needed.
177.
This study addresses the need to reduce the risk of clogging when preparing samples for cell concentration, here CaSki cell lines (epidermoid cervical carcinoma cells). Aiming to develop a non-clogging microconcentrator, we proposed a new counter-flow concentration unit in which the penetrating flows meet the main flow at an obtuse angle, owing to the use of streamlined, turbine-blade-like micropillars. Based on the optimization results for the counter-flow unit profile, a fractal arrangement of the counter-flow concentration units was developed. A counter-flow microconcentrator chip was then designed and fabricated, with both the processing layer and the collecting layer arranged in a honeycomb structure. Visualized experiments with CaSki cell samples on the microconcentrator chip demonstrated that no cell-clogging phenomena occurred during the test and that no cells were found in the final filtrate. The test results show excellent concentration performance for the microconcentrator chip, with a concentrating ratio of >4 at flow rates below 1.0 ml/min. As only geometrical structure is employed in this passive device, the counter-flow microconcentrator can be easily integrated into advanced microfluidic systems. Owing to its non-clogging and continuous processing ability, the counter-flow microconcentrator is not only suitable for sample preparation in the biomedical field but also applicable to water-particle separation.
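The reported concentrating ratio follows from a simple mass balance when, as observed, no cells reach the filtrate (an idealised sketch, not the paper's device model):

```python
def concentrating_ratio(flow_in, flow_filtrate):
    """Ideal concentrating ratio of a counter-flow concentrator that
    retains all cells: with none lost to the filtrate, the cell
    concentration rises by the ratio of input flow to retained
    (collecting-layer) flow. Flows are in ml/min."""
    flow_retained = flow_in - flow_filtrate
    if flow_retained <= 0:
        raise ValueError("filtrate flow must be less than input flow")
    return flow_in / flow_retained
```

For example, diverting three quarters of a 1.0 ml/min input stream to the filtrate yields a fourfold concentration, consistent with the >4 ratio reported above.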
178.
We are extremely pleased to present this special issue of the Journal of Control Theory and Applications. Approximate dynamic programming (ADP) is a general and effective approach for solving optimal control and estimation problems by adapting to uncertain environments over time. ADP optimizes the sensing objectives accrued over a future time interval with respect to an adaptive control law, conditioned on prior knowledge of the system, its state, and uncertainties. A numerical search over the present value of the control minimizes a Hamilton-Jacobi-Bellman (HJB) equation, providing a basis for real-time, approximate optimal control.
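On a small finite problem, the Bellman backup that ADP methods approximate can be computed exactly. A plain value-iteration sketch on a toy deterministic MDP (illustrative only; ADP replaces the tabular value function with a trained approximator):

```python
def value_iteration(n_states, actions, reward, transition,
                    gamma=0.9, tol=1e-8):
    """Tabular value iteration: repeatedly apply the Bellman backup
        V(s) <- max_a [ r(s,a) + gamma * V(T(s,a)) ]
    until the value function stops changing. reward(s, a) -> float,
    transition(s, a) -> next state (deterministic for brevity)."""
    V = [0.0] * n_states
    while True:
        delta = 0.0
        for s in range(n_states):
            best = max(reward(s, a) + gamma * V[transition(s, a)]
                       for a in actions)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V
```

On a two-state chain where only state 1 pays reward, the fixed point is V(1) = 1/(1 - gamma) and V(0) = gamma * V(1), which the iteration recovers numerically.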
179.
In the era of nanometer CMOS technology, due to stringent system requirements in power and performance and other fundamental physical limitations (such as mechanical reliability, thermal constraints, overall system form factor, etc.), future VLSI systems rely increasingly on an ultra-high-data-rate (up to 100 Gbps/pin or 20 Tbps aggregate), scalable, re-configurable, highly compact and reliable interconnect fabric. To overcome such challenges, we first explore the use of multiband RF/wireless-interconnects wh...
180.
Code injection attacks are one of the most powerful and important classes of attacks on software. In these attacks, the attacker sends malicious input to a software application, where it is stored in memory. The malicious input is chosen in such a way that its representation in memory is also a valid representation of a machine code program that performs actions chosen by the attacker. The attacker then triggers a bug in the application to divert the control flow to this injected machine code. A typical action of the injected code is to launch a command interpreter shell, and hence the malicious input is often called shellcode. Attacks are usually performed against network-facing applications, and such applications often perform validations or encodings on input. Hence, a typical hurdle for attackers is that the shellcode has to pass one or more filtering methods before it is stored in the vulnerable application's memory space. Clearly, for a code injection attack to succeed, the malicious input must survive such validations and transformations. Alphanumeric input (consisting only of letters and digits) is typically very robust for this purpose: it passes most filters and is untouched by most transformations. This paper studies the power of alphanumeric shellcode on the ARM architecture. It shows that the subset of ARM machine code programs that (when interpreted as data) consist only of alphanumeric characters is a Turing-complete subset. This is a non-trivial result, as the number of instructions that consist only of alphanumeric characters is very limited. To craft useful exploit code (and to achieve Turing completeness), several tricks are needed, including the use of self-modifying code.
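The filter itself is trivial to state; the hard part the paper addresses is encoding ARM instructions inside the surviving byte set. A sketch of the validation such shellcode must pass (an illustration of the constraint, not of the ARM encoding tricks):

```python
def survives_alnum_filter(payload: bytes) -> bool:
    """Return True if every byte of the payload is an ASCII digit
    (0x30-0x39) or letter (0x41-0x5A, 0x61-0x7A) -- the kind of
    input validation alphanumeric shellcode is crafted to pass."""
    return all(0x30 <= b <= 0x39
               or 0x41 <= b <= 0x5A
               or 0x61 <= b <= 0x7A
               for b in payload)
```

Only 62 of 256 byte values survive, which is why so few ARM instruction encodings remain available and why self-modifying code is needed to reach Turing completeness.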