Full-text access type (number of articles)
Paid full text | 1246 |
Free | 83 |
Free (domestic) | 31 |
Subject classification (number of articles)
Electrical engineering | 12 |
General | 134 |
Chemical industry | 8 |
Metalworking | 14 |
Machinery and instrumentation | 36 |
Building science | 5 |
Mining engineering | 2 |
Energy and power | 7 |
Light industry | 18 |
Hydraulic engineering | 1 |
Petroleum and natural gas | 3 |
Weapons industry | 5 |
Radio and electronics | 76 |
General industrial technology | 75 |
Metallurgical industry | 12 |
Automation technology | 952 |
Publication year (number of articles)
2025 | 7 |
2024 | 20 |
2023 | 26 |
2022 | 21 |
2021 | 17 |
2020 | 16 |
2019 | 25 |
2018 | 16 |
2017 | 35 |
2016 | 30 |
2015 | 42 |
2014 | 40 |
2013 | 66 |
2012 | 84 |
2011 | 78 |
2010 | 48 |
2009 | 84 |
2008 | 73 |
2007 | 57 |
2006 | 68 |
2005 | 43 |
2004 | 52 |
2003 | 61 |
2002 | 45 |
2001 | 28 |
2000 | 28 |
1999 | 18 |
1998 | 22 |
1997 | 23 |
1996 | 20 |
1995 | 22 |
1994 | 17 |
1993 | 30 |
1992 | 14 |
1991 | 13 |
1990 | 7 |
1989 | 9 |
1988 | 7 |
1987 | 5 |
1984 | 4 |
1983 | 5 |
1981 | 2 |
1980 | 4 |
1979 | 6 |
1978 | 3 |
1977 | 3 |
1976 | 4 |
1975 | 2 |
1974 | 2 |
1973 | 2 |
Sort order: 1,360 query results found (search time: 15 ms)
61.
In this paper, we consider the transport capacity of ad hoc networks with a random flat topology and the support of an infinite-capacity infrastructure network. Such a network architecture allows ad hoc nodes to communicate with each other purely by using the remaining ad hoc nodes as their relays. In addition, ad hoc nodes can also utilize the existing infrastructure fully or partially by reaching any access point (or gateway) of the infrastructure network in a single-hop or multi-hop fashion. Using the same tools as in [9], we show that a per-source-node capacity of Θ(W/log(N)) can be achieved in a random network scenario under the following assumptions: (i) the number of ad hoc nodes per access point is bounded above, (ii) each wireless node, including the access points, is able to transmit at W bits/sec using a fixed transmission range, and (iii) the N ad hoc nodes, excluding the access points, constitute a connected topology graph. This is a significant improvement over the capacity of random ad hoc networks with no infrastructure support, which is found to be Θ(W/√(N log N))
in [9]. We also show that even when less stringent requirements are imposed on topology connectivity, a per-source-node capacity arbitrarily close to Θ(1) cannot be obtained. Nevertheless, under these weaker conditions, per-node throughput can still be improved significantly. We also provide a limited extension of our results to the case where the number of ad hoc nodes per access point is not bounded.

Ulaş C. Kozat was born in 1975 in Adana, Turkey. He received his B.Sc. degree in Electrical and Electronics Engineering from Bilkent University, Ankara, Turkey, and his M.Sc. in Electrical Engineering from The George Washington University, Washington, D.C., in 1997 and 1999, respectively. He received his Ph.D. degree in May 2004 from the Department of Electrical and Computer Engineering at the University of Maryland, College Park. He has conducted research under the Institute for Systems Research (ISR) and the Center for Hybrid and Satellite Networks (CSHCN) at the same university, and worked at HRL Laboratories and Telcordia Technologies Applied Research as a research intern. His current research interests primarily focus on wireless and hybrid networks that span multiple communication layers and networking technologies. Mathematical modelling, resource discovery and allocation, vertical integration of wireless systems and communication layers, performance analysis, and architecture and protocol development are the main emphases of his work. E-mail: kozat@isr.umd.edu

Leandros Tassiulas (S'89, M'91) was born in 1965 in Katerini, Greece. He obtained the Diploma in Electrical Engineering from the Aristotelian University of Thessaloniki, Thessaloniki, Greece, in 1987, and the M.S. and Ph.D. degrees in Electrical Engineering from the University of Maryland, College Park, in 1989 and 1991, respectively. He has been a Professor in the Department of Computer and Telecommunications Engineering, University of Thessaly, Greece, and a Research Professor in the Department of Electrical and Computer Engineering and the Institute for Systems Research, University of Maryland, College Park, since 2001. He has held positions as Assistant Professor at Polytechnic University, New York (1991–95), Assistant and Associate Professor at the University of Maryland, College Park (1995–2001), and Professor at the University of Ioannina, Greece (1999–2001). His research interests are in the field of computer and communication networks, with emphasis on fundamental mathematical models, architectures and protocols of wireless systems, sensor networks, high-speed Internet, and satellite communications. Dr. Tassiulas received a National Science Foundation (NSF) Research Initiation Award in 1992, an NSF CAREER Award in 1995, an Office of Naval Research Young Investigator Award in 1997, a Bodosaki Foundation award in 1999, and the INFOCOM'94 best paper award. E-mail: leandros@isr.umd.edu
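For quick comparison, the two throughput scalings quoted in this abstract can be written side by side. The no-infrastructure bound is the well-known random-network scaling commonly attributed here to [9]; treat its exact form as an assumption of this sketch rather than a quotation from the paper.

```latex
% Per-source-node throughput scalings referenced in the abstract (sketch).
% \lambda_hybrid: with infrastructure support, as claimed in the paper.
% \lambda_adhoc: pure ad hoc random network, the bound attributed to [9]
% (assumed form).
\begin{align*}
  \lambda_{\text{hybrid}}(N) &= \Theta\!\left(\frac{W}{\log N}\right), &
  \lambda_{\text{adhoc}}(N)  &= \Theta\!\left(\frac{W}{\sqrt{N \log N}}\right), &
  \frac{\lambda_{\text{hybrid}}(N)}{\lambda_{\text{adhoc}}(N)}
      &= \Theta\!\left(\sqrt{\frac{N}{\log N}}\right).
\end{align*}
```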
62.
A new parameterized-core-based design methodology for programmable decoders for low-density parity-check (LDPC) codes is proposed. The methodology addresses the two major drawbacks of existing decoder implementations, excessive memory overhead and complex on-chip interconnect, which limit scalability, degrade error-correction capability, and restrict the domain of application of LDPC codes. Diverse memory and interconnect optimizations are performed at the code-design, decoding-algorithm, decoder-architecture, and physical-layout levels, with the following features: (1) architecture-aware (AA) LDPC code design with embedded structural features that significantly reduce interconnect complexity; (2) a faster and memory-efficient turbo-decoding algorithm for LDPC codes; (3) a programmable architecture with distributed memory, parallel message-processing units, and dynamic/scalable transport networks for routing messages; and (4) a parameterized macro-cell layout library implementing the main components of the architecture, with scaling parameters that enable low-level transistor sizing and power-rail scaling for power-delay-area optimization. A 14.3 mm² programmable decoder core for a rate-1/2, length-2048 AA-LDPC code generated using the proposed methodology is presented; it delivers a throughput of 6.4 Gbps at 125 MHz and consumes 787 mW of power.

Mohammad M. Mansour received his B.E. degree with distinction in 1996 and his M.S. degree in 1998, both in Computer and Communications Engineering, from the American University of Beirut (AUB). In August 2002 he received an M.S. degree in Mathematics from the University of Illinois at Urbana-Champaign (UIUC), and in May 2003 he received his Ph.D. in Electrical Engineering from UIUC. He is currently an Assistant Professor of Electrical Engineering in the ECE department at AUB. From 1998 to 2003 he was a research assistant at the Coordinated Science Laboratory (CSL) at UIUC. In 1997 he was a research assistant in the ECE department at AUB, and in 1996 he was a teaching assistant in the same department. From 1992 to 1996 he was on the Dean's honor list at AUB. He received the Hariri Foundation award twice, in 1996 and 1998, the Charli S. Korban award twice, in 1996 and 1998, the Makhzoumi Foundation Award in 1998, and Phi Kappa Phi Honor Society awards in 2000 and 2001. During the summer of 2000 he worked with the wireless research group at National Semiconductor Corp., San Francisco, CA. His research interests are VLSI architectures and integrated circuit (IC) design for communications and coding-theory applications, digital signal processing systems, and general-purpose computing systems.

Naresh R. Shanbhag received the B.Tech. from the Indian Institute of Technology, New Delhi, India, in 1988, the M.S. from Wright State University, and the Ph.D. from the University of Minnesota in 1993, all in Electrical Engineering. From July 1993 to August 1995 he worked at AT&T Bell Laboratories at Murray Hill in the Wide-Area Networks Group, where he was responsible for the development of VLSI algorithms, architectures, and implementations for high-speed data communications applications. In particular, he was the lead chip architect for AT&T's 51.84 Mb/s transceiver chips over twisted-pair wiring for Asynchronous Transfer Mode (ATM) LAN and broadband access.
Since August 1995 he has been with the Department of Electrical and Computer Engineering and the Coordinated Science Laboratory, where he is presently an Associate Professor and Director of the Illinois Center for Integrated Microsystems. At the University of Illinois he founded the VLSI Information Processing Systems (ViPS) Group, whose charter is to explore issues related to low-power, high-performance, and reliable integrated-circuit implementations of broadband communications and digital signal processing systems. He has published numerous journal articles, book chapters, and conference papers in this area and holds three US patents. He is also a co-author of the research monograph Pipelined Adaptive Digital Filters (Norwell, MA: Kluwer, 1994). Dr. Shanbhag received the 2001 IEEE Transactions Best Paper Award, the 1999 Xerox Faculty Research Award, the 1999 IEEE Leon K. Kirchmayer Best Paper Award, the 1997 Distinguished Lecturer award of the IEEE Circuits and Systems Society (1997–99), the National Science Foundation CAREER Award in 1996, and the 1994 Darlington Best Paper Award from the IEEE Circuits and Systems Society. From 1997 to 1999 and from 2000 to 2002 he served as an Associate Editor for the IEEE Transactions on Circuits and Systems, Part II, and for the IEEE Transactions on VLSI Systems, respectively. He was the technical program chair for the 2002 IEEE Workshop on Signal Processing Systems (SiPS'02).
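As background on the kind of computation the message-processing units in such a decoder perform, the sketch below shows a single min-sum check-node update over a toy parity-check matrix. It is a generic software illustration, not the paper's turbo-decoding algorithm or hardware architecture; the matrix `H`, the channel LLRs, and the function name are invented for the example.

```python
import numpy as np

# Toy parity-check matrix H (rows = check nodes, cols = variable nodes).
# Invented for illustration; real AA-LDPC codes are far larger and structured.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1]])

def min_sum_check_update(H, var_to_check):
    """One min-sum check-node update.

    var_to_check[c, v] holds the log-likelihood-ratio message from variable
    node v to check node c (zero wherever H[c, v] == 0).  Returns the
    check-to-variable messages for the next half-iteration.
    """
    check_to_var = np.zeros_like(var_to_check, dtype=float)
    for c, row in enumerate(H):
        idx = np.flatnonzero(row)
        for v in idx:
            others = [u for u in idx if u != v]
            msgs = var_to_check[c, others]
            # Sign is the product of signs; magnitude is the minimum magnitude.
            sign = np.prod(np.sign(msgs))
            check_to_var[c, v] = sign * np.min(np.abs(msgs))
    return check_to_var

# Example: channel LLRs used as the initial variable-to-check messages.
llr = np.array([1.2, -0.4, 0.8, 2.0, -1.5, 0.3])
v2c = H * llr  # broadcast channel LLRs along each check's edges
print(min_sum_check_update(H, v2c))
```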
63.
64.
Anand M. Joglekar, Raghu N. Kacker 《Quality and Reliability Engineering International》1989,5(2):113-123
Design of experiments is a quality technology for achieving product excellence, that is, high quality at low cost. It is a tool to optimize product and process designs, to accelerate the development cycle, to reduce development costs, to improve the transition of products from R & D to manufacturing and to troubleshoot manufacturing problems effectively. It has been successfully, but sporadically, used in the United States. More recently, it has been identified as a major technological reason for the success of Japan in producing high-quality products at low cost. In the United States, the need for increased competitiveness and the emphasis on quality improvement demand widespread use of design of experiments by engineers, scientists and quality professionals. In the past, such widespread use has been hampered by a lack of proper training and a lack of tools to easily implement design of experiments in industry. Three steps are essential, and are being taken, to change this situation dramatically. First, simple graphical methods to design and analyse experiments need to be developed, particularly for situations where the necessary microcomputer resources are not available. Secondly, engineers, scientists and quality professionals must have access to microcomputer-based software for design and analysis of experiments.1 Availability of such software would allow users to concentrate on the important scientific and engineering aspects of the problem by computerizing the necessary statistical expertise. Finally, since a majority of the current workforce is expected to be working in the year 2000, a massive training effort, based upon simple graphical methods and appropriate computer software, is necessary.2 The purpose of this paper is to describe a methodology, based upon a new graphical method called interaction graphs and other previously known techniques, to simplify the correct design of practically important fractional factorial experiments. The essential problem in designing a fractional factorial experiment is first stated. The interaction graph for a 16-trial fractional factorial design is given to illustrate how the graphical procedure can be easily used to design a two-level fractional factorial experiment. Other previously known techniques are described to easily modify the two-level fractional factorial designs to create mixed multi-level designs. Interaction graphs for other practically useful fractional factorial designs are provided. A computer package called CADE (computer aided design of experiments), which automatically generates the appropriate fractional factorial designs based upon user specifications of factors, levels and interactions, and conducts complete analyses of the designed experiments, is briefly described.1 Finally, the graphical method is compared with other available methods for designing fractional factorial experiments.
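To make the idea of a two-level fractional factorial concrete, the sketch below constructs the standard 2^(4-1) half-fraction by aliasing a fourth factor with the three-factor interaction (generator D = ABC). This is the textbook construction in ±1 coding, not the paper's interaction-graph method or the CADE package; the factor names are placeholders.

```python
from itertools import product

# Full 2^3 factorial in factors A, B, C using the conventional -1/+1 coding.
base = list(product([-1, 1], repeat=3))

# Half-fraction 2^(4-1): alias the fourth factor with the three-factor
# interaction, i.e. generator D = A*B*C (defining relation I = ABCD).
design = [(a, b, c, a * b * c) for (a, b, c) in base]

print(" A  B  C  D")
for row in design:
    print(" ".join(f"{x:+d}" for x in row))

# Orthogonality check: every pair of columns has zero dot product.
cols = list(zip(*design))
for i in range(4):
    for j in range(i + 1, 4):
        assert sum(x * y for x, y in zip(cols[i], cols[j])) == 0
```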
65.
BC graphs are an important class of hypercube-like interconnection networks. In this paper, several of their properties, namely vertex-pancyclicity, super-connectivity and the maximum number of edges joining vertices, are studied.
66.
Patrick Brézillon, Juliette Brézillon 《Information Systems and E-Business Management》2008,6(3):279-293
Enterprises often embed decision-making processes in procedures in order to address issues in all cases. However, procedures often lead to sub-optimal solutions for any specific decision. As a consequence, each actor develops a practice for addressing decision making in a specific context. Actors contextualize decision making, while enterprises are obliged to decontextualize it in order to limit the number of procedures and to cover whole classes of decision-making processes by generalization. Practice modeling is not easy because there are as many practices as contexts of occurrence. This chapter proposes a way to deal effectively with practices. Based on a conceptual framework for dealing with context, we present a context-based representation formalism for modeling decision making and its realization by actors. This formalism, called contextual graphs, is discussed using the example of modeling car drivers' behaviors. This article is part of the "Handbook on Decision Support Systems" edited by Frada Burstein and Clyde W. Holsapple (2008), Springer.
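A rough way to picture a context-based representation like this is a directed graph in which some nodes branch on the value of a contextual element and others record actions, so that different contexts select different action sequences (practices). The sketch below is only an illustrative data structure for the car-driving example, built on my own simplifying assumptions; it is not Brézillon's contextual-graphs formalism as defined in the article, and all node and element names are invented.

```python
# Minimal illustration of a context-branching decision structure for a
# car-driver example.  Node types: "action" performs a step and moves on;
# "context" looks up a contextual element and follows the matching branch.
GRAPH = {
    "start":    {"type": "context", "element": "traffic_light",
                 "branches": {"red": "stop", "green": "check_crossing"}},
    "stop":     {"type": "action", "do": "brake to a halt", "next": "end"},
    "check_crossing": {"type": "context", "element": "pedestrian",
                       "branches": {"yes": "yield", "no": "drive_on"}},
    "yield":    {"type": "action", "do": "slow down and yield", "next": "end"},
    "drive_on": {"type": "action", "do": "proceed through", "next": "end"},
    "end":      {"type": "action", "do": "continue route", "next": None},
}

def walk(graph, context, node="start"):
    """Follow the graph and return the sequence of actions (the 'practice')
    selected by the given context values."""
    practice = []
    while node is not None:
        spec = graph[node]
        if spec["type"] == "context":
            node = spec["branches"][context[spec["element"]]]
        else:
            practice.append(spec["do"])
            node = spec["next"]
    return practice

print(walk(GRAPH, {"traffic_light": "green", "pedestrian": "yes"}))
# -> ['slow down and yield', 'continue route']
```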
67.
REN Xiaojun 《艺术与设计.数码设计》2008,(11)
The design of auxiliary graphics is a useful supplement to and extension of the standard graphic in use; its purpose is to strengthen the audience's multi-angle, multi-level perception of the corporate image. In an image-reading era of information delivery, the role that auxiliary graphics play in finding the best channel to the audience amid complexity, and in adding luster to the corporate image, is inestimable.
68.
Vadim V. Lozin 《Optimization and Engineering》2008,9(2):201-211
We consider an optimization problem that arises in machine-tool design. It deals with optimizing the structure of a gearbox, which is normally represented by a graph: the edges of such a graph correspond to pairs of gear-wheels and the vertices stand for velocities. There is a designated input vertex and a set of output vertices. The problem is to create a graph with a given number of output vertices while minimizing the total number of vertices. We present an integer programming formulation of this problem and propose an efficient solution in the special case of regular graphs. The author gratefully acknowledges the support of DIMAP, the Center for Discrete Mathematics and its Applications at the University of Warwick.
69.
Thomas E. Obremski 《Technometrics》2013,55(4):342-343
Fractional two-level factorial designs are often used in the early stages of an investigation to screen for important factors. Traditionally, 2^(n-k) fractional factorial designs of resolution III, IV, or V have been used for this purpose. When the investigator is able to specify the set of nonnegligible factorial effects, it is sometimes possible to obtain an orthogonal design with fewer runs than a standard textbook design by searching within a wider class of designs called parallel-flats designs. The run sizes in this class of designs do not necessarily need to be powers of 2. We discuss an algorithm for constructing orthogonal parallel-flats designs to meet user specifications. Several examples illustrate the use of the algorithm.
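The defining property being sought here, orthogonality of the design columns, is easy to check mechanically. The helper below is a generic check in ±1 coding (balanced columns with zero pairwise dot products), offered as an assumption about what "orthogonal" means in this setting rather than as the paper's construction algorithm for parallel-flats designs; the function name and demo matrix are invented.

```python
import numpy as np

def is_orthogonal(design):
    """Check a two-level design coded as a +/-1 matrix of shape (runs, factors):
    each column balanced, and every pair of columns with zero dot product."""
    X = np.asarray(design)
    balanced = np.all(X.sum(axis=0) == 0)
    gram = X.T @ X
    off_diag = gram - np.diag(np.diag(gram))
    return bool(balanced and np.all(off_diag == 0))

# Tiny demo: a 4-run design in three columns, the third being the product of
# the first two (the regular 2^(3-1) half-fraction).
demo = [[-1, -1, +1],
        [-1, +1, -1],
        [+1, -1, -1],
        [+1, +1, +1]]
print(is_orthogonal(demo))  # True
```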
70.
The reliability of a system is the probability that the system will perform its intended mission under given conditions. This paper provides an overview of the approaches to reliability modelling and identifies their strengths and weaknesses. The models discussed include structure models, simple stochastic models and decomposable stochastic models. Ignoring time-dependence, structure models give reliability as a function of the topological structure of the system. Simple stochastic models make direct use of the properties of underlying stochastic processes, while decomposable models consider more complex systems and analyse them through subsystems. Petri nets and dataflow graphs facilitate the analysis of complex systems by providing a convenient framework for reliability analysis.
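As a concrete instance of a structure model, the sketch below evaluates the reliability of a small series-parallel system from component reliabilities, using the standard independence assumptions (series: product of reliabilities; parallel: complement of the product of unreliabilities). It is a generic textbook illustration, not an example taken from the paper; the component values are made up.

```python
from math import prod

def series(reliabilities):
    """Series structure: the system works only if every component works."""
    return prod(reliabilities)

def parallel(reliabilities):
    """Parallel structure: the system works if at least one component works."""
    return 1.0 - prod(1.0 - r for r in reliabilities)

# Example: two redundant pumps (0.90 each) in series with a controller (0.99).
pumps = parallel([0.90, 0.90])   # 1 - 0.1 * 0.1 = 0.99
system = series([pumps, 0.99])   # 0.99 * 0.99 = 0.9801
print(f"pump block: {pumps:.4f}, system: {system:.4f}")
```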