Full-text access type
Paid full text | 225 articles |
Free | 8 articles |
Free (domestic) | 3 articles |
Subject category
Electrical engineering | 4 articles |
Chemical industry | 46 articles |
Machinery and instruments | 2 articles |
Building science | 5 articles |
Energy and power | 6 articles |
Light industry | 11 articles |
Hydraulic engineering | 2 articles |
Petroleum and natural gas | 2 articles |
Radio and wireless | 29 articles |
General industrial technology | 25 articles |
Metallurgical industry | 27 articles |
Automation technology | 77 articles |
Publication year
2023 | 2 articles |
2022 | 7 articles |
2021 | 5 articles |
2020 | 3 articles |
2019 | 4 articles |
2018 | 4 articles |
2017 | 5 articles |
2016 | 10 articles |
2015 | 9 articles |
2014 | 7 articles |
2013 | 14 articles |
2012 | 9 articles |
2011 | 9 articles |
2010 | 4 articles |
2009 | 16 articles |
2008 | 14 articles |
2007 | 12 articles |
2006 | 9 articles |
2005 | 11 articles |
2004 | 7 articles |
2003 | 8 articles |
2002 | 7 articles |
2001 | 4 articles |
2000 | 4 articles |
1999 | 5 articles |
1998 | 9 articles |
1997 | 10 articles |
1996 | 4 articles |
1995 | 3 articles |
1994 | 2 articles |
1993 | 4 articles |
1992 | 2 articles |
1991 | 4 articles |
1989 | 2 articles |
1988 | 1 article |
1987 | 1 article |
1983 | 1 article |
1982 | 1 article |
1979 | 1 article |
1978 | 1 article |
1971 | 1 article |
Sort order: 236 results found (search time: 15 ms)
1.
Fractionation of partly hydrolysed polyvinyl acetate (PVA) was performed by warming its aqueous solutions. The following properties of the obtained fractions were determined: viscosity, molecular weight and molecular weight distribution, surface tension, and absorbance in the IR range. The blockiness of the polymer molecules, characterized by their behaviour towards iodine-containing systems such as I2–H3BO3 and I2–KI, was estimated. Fractionation of the aqueous solutions of PVA by warming is based mainly on the different internal molecular structure of the separated products, i.e. on the length of the vinyl acetate blocks in the PVA molecules and, to a lesser extent, on the degree of hydrolysis and the degree of polymerization. The more blocklike the PVA molecules, the less compatible the polymers are in the PVA–hydroxypropyl methylcellulose (HPMC)–water system. At phase separation in this system, the PVA molecules that are not compatible with HPMC are primarily those of the highest blockiness.
2.
The Meteor Automatic Metric for Machine Translation evaluation, originally developed and released in 2004, was designed with the explicit goal of producing sentence-level scores that correlate well with human judgments of translation quality. Several key design decisions were incorporated into Meteor in support of this goal. In contrast with IBM's Bleu, which uses only precision-based features, Meteor uses and emphasizes recall in addition to precision, a property that several studies have confirmed to be critical for high correlation with human judgments. Meteor also addresses the problem of reference translation variability by utilizing flexible word matching, allowing morphological variants and synonyms to be taken into account as legitimate correspondences. Furthermore, the feature ingredients within Meteor are parameterized, allowing the metric's free parameters to be tuned in search of values that yield optimal correlation with human judgments. Optimal parameters can be tuned separately for different types of human judgments and for different languages. We discuss the initial design of the Meteor metric, subsequent improvements, and performance in several independent evaluations in recent years.
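The recall emphasis and parameterization described above can be sketched with a parameterized harmonic mean of unigram precision and recall. This is a simplified illustration, not Meteor's actual implementation: Meteor also matches stems and synonyms and applies a fragmentation penalty, and the alpha value below is illustrative rather than a tuned parameter.

```python
from collections import Counter

def unigram_prf(candidate, reference):
    # Unigram precision and recall by exact token matching. (Meteor also
    # matches morphological variants and synonyms; exact matching alone
    # is a simplification.)
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    matches = sum((cand & ref).values())
    return matches / sum(cand.values()), matches / sum(ref.values())

def fmean(precision, recall, alpha=0.9):
    # Parameterized harmonic mean; as alpha approaches 1 the score is
    # dominated by recall. alpha=0.9 is an illustrative choice.
    if precision == 0 or recall == 0:
        return 0.0
    return (precision * recall) / (alpha * precision + (1 - alpha) * recall)

p, r = unigram_prf("the cat sat on mat", "the cat sat on the mat")
score = fmean(p, r)  # recall-weighted: lands closer to r than to p
```

Tuning `alpha` (and, in full Meteor, the penalty parameters) per language and per judgment type is what the abstract refers to as searching for values with optimal correlation.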
3.
This paper investigates how the vision of the Semantic Web can be carried over to the realm of email. We introduce a general notion of semantic email, in which an email message consists of a structured query or update coupled with corresponding explanatory text. Semantic email opens the door to a wide range of automated, email-mediated applications with formally guaranteed properties. In particular, this paper introduces a broad class of semantic email processes. For example, consider the process of sending an email to a program committee, asking who will attend the PC dinner, automatically collecting the responses, and tallying them up. We define both logical and decision-theoretic models where an email process is modeled as a set of updates to a data set on which we specify goals via certain constraints or utilities. We then describe a set of inference problems that arise while trying to satisfy these goals and analyze their computational tractability. In particular, we show that for the logical model it is possible to automatically infer which email responses are acceptable w.r.t. a set of constraints in polynomial time, and for the decision-theoretic model it is possible to compute the optimal message-handling policy in polynomial time. In addition, we show how to automatically generate explanations for a process's actions, and identify cases where such explanations can be generated in polynomial time. Finally, we discuss our publicly available implementation of semantic email and outline research challenges in this realm.
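The PC-dinner process can be sketched as a constraint check plus a tally. Everything here is a hypothetical illustration of the idea, not the paper's system: the response format, the addresses, and the single yes/no constraint are all invented for the example.

```python
# A reply is acceptable w.r.t. the (single, illustrative) constraint that
# it answers "yes" or "no". Checking one reply is constant time, so the
# whole pass is linear in the number of responses -- consistent with the
# polynomial-time acceptability inference the abstract claims for the
# logical model, though far simpler than the general case.
VALID = {"yes", "no"}

def tally(responses):
    counts = {"yes": 0, "no": 0}
    rejected = []
    for sender, text in responses:
        answer = text.strip().lower()
        if answer in VALID:
            counts[answer] += 1
        else:
            # In a semantic email system this would trigger an
            # automatically generated explanatory bounce message.
            rejected.append(sender)
    return counts, rejected

counts, rejected = tally([
    ("alice@example.org", "Yes"),
    ("bob@example.org", "no"),
    ("carol@example.org", "maybe"),
])
```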
4.
This paper presents a technique for characterizing the statistical properties and spectrum of power supply noise using only two on-chip low-throughput samplers. The samplers utilize a voltage-controlled oscillator to perform high-resolution analog-to-digital conversion with minimal hardware. The measurement system is implemented in a 0.13-μm process along with a high-speed link transceiver. Measured results from this chip validate the accuracy of the measurement system and elucidate several aspects of power supply noise, including its cyclostationary nature.
5.
Base station placement has a significant impact on sensor network performance. Despite its significance, results on this problem remain limited, particularly theoretical results that can provide performance guarantees. This paper proposes a set of procedures for designing (1 − ε) approximation algorithms for base station placement problems under any desired small error bound ε > 0. It offers a general framework for transforming an infinite search space into a finite-element search space with a performance guarantee. We apply this procedure to two practical problems. In the first, where the objective is to maximize network lifetime, an approximation algorithm designed through this procedure offers a 1/ε² complexity reduction compared to a state-of-the-art algorithm; this represents the best known result for this problem. In the second, we apply the design procedure to the base station placement problem when the optimization objective is to maximize network capacity. Our (1 − ε) approximation algorithm is the first theoretical result on this problem.
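The framework's core idea, replacing an infinite continuous placement space with a finite set of candidate points, can be sketched as a grid search. The grid step, field geometry, and objective below are illustrative assumptions; the paper's actual construction derives the discretization from the error bound ε to guarantee the (1 − ε) factor.

```python
import itertools
import math

def best_placement(sensors, side, step):
    # Evaluate base-station positions on a finite grid covering a
    # [0, side] x [0, side] field. In the paper's framework the grid
    # granularity is chosen from epsilon so that the grid optimum is
    # within a (1 - epsilon) factor of the continuous optimum; here
    # `step` is simply a parameter.
    ticks = [i * step for i in range(int(side / step) + 1)]

    def objective(p):
        # Illustrative objective: minimize the worst-case distance from
        # the base station to any sensor (expressed as a maximization).
        return -max(math.dist(p, s) for s in sensors)

    return max(itertools.product(ticks, ticks), key=objective)

# Sensors at the corners of a unit square: the grid optimum is the center.
best = best_placement([(0, 0), (0, 1), (1, 0), (1, 1)], side=1.0, step=0.25)
```

The point of the transformation is that the finite candidate set can be searched exhaustively (or fed to existing optimization machinery) while the lost optimality is provably bounded by ε.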
Yi Shi received his B.S. degree from the University of Science and Technology of China, Hefei, China, in 1998, an M.S. degree from the Institute of Software, Chinese Academy of Sciences, Beijing, China, in 2001, and a second M.S. degree from Virginia Tech, Blacksburg, VA, in 2003, all in computer science. He is currently working toward his Ph.D. degree in electrical and computer engineering at Virginia Tech. As an undergraduate, he received Meritorious Awards in the International Mathematical Contest in Modeling in 1997 and 1998. His current research focuses on algorithms and optimization for wireless sensor networks, wireless ad hoc networks, UWB-based networks, and SDR-based networks. His work has appeared in journals and highly selective international conferences (ACM Mobicom, ACM Mobihoc, and IEEE Infocom).
Y. Thomas Hou received the B.E. degree from the City College of New York in 1991, the M.S. degree from Columbia University in 1993, and the Ph.D. degree from Polytechnic University, Brooklyn, New York, in 1998, all in electrical engineering. Since Fall 2002, he has been an Assistant Professor in the Bradley Department of Electrical and Computer Engineering at Virginia Tech, Blacksburg, VA. His current research interests are radio resource (spectrum) management and networking for software-defined radio wireless networks, optimization and algorithm design for wireless ad hoc and sensor networks, and video communications over dynamic ad hoc networks. From 1997 to 2002, Dr. Hou was a Researcher at Fujitsu Laboratories of America, Sunnyvale, CA, where he worked on scalable architectures, protocols, and implementations for differentiated-services Internet, service overlay networking, video streaming, network bandwidth allocation policies, and distributed flow control algorithms. Prof. Hou is a recipient of an Office of Naval Research (ONR) Young Investigator Award (2003) and a National Science Foundation (NSF) CAREER Award (2004). He was Co-Chair of the Technical Program Committee of the Second International Conference on Cognitive Radio Oriented Wireless Networks and Communications (CROWNCOM 2007), Orlando, FL, August 1–3, 2007, and Chair of the First IEEE Workshop on Networking Technologies for Software Defined Radio Networks, September 25, 2006, Reston, VA. Prof. Hou holds two U.S. patents and has three more pending.
Alon Efrat earned his bachelor's degree in Applied Mathematics from the Technion (Israel's Institute of Technology) in 1991, his master's degree in Computer Science from the Technion in 1993, and his Ph.D. in Computer Science from Tel-Aviv University in 1998. During 1998–2000 he was a postdoctoral research associate at the Computer Science Department of Stanford University and at the IBM Almaden Research Center. Since 2000, he has been an assistant professor in the Computer Science Department of the University of Arizona. His main research areas are computational geometry and its applications to sensor networks and medical imaging.
6.
Repeated communication and Ramsey graphs (cited by 2: 0 self-citations, 2 by others)
Alon N., Orlitsky A. 《IEEE Transactions on Information Theory》, 1995, 41(5): 1276–1289
We study the savings afforded by repeated use in two zero-error communication problems. We show that for some random sources, communicating one instance requires arbitrarily many bits, but communicating multiple instances requires roughly 1 bit per instance. We also exhibit sources where the number of bits required for a single instance is comparable to the source's size, but two instances require only a logarithmic number of additional bits. We relate this problem to that of communicating information over a channel. Known results imply that some channels can communicate exponentially more bits in two uses than they can in one use.
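The channel phenomenon can be illustrated with the classical pentagon channel (a standard textbook example, not one taken from this paper): on the 5-cycle confusability graph, one use of the channel supports only 2 distinguishable messages, yet two uses support 5, more than the 4 that squaring would suggest. A brute-force check over independent sets:

```python
from itertools import combinations, product

def independent(adj, vertices):
    # True if no two chosen vertices are confusable (adjacent).
    return all(v not in adj[u] for u, v in combinations(vertices, 2))

def max_independent_set(adj):
    # Brute force: grow the target size until no independent set exists.
    n = len(adj)
    best = 0
    for size in range(1, n + 1):
        if any(independent(adj, c) for c in combinations(range(n), size)):
            best = size
        else:
            break
    return best

# Confusability graph of the pentagon channel: the 5-cycle C5.
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}

# Strong product C5 x C5, the confusability graph of two channel uses:
# two distinct symbol pairs are confusable iff each coordinate is equal
# or confusable.
verts = list(product(range(5), repeat=2))
idx = {v: i for i, v in enumerate(verts)}
prod_adj = {i: set() for i in range(len(verts))}
for (a, b), (c, d) in combinations(verts, 2):
    if (a == c or c in c5[a]) and (b == d or d in c5[b]):
        prod_adj[idx[(a, b)]].add(idx[(c, d)])
        prod_adj[idx[(c, d)]].add(idx[(a, b)])

one_use = max_independent_set(c5)         # 2 distinguishable messages
two_uses = max_independent_set(prod_adj)  # 5 distinguishable messages
```

Distinguishable message sets are exactly the independent sets of the confusability graph, which is why two uses can beat two independent single uses.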
7.
A Secure Function Evaluation (SFE) of a two-variable function f(·,·) is a protocol that allows two parties with inputs x and y to evaluate f(x,y) in a manner where neither party learns "more than is necessary". A rich body of work deals with the study of completeness for secure two-party computation. A function f is complete for SFE if a protocol for securely evaluating f allows the secure evaluation of all (efficiently computable) functions. The questions investigated are which functions are complete for SFE, which functions have SFE protocols unconditionally, and whether there are functions that are neither complete nor have efficient SFE protocols. The previous study of these questions was conducted mainly from an information-theoretic point of view and provided strong answers in the form of combinatorial properties. However, we show that there are major differences between the information-theoretic and computational settings. In particular, we show functions that are considered as having SFE unconditionally by the combinatorial criteria but are actually complete in the computational setting. We initiate the fully computational study of these fundamental questions. Somewhat surprisingly, we manage to provide an almost full characterization of the complete functions in this model as well. More precisely, we present a computational criterion (called computational row non-transitivity) for a function f to be complete for the asymmetric case. Furthermore, we show a matching criterion, called computational row transitivity, for f to have a simple SFE (based on no additional assumptions). This criterion is close to the negation of computational row non-transitivity, and thus we essentially characterize all "nice" functions as either complete or having SFE unconditionally.
8.
Avigail Landman, Shabtai Hadash, Gennady E. Shter, Alon Ben-Azaria, Hen Dotan, Avner Rothschild, Gideon S. Grader 《Advanced Functional Materials》, 2021, 31(14): 2008118
Decoupled water splitting is a promising new path for renewable hydrogen production, offering many potential advantages such as stable operation under partial-load conditions, high-pressure hydrogen production, overall system robustness, and higher safety levels. Here, the performance of electrospun core/shell nickel/nickel hydroxide anodes is demonstrated in an electrochemical-thermally activated chemical decoupled water splitting process. The high surface area of the hierarchical porous electrode structure improves the utilization efficiency, charge capacity, and current density of the redox anode while maintaining high process efficiency. The anodes reach average current densities as high as 113 mA cm−2 at a working potential of 1.48 VRHE and 64 mA cm−2 at 1.43 VRHE, with a Faradaic efficiency of nearly 100% and no H2/O2 intermixing in a membrane-free cell.
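For scale, the hydrogen production rate implied by the reported current density follows from standard Faraday's-law arithmetic (this conversion is not taken from the paper):

```python
# Faraday's law: moles of H2 per unit area per second = j / (n * F),
# with n = 2 electrons per H2 molecule. Assumes the ~100% Faradaic
# efficiency reported in the abstract.
F = 96485.0          # Faraday constant, C/mol
j = 113e-3           # reported current density, A/cm^2 (113 mA/cm^2)
rate = j / (2 * F)   # mol H2 per cm^2 per second, roughly 5.9e-7
```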
9.
Naffziger S., Stackhouse B., Grutkowski T., Josephson D., Desai J., Alon E., Horowitz M. 《IEEE Journal of Solid-State Circuits》, 2006, 41(1): 197–209
The design of the high-end server processor code-named Montecito incorporated several ambitious goals requiring innovation. The most obvious was the incorporation of two legacy cores on-die while at the same time reducing power by 23%. This is an effective 325% increase in MIPS per watt, which necessitated a holistic focus on power reduction and management. The next challenge in the implementation was to ensure robust, high-frequency circuit operation in the 90-nm process generation, which brings with it higher leakage and greater variability. Achieving this goal required new design methodologies, a greatly improved and tunable clock system, and a better understanding of our power-grid behavior, all of which required new circuits and capabilities. The final aspect of circuit design improvement involved the I/O design for our legacy multi-drop system bus. To properly feed the two high-frequency cores with memory bandwidth, we needed to ensure frequency headroom in the operation of the bus. This was achieved through several innovations in the controllability and tuning of the I/O buffers, which are discussed as well.
10.
Behavioral neuroscience underwent a technology-driven revolution with the emergence of machine-vision and machine-learning technologies. These technological advances facilitated high-resolution, high-throughput capture and analysis of complex behaviors, and behavioral neuroscience is therefore becoming a data-rich field. While behavioral researchers use advanced computational tools to analyze the resulting datasets, the search for robust and standardized analysis tools is still ongoing. At the same time, the field of genomics exploded with a plethora of technologies that enabled the generation of massive datasets, and this growth of genomics data drove the emergence of powerful computational approaches to analyze them. Here, we discuss the composition of a large behavioral dataset and the differences and similarities between behavioral and genomics data. We then give examples of genomics-related tools that might be of use for behavioral analysis and discuss concepts that might emerge when considering the two fields together.