101.
This paper considers a family of spatially discrete approximations, including boundary treatment, to initial boundary value problems in evolving bounded domains. The method is based on the Cartesian grid embedded finite-difference method, initially introduced by Abarbanel and Ditkowski (ICASE Report No. 96-8, 1996; J. Comput. Phys. 133(2), 1997) and Ditkowski (Ph.D. thesis, Tel Aviv University, 1997) for initial boundary value problems on constant irregular domains. We perform a comprehensive theoretical analysis of the numerical issues that arise when dealing with domains whose boundaries evolve smoothly in space as a function of time. In this class of problems the moving boundaries are impenetrable, with either Dirichlet or Neumann boundary conditions, and should not be confused with moving-interface problems such as multiphase flow, solidification, and the Stefan problem. Unlike other works on this class of problems, the resulting method is not restricted to domains of up to three dimensions, can achieve higher than 2nd-order accuracy in both time and space, and is strictly stable in the semi-discrete setting. The strict stability of the method also implies that the numerical solution remains consistent and valid over long integration times. A complete convergence analysis is carried out in the semi-discrete setting, including a detailed analysis of the implementation for the diffusion equation. Numerical solutions of the diffusion equation at 2nd- and 4th-order accuracy are carried out in one and two dimensions, respectively, demonstrating the efficacy of the method. This research was supported by the Israel Science Foundation (grant No. 1362/04).
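As a point of reference for the diffusion-equation experiments, here is a minimal 2nd-order finite-difference sketch on a fixed 1-D interval with homogeneous Dirichlet boundaries. It is a plain textbook scheme, not the embedded-boundary method of the paper, and all parameter choices are illustrative:

```python
import math

def diffuse_1d(u, dx, dt, steps, nu=1.0):
    # Explicit 2nd-order central-difference diffusion steps with homogeneous
    # Dirichlet boundaries (u = 0 at both ends). Plain textbook scheme, not
    # the paper's embedded-boundary method.
    r = nu * dt / dx**2          # explicit stability requires r <= 0.5
    for _ in range(steps):
        new = [0.0] * len(u)     # boundary values stay 0 (Dirichlet)
        for i in range(1, len(u) - 1):
            new[i] = u[i] + r * (u[i + 1] - 2.0 * u[i] + u[i - 1])
        u = new
    return u

# Decaying sine mode: exact solution is exp(-nu * pi^2 * t) * sin(pi * x).
n = 51
dx = 1.0 / (n - 1)
dt = 0.4 * dx * dx               # satisfies the explicit stability bound
u0 = [math.sin(math.pi * i * dx) for i in range(n)]
u = diffuse_1d(u0, dx, dt, steps=200)
t = 200 * dt
exact = [math.exp(-math.pi**2 * t) * math.sin(math.pi * i * dx) for i in range(n)]
err = max(abs(a - b) for a, b in zip(u, exact))
print(err)
```

Comparing against the exact decaying mode gives a quick sanity check that the discrete solution tracks the analytic one to the expected order.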
102.
Shlomi Dolev, Yuval Elovici, Rami Puzis, Polina Zilberman. Information Processing Letters, 2009, 109(20): 1172-1176
In many applications we are required to increase the deployment of a distributed monitoring system on an evolving network. In this paper we present a new method for finding candidate locations for additional deployment in the network. The method is based on the Group Betweenness Centrality (GBC) measure, which is used to estimate the influence of a group of nodes on the information flow in the network. The new method assists in locating k additional monitors in the evolving network such that the portion of additional traffic covered is at least (1 − 1/e) of the optimum.
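The (1 − 1/e) guarantee comes from greedy maximization of a monotone submodular coverage objective. The sketch below substitutes a simpler shortest-path pair-coverage objective for the paper's GBC computation; the graph, function names, and coverage definition are our illustrative assumptions, not the authors' algorithm:

```python
from collections import deque
from itertools import combinations

def nodes_on_shortest_paths(adj, s, t):
    # A node v lies on some shortest s-t path iff d(s,v) + d(v,t) = d(s,t),
    # computed here with two BFS passes on an unweighted graph.
    def bfs(src):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        return dist
    d_s, d_t = bfs(s), bfs(t)
    if t not in d_s:
        return set()
    return {v for v in adj
            if v in d_s and v in d_t and d_s[v] + d_t[v] == d_s[t]}

def greedy_monitors(adj, k):
    # Greedy maximum coverage: each step adds the node that covers the most
    # still-uncovered s-t pairs. For monotone submodular objectives this
    # achieves the (1 - 1/e) approximation the abstract refers to.
    pairs = {(s, t): nodes_on_shortest_paths(adj, s, t)
             for s, t in combinations(adj, 2)}
    chosen, covered = [], set()
    for _ in range(k):
        best = max((v for v in adj if v not in chosen),
                   key=lambda v: sum(1 for p, on_path in pairs.items()
                                     if p not in covered and v in on_path))
        chosen.append(best)
        covered |= {p for p, on_path in pairs.items() if best in on_path}
    return chosen, covered

# Tiny path graph a-b-c-d-e: the central node c lies on the most s-t paths.
adj = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'],
       'd': ['c', 'e'], 'e': ['d']}
monitors, covered = greedy_monitors(adj, k=1)
print(monitors, len(covered))
```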
103.
Robert Moskovitch, Dima Stopel, Clint Feher, Nir Nissim, Nathalie Japkowicz, Yuval Elovici. Journal in Computer Virology, 2009, 5(4): 295-308
The recent growth in network usage has motivated the creation of new malicious code for various purposes. Today's signature-based antiviruses are very accurate for known malicious code but cannot detect new malicious code. Recently, classification algorithms have been used successfully for the detection of unknown malicious code; however, these studies involved test collections of limited size and the same malicious-to-benign file ratio in both the training and test sets, a situation that does not reflect real-life conditions. We present a methodology for the detection of unknown malicious code that draws on concepts from text categorization, based on n-gram extraction from the binary code and feature selection. We performed an extensive evaluation on a test collection of more than 30,000 files, in which we investigated the class-imbalance problem. In real-life scenarios the proportion of malicious files is expected to be low, about 10% of the total, but for practical purposes it is unclear what the corresponding percentage in the training set should be. Our results indicate that accuracy greater than 95% can be achieved with a training set whose malicious file content is below 33.3%.
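The n-gram representation described above can be sketched as follows; the n-gram length and the frequency-difference score are our illustrative stand-ins for the selection measures actually evaluated in the study:

```python
from collections import Counter

def byte_ngrams(data: bytes, n: int = 3):
    # Sliding-window byte n-grams, the text-categorization-style
    # representation the methodology builds on (n = 3 is an illustrative
    # choice, not the study's exact setting).
    return Counter(data[i:i + n] for i in range(len(data) - n + 1))

def select_features(malicious, benign, top_k=5):
    # Score each n-gram by how much more often it appears in malicious
    # files than in benign ones: a crude stand-in for the feature-selection
    # measures (e.g., document frequency, gain ratio) used in such studies.
    mal, ben = Counter(), Counter()
    for f in malicious:
        mal.update(byte_ngrams(f).keys())   # per-file document frequency
    for f in benign:
        ben.update(byte_ngrams(f).keys())
    scores = {g: mal[g] / len(malicious) - ben[g] / len(benign)
              for g in set(mal) | set(ben)}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Toy corpora: the pattern b"\x90\x90\x90" recurs only in "malicious" files.
malicious = [b"\x90\x90\x90ABC", b"XYZ\x90\x90\x90"]
benign = [b"hello world", b"plain data"]
features = select_features(malicious, benign, top_k=3)
print(features)
```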
104.
This research proposes a novel automatic method (termed Auto-Sign) for extracting unique signatures of malware executables, to be used by high-speed malware-filtering devices that are based on deep-packet inspection and operate in real time. In contrast to extant string- and token-based signature generation methods, Auto-Sign can handle large malware executables by disregarding signature candidates that also appear in benign executables. Results from an experimental evaluation of the proposed method suggest that choosing a collection of executables that closely represents commonly used code plays a key role in achieving highly specific signatures that yield few false positives.
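The core filtering idea, discarding signature candidates that also appear in benign executables, can be sketched like this (window length, helper names, and toy byte strings are our assumptions, not Auto-Sign's actual segment selection):

```python
def candidate_signatures(malware: bytes, length: int = 8):
    # Fixed-length substrings of the malware body as signature candidates;
    # the window length is an illustrative parameter only.
    return {malware[i:i + length] for i in range(len(malware) - length + 1)}

def filter_against_benign(candidates, benign_files):
    # The idea described in the abstract: discard any candidate that also
    # occurs in a benign executable, to keep false positives low.
    return {c for c in candidates
            if not any(c in b for b in benign_files)}

malware = b"PUSH;CALL evil_payload;RET"
benign = [b"PUSH;CALL printf;RET", b"MOV;PUSH;RET"]
sigs = filter_against_benign(candidate_signatures(malware), benign)
# Every surviving signature is, by construction, absent from all benign files.
print(len(sigs) > 0)
```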
105.
The September 11, 2001 (9/11), terrorist attacks were unprecedented in their magnitude and aftermath. In the wake of the attacks, researchers reported a wide range of mental and physical health outcomes, with posttraumatic stress disorder (PTSD) the most commonly studied. In this review, we assess the evidence about PTSD among highly exposed populations in the first 10 years after the 9/11 attacks. We performed a systematic review. Eligible studies included original reports based on the full Diagnostic and Statistical Manual of Mental Disorders (4th ed., rev.; American Psychiatric Association, 2000) criteria for PTSD among highly exposed populations, such as those living or working in close proximity to the World Trade Center (WTC) and the Pentagon in New York City and Washington, DC, respectively, and first responders, including rescue, cleaning, and recovery workers. The large body of research conducted in the decade after the 9/11 attacks suggests that the burden of PTSD among persons with high exposure to 9/11 was substantial. 9/11-related PTSD was associated with a wide range of correlates, including sociodemographic and background factors, event exposure characteristics, loss of life of significant others, and social support factors. Few studies used longitudinal designs or clinical assessments, and no studies reported findings beyond six years post-9/11, hindering documentation of the long-term course of confirmed PTSD. Future directions for research are discussed.
106.
107.
Asaf Shabtai, Yuval Fledel, Yuval Elovici, Yuval Shahar. Journal in Computer Virology, 2010, 6(3): 239-259
In this study, we propose a new approach for detecting previously unencountered instances of known classes of malicious software based on their temporal behavior. In the proposed approach, time-stamped security data are continuously monitored within the target computer system or network and then processed by the knowledge-based temporal abstraction (KBTA) methodology. Using KBTA, continuously measured data (e.g., the number of running processes) and events (e.g., installation of software) are integrated with a security-domain temporal-abstraction knowledge base (i.e., a security ontology for abstracting meaningful patterns from raw, time-oriented security data) to create higher-level, time-oriented concepts and patterns, also known as temporal abstractions. Automatically generated temporal abstractions can be monitored to detect suspicious temporal patterns. These patterns are compatible with a set of predefined classes of malware, as defined by a security expert employing a set of time and value constraints. The new approach was applied to detecting worm-related malware using two different ontologies, and evaluation results demonstrated its effectiveness. The approach can be used for detecting other types of malware by updating the security ontology with new definitions of temporal patterns.
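A minimal sketch of the KBTA-style step: turning raw time-stamped measurements into interval-based state abstractions and matching a simple temporal pattern. The thresholds, state names, and pattern below are illustrative assumptions, not taken from the paper's security ontology:

```python
def state_abstraction(samples, high=100):
    # Map raw time-stamped measurements (e.g., number of running processes)
    # to interval-based state abstractions, the basic temporal-abstraction
    # step; the HIGH threshold is an invented illustrative value.
    intervals = []
    for t, value in samples:
        state = "HIGH" if value >= high else "NORMAL"
        if intervals and intervals[-1][0] == state and intervals[-1][2] == t - 1:
            intervals[-1] = (state, intervals[-1][1], t)   # extend interval
        else:
            intervals.append((state, t, t))                # open new interval
    return intervals

def suspicious(intervals, min_len=3):
    # A toy temporal pattern: a HIGH state sustained for at least min_len
    # consecutive samples, as sustained worm activity might cause.
    return any(s == "HIGH" and end - start + 1 >= min_len
               for s, start, end in intervals)

samples = [(0, 40), (1, 42), (2, 130), (3, 150), (4, 160), (5, 45)]
ivals = state_abstraction(samples)
print(ivals)
print(suspicious(ivals))
```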
108.
Shubha Chakravarty, Yuval Shahar. Annals of Mathematics and Artificial Intelligence, 2000, 30(1-4): 3-22
We use a constraint-based language to specify repeating temporal patterns. The Constraint-based Pattern Specification Language (CAPSUL) is simple to use, but allows a wide variety of patterns to be expressed. This paper describes in detail the syntax of CAPSUL, including its layers of abstraction and four types of constraints. We also discuss the semantics of CAPSUL, including the concept of interference between patterns and the expressive power of the language. We have implemented CAPSUL in a temporal-abstraction system called Résumé, and used it in a graphical knowledge-acquisition tool to acquire domain-specific knowledge from experts about patterns to be found in large databases. We summarize the results of preliminary experiments using the pattern-specification and pattern-detection tools on data about patients who have cancer and have been seen at the Rush Presbyterian/St. Luke's Medical Center. This revised version was published online in June 2006 with corrections to the Cover Date.
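In the spirit of CAPSUL's constraint types, the toy matcher below checks one repeating two-event pattern under a single gap constraint. The event names, function, and constraint are invented for illustration and omit CAPSUL's layered value and inter-pattern constraints:

```python
def matches_repeating(events, first, second, max_gap, min_repeats):
    # Count occurrences of "first followed by second within max_gap time
    # units" and require at least min_repeats repetitions: a drastically
    # simplified, invented analogue of a repeating temporal pattern.
    count, expecting, t_first = 0, first, None
    for t, label in sorted(events):
        if expecting == first and label == first:
            expecting, t_first = second, t
        elif expecting == second and label == second and t - t_first <= max_gap:
            count += 1
            expecting = first
    return count >= min_repeats

# Toy clinical event stream: (day, event label).
events = [(1, "fever"), (2, "rash"), (10, "fever"), (11, "rash"), (20, "fever")]
print(matches_repeating(events, "fever", "rash", max_gap=3, min_repeats=2))
```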
109.
Inferring PoP-level maps is gaining interest due to its importance in many areas, e.g., tracking the Internet's evolution and studying its properties. In this paper we introduce a novel structural approach to automatically generate large-scale PoP-level maps using traceroute measurements from multiple locations. The PoPs are first identified based on their structure and are then assigned a location using information from several geo-location databases. We discuss the tradeoffs of this approach and provide extensive validation details. The generated maps can be widely used for research, and we suggest some possible directions.
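The location-assignment step described above, combining several geo-location databases, can be sketched as a simple majority vote over a PoP's interfaces; the database contents and function name here are toy assumptions, not the paper's actual resolution logic:

```python
from collections import Counter

def assign_pop_location(pop_ips, geo_dbs):
    # Majority vote across several geo-location databases: each database
    # votes once per PoP interface it can resolve, and the most common
    # location wins. A toy stand-in for the paper's assignment step.
    votes = Counter()
    for db in geo_dbs:
        for ip in pop_ips:
            if ip in db:
                votes[db[ip]] += 1
    return votes.most_common(1)[0][0] if votes else None

# A structurally identified PoP (three interfaces) and two toy databases
# that partially disagree.
pop = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
db_a = {"10.0.0.1": "Tel Aviv", "10.0.0.2": "Tel Aviv"}
db_b = {"10.0.0.2": "Tel Aviv", "10.0.0.3": "Haifa"}
print(assign_pop_location(pop, [db_a, db_b]))
```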
110.
John H. Nguyen, Yuval Shahar, Samson W. Tu, Amar K. Das, Mark A. Musen. Journal of Intelligent Information Systems, 1999, 13(1-2): 121-145
The ability to reason with time-oriented data is central to the practice of medicine. Monitoring clinical variables over time often provides information that drives medical decision making (e.g., clinical diagnosis and therapy planning). Because the time-oriented patient data are often stored in electronic databases, it is important to ensure that clinicians and medical decision-support applications can conveniently find answers to their clinical queries using these databases. To help clinicians and decision-support applications make medical decisions using time-oriented data, a database-management system should (1) permit the expression of abstract, time-oriented queries, (2) permit the retrieval of data that satisfy a given set of time-oriented data-selection criteria, and (3) present the retrieved data at the appropriate level of abstraction. We impose these criteria to facilitate the expression of clinical queries and to reduce the manual data processing that users must undertake to decipher the answers to their queries. We describe a system, Tzolkin, that integrates a general method for temporal-data maintenance with a general method for temporal reasoning to meet these criteria. Tzolkin allows clinicians to use SQL-like temporal queries to retrieve both raw, time-oriented data and dynamically generated summaries of those data. Tzolkin can be used as a standalone system or as a module that serves other software systems. We implement Tzolkin with a temporal-database mediator approach. This approach is general, facilitates software reuse, and thus decreases the cost of building new software systems that require this functionality.
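A minimal sketch of criteria (1)-(3): selecting time-oriented data within a window and optionally returning a summary abstraction instead of raw values. Tzolkin's actual SQL-like query language and mediator architecture are far richer; all names and records below are invented for illustration:

```python
def temporal_query(records, param, start, end, abstraction="raw"):
    # Select readings of one clinical parameter inside a time window and,
    # optionally, return a summary ("max") instead of the raw values.
    # A toy stand-in for an abstract, time-oriented query.
    hits = [(t, v) for t, p, v in records if p == param and start <= t <= end]
    if abstraction == "max":
        return max(v for _, v in hits)
    return hits

records = [  # (day, parameter, value) - invented clinical readings
    (1, "glucose", 95), (2, "glucose", 180),
    (3, "glucose", 140), (3, "hemoglobin", 12.1),
]
print(temporal_query(records, "glucose", 2, 3))          # raw readings
print(temporal_query(records, "glucose", 1, 3, "max"))   # summary abstraction
```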