  Subscription full text   12 articles
  Free   0 articles
Electrical engineering   1 article
General industrial technology   1 article
Metallurgy   2 articles
Automation technology   8 articles
  2018   2 articles
  2016   1 article
  2014   1 article
  2008   2 articles
  2006   1 article
  2004   1 article
  1998   1 article
  1993   1 article
  1991   1 article
  1986   1 article
Sort order: 12 results in total (search time: 156 ms)
1.
Formal verification has advanced to the point that developers can verify the correctness of small, critical modules. Unfortunately, despite considerable effort, determining whether a “verification” verifies what its author intends is still difficult. Previous approaches are difficult to understand and often limited in applicability. Developers need verification coverage in terms of the software they are verifying, not model checking diagnostics. We propose a methodology that allows developers to determine (and correct) what it is that they have verified, together with tools to support that methodology. Our basic approach rests on a novel variation of mutation analysis and the idea of verification driven by falsification. We use the CBMC model checker to show that this approach applies not only to simple data structures and sorting routines, and to the verification of a routine in Mozilla’s JavaScript engine, but also to understanding an ongoing effort to verify the Linux kernel read-copy-update mechanism. Moreover, we show that despite the probabilistic nature of random testing and the tendency of testing, as opposed to verification, toward incompleteness, the same techniques, with suitable modifications, apply to automated test generation as well as to formal verification. In essence, it is the number of surviving mutants that drives the scalability of our methods, not the underlying method for detecting faults in a program. From the point of view of a Popperian analysis, where an unkilled mutant is a weakness (in terms of its falsifiability) in a “scientific theory” of program behavior, it is only the number of weaknesses to be examined by a user that matters.
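As a rough, hypothetical illustration of the coverage idea in this abstract (the harness, the mutation, and all identifiers below are invented, not taken from the paper), the following CBMC-style sketch shows how a surviving mutant exposes a weak specification of a sorting routine: the mutant is not a correct sort, yet the harness's only assertion, sortedness, still verifies.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical routine under verification: bubble sort. */
void bubble_sort(int *a, size_t n) {
  for (size_t i = 0; i + 1 < n; i++)
    for (size_t j = 0; j + 1 < n - i; j++)
      if (a[j] > a[j + 1]) {
        int tmp = a[j];
        a[j] = a[j + 1];
        a[j + 1] = tmp;
      }
}

/* Hypothetical mutant: the swap is replaced by an overwrite, so one value is
   duplicated and another is lost.  The output is still sorted, however. */
void bubble_sort_mutant(int *a, size_t n) {
  for (size_t i = 0; i + 1 < n; i++)
    for (size_t j = 0; j + 1 < n - i; j++)
      if (a[j] > a[j + 1])
        a[j + 1] = a[j];            /* mutation */
}

/* Verification harness: only sortedness is asserted.  Run under CBMC
   (e.g. `cbmc mutant_coverage.c`), the uninitialized array acts as a
   nondeterministic input and the mutant still "verifies" -- a surviving
   mutant, signalling that a property is missing from the specification. */
int main(void) {
  int a[4];                          /* nondeterministic under CBMC */
  bubble_sort_mutant(a, 4);
  for (size_t i = 0; i + 1 < 4; i++)
    assert(a[i] <= a[i + 1]);
  return 0;
}
```

Strengthening the harness to also assert that the output is a permutation of the input would kill this mutant; in the terms of the abstract, examining surviving mutants like this one is how a user finds the weaknesses in what has actually been verified.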
2.
International Journal on Software Tools for Technology Transfer - A test harness, in automated test generation, defines the set of valid tests for a system, as well as their correctness properties....
3.
From operating systems and web browsers to spacecraft, many software systems maintain a log of events that provides a partial history of execution, supporting post-mortem (or post-reboot) analysis. Unfortunately, bandwidth and storage limitations, together with privacy concerns, limit the information content of logs, making it difficult to fully reconstruct execution from these traces. This paper presents a technique for modifying a program such that it can produce exactly those executions consistent with a given (partial) trace of events, enabling efficient analysis of the reduced program. Our method requires no additional history variables to track log events, and it can slice away code that does not execute in a given trace. We describe initial experiences with implementing our ideas by extending the CBMC bounded model checker for C programs. Applying our technique to a small, 400-line file system written in C, we obtain more than three orders of magnitude improvement in running time over a naïve approach based on adding history variables, along with fifty- to eighty-fold reductions in the sizes of the SAT problems solved.
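For contrast, here is a minimal, hypothetical sketch of the naive baseline this abstract compares against: a history variable plus an assumption at every logging point, so that a model checker such as CBMC only explores executions whose event sequence matches the recorded log. The event names, log contents, and the ASSUME macro are invented for illustration; the paper's contribution is a transformation that avoids such history variables and slices away code that cannot execute under the given trace.

```c
#include <stdlib.h>

/* Hypothetical partial event log recovered from a previous run. */
enum event { EV_OPEN = 1, EV_WRITE = 2, EV_CLOSE = 3 };
static const int logged[] = { EV_OPEN, EV_WRITE, EV_WRITE, EV_CLOSE };
static const int log_len  = sizeof logged / sizeof logged[0];

/* History variable: how many events the current execution has emitted. */
static int pos = 0;

#ifdef __CPROVER
  /* Under CBMC, prune executions that diverge from the log. */
  #define ASSUME(cond) __CPROVER_assume(cond)
#else
  /* Under ordinary compilation, simply discard inconsistent runs. */
  #define ASSUME(cond) do { if (!(cond)) exit(0); } while (0)
#endif

/* Called wherever the original program writes a log entry: the analysis is
   thereby restricted to executions consistent with the partial trace. */
static void emit(int ev) {
  ASSUME(pos < log_len && logged[pos] == ev);
  pos++;
}

int main(void) {
  emit(EV_OPEN);
  emit(EV_WRITE);
  emit(EV_WRITE);
  emit(EV_CLOSE);
  return 0;
}
```

Every such history variable and per-event assumption ends up in the SAT instance; eliminating them is what the abstract credits for the fifty- to eighty-fold reductions in SAT problem size.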
4.
Methionine dependence is a metabolic defect that occurs in many human tumor cell lines but not in normal, unestablished cell strains. Methionine-dependent tumor cell lines are unable to proliferate and arrest in the late S/G2 phase of the cell cycle when methionine is replaced by its immediate precursor homocysteine in the culture medium (MET-HCY+ medium). However, it is not known whether methionine dependence occurs in fresh patient tumors as it does in cell lines. In order to determine whether methionine dependence occurs in fresh patient tumors as well as in cell lines, we took advantage of the technique of sponge-gel-supported histoculture to grow tumors directly from surgery. We then measured nuclear DNA content by image analysis to determine the cell cycle position in MET-HCY+ compared to MET+HCY- medium in 21 human patient tumors. Human tumor cell lines found to be methionine dependent by cell count were used as positive controls; in MET-HCY+ medium they showed a marked reduction in the ratio of G1 cells to total cycling cells relative to that ratio in MET+HCY- medium. Late cell cycle arrest was therefore used as a marker of methionine dependence for histocultured patient tumors. We found that 5 of the 21 human tumors, including tumors of the colon, breast, ovary, and prostate and a melanoma, were methionine dependent based on cell cycle analysis. These data on fresh human tumors indicate that methionine dependence may occur frequently in the cancer patient population. Implications for potential therapy based on methionine dependence are discussed.
5.
Model checkers were originally developed to support the formal verification of high-level design models of distributed systems. Over the years, they have become unmatched in precision and performance in this domain. Research in model checking has meanwhile moved towards methods that allow us to reason directly about implementation-level artifacts (e.g., software code), instead of hand-crafted representations of those artifacts. This does not mean that there is no longer a place for high-level models, but it does mean that such models are used in a different way today. In the approach that we describe here, high-level models are used to represent the environment for which the code is to be verified, but not the application itself. The code of the application is executed as is by the model checker, while powerful forms of abstraction are applied on the fly to build the abstract state space that guides the verification process. This model-driven code checking method allows us to verify implementation-level code efficiently for high-level safety and liveness properties. In this paper, we give an overview of the methodology that supports this new paradigm of code verification.
6.
There has been considerable progress in the domain of software verification over the last few years. This advancement has been driven, to a large extent, by the emergence of powerful yet automated abstraction techniques such as predicate abstraction. However, the state-space explosion problem in model checking remains the chief obstacle to the practical verification of real-world distributed systems. Even in the case of purely sequential programs, a crucial requirement for making predicate abstraction effective is to use as few predicates as possible, because, in the worst case, the state space of the generated abstraction (and consequently the time and memory complexity of the abstraction process) is exponential in the number of predicates involved. In addition, for concurrent programs, the number of reachable states can grow exponentially with the number of components. We attempt to address these issues in the context of verifying concurrent (message-passing) C programs against safety specifications. More specifically, we present a fully automated compositional framework which combines two orthogonal abstraction techniques (predicate abstraction for data and action-guided abstraction for events) within a counterexample-guided abstraction refinement scheme. In this way, our algorithm incrementally increases the granularity of the abstractions until the specification is either established or refuted. Additionally, a key feature of our approach is that if a property can be proved to hold or not to hold based on a given finite set of predicates $\mathcal{P}$, the predicate refinement procedure we propose in this article automatically finds a minimal subset of $\mathcal{P}$ that is sufficient for the proof. This, along with our explicit use of compositionality, delays the onset of state-space explosion for as long as possible. We describe our approach in detail, and report on some very encouraging experimental results obtained with our tool MAGIC.
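To make the predicate-abstraction half of this concrete, here is a small, self-contained illustration of the general technique (this is not MAGIC's implementation, and all identifiers are invented): a concrete locking fragment and the boolean program obtained by abstracting it over a single predicate.

```c
#include <assert.h>
#include <stdbool.h>

/* Concrete fragment: the safety specification is that a held lock is never
   re-acquired and a free lock is never released. */
static int locked = 0;
void lock(void)   { assert(!locked); locked = 1; }
void unlock(void) { assert(locked);  locked = 0; }

/* Boolean program produced by predicate abstraction over the single
   predicate  b := (locked != 0).  Each concrete statement is replaced by
   its effect on the predicate, and each assertion by its translation. */
static bool b = false;            /* initially the lock is free */
void abs_lock(void)   { assert(!b); b = true; }
void abs_unlock(void) { assert(b);  b = false; }

int main(void) {
  /* A client that respects the locking protocol: the concrete code and its
     abstraction both pass their assertions. */
  lock();     unlock();
  abs_lock(); abs_unlock();
  return 0;
}
```

Here the single predicate captures all the state relevant to the property, so the abstraction is exact. When a predicate set is too coarse, the abstract transformer must fall back to nondeterministic choices, and the counterexample-guided refinement loop described in the abstract adds predicates (or refines the event abstraction) until spurious counterexamples disappear; keeping that predicate set minimal is what holds off state-space explosion.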
7.
What is a test case for? Sometimes, to expose a fault. Tests can also exercise code, use memory or time, or produce desired output. Given a desired effect, a test case can be seen as a cause, and its components divided into essential (required for the effect) and accidental. Delta debugging is used for removing accidents from failing test cases, producing smaller test cases that are easier to understand. This paper extends delta debugging by simplifying test cases with respect to arbitrary effects, a generalization called cause reduction. Suites produced by cause reduction provide effective quick tests for real-world programs. For Mozilla's JavaScript engine, the reduced suite is possibly even more effective for finding faults. The effectiveness of reduction-based suites persists through changes to the software, improving coverage by over 500 branches for versions up to 4 months later. Cause reduction has other applications, including improving seeded symbolic execution, where using reduced tests can often double the number of additional branches explored.
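The following hypothetical sketch shows the core move of cause reduction: the reduction criterion is an arbitrary effect predicate rather than "the test still fails", for instance "the test still covers a particular branch". It is not the paper's implementation; ddmin proper removes chunks of decreasing size, whereas this simplified loop greedily drops one component at a time.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* A test case is a sequence of components (e.g., API calls or input lines). */
#define MAX_LEN 8

/* The "effect" to preserve.  Classic delta debugging uses "the test still
   fails"; cause reduction allows any predicate, such as a coverage target.
   This stand-in treats component 3 immediately followed by component 7 as
   the essential cause of the effect. */
static bool has_effect(const int *t, size_t n) {
  for (size_t i = 0; i + 1 < n; i++)
    if (t[i] == 3 && t[i + 1] == 7)
      return true;
  return false;
}

/* Greedy reduction: repeatedly try dropping one component and keep the
   smaller test whenever the effect is preserved. */
static size_t reduce(int *t, size_t n) {
  bool changed = true;
  while (changed) {
    changed = false;
    for (size_t i = 0; i < n; i++) {
      int trial[MAX_LEN];
      memcpy(trial, t, i * sizeof *t);
      memcpy(trial + i, t + i + 1, (n - i - 1) * sizeof *t);
      if (has_effect(trial, n - 1)) {
        memcpy(t, trial, (n - 1) * sizeof *t);
        n--;
        changed = true;
        break;
      }
    }
  }
  return n;
}

int main(void) {
  int test[MAX_LEN] = { 1, 3, 5, 3, 7, 2, 9, 4 };
  size_t n = reduce(test, MAX_LEN);
  for (size_t i = 0; i < n; i++)
    printf("%d ", test[i]);   /* expected: a two-component test, "3 7" */
  printf("\n");
  return 0;
}
```

Running this shrinks the eight-component test to the two components that actually produce the effect; substituting a coverage- or output-based predicate for has_effect is what turns reduced failing tests into the quick-test suites described above.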
8.
An adaptive load forecasting algorithm that was originally developed for the one-hour time period has been extensively enhanced and implemented. The major enhancement is the ability to forecast total system hourly load as far ahead as five days. The focus is on the enhancements and implementation of the algorithm. The purpose is to describe the final form of the forecasting model and the overall forecasting procedure; the procedure for utilizing minimum/maximum daily weather forecasts made by the US Weather Bureau; the offline forecasting accuracy based on four years of historical hourly load and weather data; and the implementation of the algorithm on a desktop computer.
9.
10.
BACKGROUND: The objective of this work was to estimate the percentage of workers by industry that are exposed to defined concentrations of respirable crystalline silica dust. METHODS: An algorithm was used to estimate the percentage of total workers exposed to crystalline silica in 1993 at concentrations of at least 1, 2, 5, and 10 times the National Institute for Occupational Safety and Health (NIOSH) Recommended Exposure Limit (REL) of 0.05 mg/m3. Respirable crystalline silica air sampling data from regulatory compliance inspections performed by the Occupational Safety and Health Administration (OSHA), for the years 1979-1995, and recorded in the Integrated Management Information System (IMIS) were used to estimate exposures. Therefore, this work does not include industries such as mining and agriculture that are not covered by OSHA. The estimates are stratified by Standard Industrial Classification (SIC) codes. RESULTS: This work found that some of the highest respirable crystalline silica dust concentrations occurred in construction (masonry, heavy construction, and painting), iron and steel foundries (casting), and in metal services (sandblasting, grinding, or buffing of metal parts). It was found that 1.8% (13,800 workers) of the workers in SIC 174--Masonry, Stonework, Tile Setting, and Plastering--were exposed to at least 10 times the NIOSH REL. For SIC 162--Heavy Construction, Except Highway and Street Construction--this number is 1.3% (6,300 workers). SIC 172--Painting and Paper Hanging--which includes construction workers involved in sandblasting was found to have 1.9% (3,000 workers) exposed to at least 10 times the NIOSH REL. The industry that was found to have the highest percentage of workers (6%) exposed to at least the NIOSH REL was the cut stone and stone products industry. CONCLUSION: Not enough is being done to control exposure to respirable crystalline silica. Engineering controls should be instituted in the industries indicated by this work.
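As a purely illustrative sketch (the study's actual algorithm, its IMIS sampling data, and its SIC-code stratification are not reproduced here; all numbers below are invented), the core arithmetic behind such an estimate is the fraction of air samples at or above a multiple of the REL, applied to the employment of the industry in question.

```c
#include <stdio.h>
#include <stddef.h>

#define NIOSH_REL 0.05   /* respirable crystalline silica REL, mg/m3 */

/* Fraction of samples at or above k times the REL. */
static double fraction_exposed(const double *samples, size_t n, double k) {
  size_t hits = 0;
  for (size_t i = 0; i < n; i++)
    if (samples[i] >= k * NIOSH_REL)
      hits++;
  return n ? (double)hits / (double)n : 0.0;
}

int main(void) {
  /* Invented sample concentrations (mg/m3) for one hypothetical industry. */
  const double samples[] = { 0.02, 0.08, 0.55, 0.04, 0.61, 0.03, 0.10, 0.70 };
  const size_t n = sizeof samples / sizeof samples[0];
  const double workers_in_industry = 100000.0;   /* invented employment figure */

  double frac10 = fraction_exposed(samples, n, 10.0);
  printf("%.1f%% of samples >= 10x REL, ~%.0f workers if representative\n",
         100.0 * frac10, frac10 * workers_in_industry);
  return 0;
}
```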