Related Articles (20 similar documents found)
1.
Context: Software quality is a complex concept. Therefore, assessing and predicting it is still challenging in practice as well as in research. Activity-based quality models break down this complex concept into concrete definitions, more precisely facts about the system, process, and environment as well as their impact on activities performed on and with the system. However, these models lack an operationalisation that would allow them to be used in the assessment and prediction of quality. Bayesian networks have been shown to be a viable means for this task, incorporating variables with uncertainty.
Objective: The qualitative knowledge contained in activity-based quality models is an abundant basis for building Bayesian networks for quality assessment. This paper describes a four-step approach for systematically deriving a Bayesian network from an assessment goal and a quality model.
Method: The four steps of the approach are explained in detail and with running examples. Furthermore, an initial evaluation is performed, in which data from NASA projects and an open source system are obtained. The approach is applied to these data and its applicability is analysed.
Results: The approach is applicable to the data from the NASA projects and the open source system. However, the predictive results vary depending on the availability and quality of the data, especially the underlying general distributions.
Conclusion: The approach is viable in a realistic context but needs further investigation in case studies in order to analyse its predictive validity.
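To make the idea of such a quality Bayesian network concrete, here is a minimal hand-rolled sketch: a fact node influences an activity node, which in turn determines a quality attribute, and the posterior is computed by enumeration. The two-node structure, the variable names, and all probabilities are illustrative assumptions, not values from the paper.

```python
# Toy Bayesian network for activity-based quality assessment (hypothetical
# structure and probabilities; the paper derives these from a quality model).
# Nodes: Clones (fact) -> ModificationEffort (activity) -> Maintainability.

# Prior over the fact node: does the system contain many code clones?
P_clones = {True: 0.3, False: 0.7}

# CPT: probability that modification effort is HIGH given the clone fact.
P_effort_high = {True: 0.8, False: 0.2}

# CPT: probability that maintainability is GOOD given the effort level.
P_maint_good = {"high": 0.1, "low": 0.9}

def posterior_maintainability_good() -> float:
    """P(Maintainability = good), marginalising over the hidden nodes."""
    total = 0.0
    for clones, p_c in P_clones.items():
        for effort_high in (True, False):
            p_e = P_effort_high[clones] if effort_high else 1 - P_effort_high[clones]
            level = "high" if effort_high else "low"
            total += p_c * p_e * P_maint_good[level]
    return total

print(f"P(maintainability = good) = {posterior_maintainability_good():.3f}")
```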

2.
Identifying mislabeled training data with the aid of unlabeled data
This paper presents a new approach for identifying and eliminating mislabeled training instances for supervised learning algorithms. The novelty of this approach lies in the use of unlabeled instances to aid the detection of mislabeled training instances, in contrast with existing methods, which rely only upon the labeled training instances. Our approach is straightforward and can be applied to many existing noise detection methods with only marginal modifications. To assess the benefit of our approach, we chose two popular noise detection methods: majority filtering (MF) and consensus filtering (CF). MFAUD/CFAUD is the newly proposed variant of MF/CF that relies on our approach and denotes majority/consensus filtering with the aid of unlabeled data. An empirical study validates the superiority of our approach and shows that MFAUD and CFAUD can significantly improve the performance of MF and CF under different noise ratios and labeled ratios. In addition, the improvement is more remarkable when the noise ratio is greater.
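A minimal sketch of the idea, in the spirit of MFAUD: fold classifiers are retrained on folds enlarged with confidently self-labeled unlabeled instances, and a training instance is flagged as mislabeled when a majority of classifiers disagree with its recorded label. The fold count, confidence threshold, base learner, and demo data are assumptions, not the paper's exact algorithm.

```python
# Majority filtering aided by unlabeled data (sketch; parameters assumed).
import numpy as np
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeClassifier

def majority_filter_aud(X, y, X_unlabeled, n_folds=3, conf=0.9):
    votes = np.zeros(len(y))
    for train_idx, _ in KFold(n_folds, shuffle=True, random_state=0).split(X):
        clf = DecisionTreeClassifier(random_state=0).fit(X[train_idx], y[train_idx])
        proba = clf.predict_proba(X_unlabeled)
        sure = proba.max(axis=1) >= conf          # confidently self-labeled
        if sure.any():                            # enlarge the fold and retrain
            X_aug = np.vstack([X[train_idx], X_unlabeled[sure]])
            y_aug = np.concatenate([y[train_idx],
                                    clf.classes_[proba[sure].argmax(axis=1)]])
            clf = DecisionTreeClassifier(random_state=0).fit(X_aug, y_aug)
        votes += clf.predict(X) != y              # count per-instance disagreements
    return votes > n_folds / 2                    # majority disagrees => suspected noise

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2)); y = (X[:, 0] > 0).astype(int)
y[:5] = 1 - y[:5]                                 # inject label noise
print(np.flatnonzero(majority_filter_aud(X, y, rng.normal(size=(40, 2)))))
```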

3.
Context: One of the key requirements for the code is conformance with the architecture. Architectural drift implies the diverging of the implemented code from the architecture design of the system. Manually checking the consistency between the implemented code and the architecture can be intractable and cumbersome for large-scale systems.
Objective: This article proposes a holistic, automated architecture drift analysis approach that explicitly focuses on the adoption of architecture views. The approach builds on, complements, and enhances existing architecture conformance analysis methods, which either do not adopt a holistic approach or fail to address architecture viewpoints.
Method: A model-driven development approach is adopted in which architecture views are represented as specifications of domain-specific languages. The code, in turn, is analyzed and the architectural view specifications are reconstructed, which are then automatically checked against the corresponding architecture models.
Results: To illustrate the approach, we applied systematic case study research to an architecture drift analysis of the business-to-customer (B2C) system within a large-scale software company.
Conclusion: The case study research showed that divergences and absences of architectural elements could be detected in a cost-effective manner with the proposed approach.
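A minimal illustration of the underlying drift check: reconstruct module dependencies from Python sources and compare them with an intended module view. The allowed-dependency map and the package-equals-module convention are invented for the example; the paper's approach reconstructs views expressed in domain-specific languages, not Python imports.

```python
# Detect divergences between implemented dependencies and an intended view.
import ast
from pathlib import Path

ALLOWED = {"ui": {"service"}, "service": {"data"}, "data": set()}  # intended view

def imported_modules(path: Path) -> set[str]:
    """Top-level modules imported by one source file."""
    tree = ast.parse(path.read_text())
    mods: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            mods.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            mods.add(node.module.split(".")[0])
    return mods

def drift_report(root: Path) -> None:
    for src in root.rglob("*.py"):
        module = src.parent.name            # assume package name = architectural module
        for target in imported_modules(src) & set(ALLOWED) - {module}:
            if target not in ALLOWED.get(module, set()):
                print(f"divergence: {module} -> {target} ({src})")

# drift_report(Path("src"))  # point at the source tree to analyze
```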

4.
Context: State machine diagrams are a powerful means to describe the behavior of reactive systems. Unfortunately, the implementation of state machines is difficult, because state machine concepts, like states, events and transitions, are not directly supported in commonly used programming languages. Most of the implementation approaches known so far have one or more serious drawbacks: they are difficult to understand and maintain, perform poorly, depend on the properties of a specific programming language, or do not implement the more advanced state machine features like hierarchy, concurrency or history.
Objective: This paper proposes and examines an approach to implement state machines where both states and events are objects. Because the reaction of the state machine depends on two objects (state and event), a method known as double dispatch is used to invoke the transition between the states. The aim of this work is to explore this approach in detail.
Method: To prove the usefulness of the proposed approach, an example was implemented with the proposed approach as well as with other commonly known approaches. The implementation strategies are then compared with each other with respect to run-time, code size, maintainability and portability.
Results: The presented approach executes fast but needs slightly more memory than other approaches. It supports hierarchy, concurrency and history, is human-authorable, easy to understand and easy to modify. Because of its pure object-oriented nature, depending only on inheritance and late binding, it is extensible and can be implemented in a wide variety of programming languages.
Conclusion: The results show that the presented approach is a useful way to implement state machines, even on small micro-controllers.
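A compact sketch of the states-and-events-as-objects idea, in Python rather than the paper's target languages: the transition is selected by dispatching first on the event, then on the current state. The turnstile states and events are invented for illustration, and advanced features (hierarchy, concurrency, history) are omitted.

```python
# Double-dispatch state machine: event dispatches to the state's handler.
class Event:
    def dispatch(self, state): return state.default(self)

class Coin(Event):
    def dispatch(self, state): return state.on_coin(self)

class Push(Event):
    def dispatch(self, state): return state.on_push(self)

class State:
    def default(self, event): return self          # ignore unhandled events
    def on_coin(self, event): return self.default(event)
    def on_push(self, event): return self.default(event)

class Locked(State):
    def on_coin(self, event): return Unlocked()    # a coin unlocks the turnstile

class Unlocked(State):
    def on_push(self, event): return Locked()      # passing through relocks it

state = State()
state = Locked()
for event in (Coin(), Push(), Push()):
    state = event.dispatch(state)                  # dispatch on event, then on state
    print(type(event).__name__, "->", type(state).__name__)
```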

5.
6.
Context: Numerous software design patterns have been introduced and cataloged, either as canonical or as variant solutions to solve a design problem. Existing automatic techniques for design pattern selection aid novice software developers in selecting the more appropriate design pattern(s) from the list of applicable patterns to solve a design problem in the design phase of the software development life cycle.
Goal: However, the existing automatic techniques are limited by their reliance on semi-formal specifications, the multi-class problem, the need for an adequate sample size for precise learning, and individual classifier training to determine a candidate design pattern class and suggest the more appropriate pattern(s).
Method: To address these issues, we exploit a text-categorization-based approach via Fuzzy c-means (an unsupervised learning technique) that aims to present a systematic way to group similar design patterns and suggest the appropriate design pattern(s) to developers for the specification of a given design problem. We also propose an evaluation model to assess the effectiveness of the proposed approach in the context of several real design problems and design pattern collections. Subsequently, we propose a new feature selection method, Ensemble-IG, to overcome the multi-class problem and improve the classification performance of the proposed approach.
Results: The promising experimental results suggest the applicability of the proposed approach in the domain of classification and selection of appropriate design patterns. We also observed a significant improvement in the learning precision of the proposed approach through Ensemble-IG.
Conclusion: The proposed approach has four advantages over previous work: first, a semi-formal specification of design patterns is not required as a prerequisite; second, ground-truth class label assignment is not mandatory; third, no classifier training is needed for each design pattern class; and fourth, an adequate sample size is not required for precise learning.
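A hand-rolled sketch of the core step, fuzzy c-means over TF-IDF vectors of pattern descriptions, to show how soft memberships group similar texts. The corpus, the number of groups c, and the fuzzifier m are toy assumptions; the paper's pipeline (including Ensemble-IG feature selection) is considerably more elaborate.

```python
# Fuzzy c-means clustering of design-problem texts (toy corpus and parameters).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["encapsulate a request as an object",
        "define a family of algorithms and make them interchangeable",
        "provide a surrogate to control access to another object"]
X = TfidfVectorizer().fit_transform(docs).toarray()

def fuzzy_cmeans(X, c=2, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))          # soft membership matrix
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]    # weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-9
        U = 1.0 / (d ** (2 / (m - 1)))                  # standard FCM update
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

U, _ = fuzzy_cmeans(X)
print(np.round(U, 2))   # membership of each description in each group
```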

7.
Context: The paper addresses the use of a Software Product Line approach in the context of developing software for a high-integrity, regulated domain such as civil aerospace. The success of a Software Product Line approach must be judged on whether useful products can be developed more effectively (lower cost, reduced schedule) than with traditional single-system approaches. When developing products for regulated domains, the usefulness of the product is critically dependent on the ability of the development process to provide approval evidence for scrutiny by the regulating authority.
Objective: The objective of the work described is to propose a framework for arguing that a product instantiated using a Software Product Line approach can be approved and used within a regulated domain, such that the development cost of that product would be less than if it had been developed in isolation.
Method: The paper identifies and surveys the issues relating to the adoption of Software Product Lines as currently understood (including related technologies such as feature modelling, component-based development and model transformation) when applied to high-integrity software development. We develop an argument framework using Goal Structuring Notation to structure the claims made and the evidence required to support the approval of an instantiated product in such domains. Any unsubstantiated claims or missing or sub-standard evidence are identified, and we propose potential approaches or pose research questions to help address them.
Results: The paper provides an argument framework supporting the use of a Software Product Line approach within a high-integrity regulated domain. It shows how lifecycle evidence can be collected, managed and used to credibly support a regulatory approval process, and provides a detailed example showing how claims regarding model transformation may be supported. Any attempt to use a Software Product Line approach in a regulated domain will need to provide evidence in accordance with the argument outlined in the paper.
Conclusion: Product Line practices may complicate the generation of convincing evidence for approval of instantiated products, but it is possible to define a credible Trusted Product Line approach.
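To make the Goal Structuring Notation (GSN) idea concrete, here is a tiny data model in which goals are decomposed via strategies and ultimately supported by solutions (evidence); unsupported claims can then be found mechanically. The element texts are invented and do not come from the paper's example.

```python
# Minimal GSN-style argument tree with a check for undeveloped claims.
from dataclasses import dataclass, field

@dataclass
class GsnNode:
    kind: str                      # "Goal" | "Strategy" | "Solution"
    text: str
    children: list["GsnNode"] = field(default_factory=list)

    def undeveloped(self):
        """Yield goals/strategies that lack supporting evidence."""
        if not self.children and self.kind != "Solution":
            yield self
        for child in self.children:
            yield from child.undeveloped()

arg = GsnNode("Goal", "Instantiated product is acceptably safe", [
    GsnNode("Strategy", "Argue over product-line assets and instantiation", [
        GsnNode("Goal", "Model transformation preserves verified properties", [
            GsnNode("Solution", "Transformation qualification evidence")]),
        GsnNode("Goal", "Reused core assets satisfy the regulator's objectives")])])

for node in arg.undeveloped():
    print("undeveloped:", node.kind, "-", node.text)
```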

8.
Context: A considerable portion of today's software systems is deployed in the embedded control domain. Embedded control software deals with controlling a physical system, and as such, models of physical characteristics become part of the embedded control software.
Objective: Due to the evolution of system properties and increasing complexity, faults can be left undetected in these models of physical characteristics. Therefore, their accuracy must be verified at runtime. Traditional runtime verification techniques that are based on states/events in software execution are inadequate in this case: the behavior suggested by models of physical characteristics cannot be mapped to behavioral properties of software. Moreover, implementation in a general-purpose programming language makes these models hard to locate and verify. Therefore, this paper proposes a novel approach to perform runtime verification of models of physical characteristics in embedded control software.
Method: The development of an approach for runtime verification of models of physical characteristics, and the application of the approach to two industrial case studies from the printing systems domain.
Results: This paper presents a novel approach to specify models of physical characteristics using a domain-specific language, to define monitors that detect inconsistencies by exploiting redundancy in these models, and to realize these monitors using an aspect-oriented approach. We complement runtime verification with static analysis to verify the composition of the domain-specific models with the control software written in a general-purpose language.
Conclusions: The presented approach enables runtime verification of implemented models of physical characteristics, detecting inconsistencies in these models as well as broken hardware components and wear and tear of hardware in the physical system. The application of declarative aspect-oriented techniques to realize runtime verification monitors increases modularity and provides the ability to statically verify this realization. The complementary static and runtime verification techniques increase the reliability of embedded control software.
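A minimal sketch of a redundancy-exploiting monitor: two independent models estimate the same physical quantity, and a disagreement beyond a tolerance flags a faulty model, a broken sensor, or wear. The toy velocity models, numbers, and tolerance are assumptions; the paper specifies such monitors in a DSL and weaves them in with aspects rather than calling them inline.

```python
# Runtime consistency monitor over two redundant models of sheet velocity.
TOLERANCE = 0.05  # relative disagreement allowed between redundant estimates

def sheet_velocity_from_encoder(ticks_per_s: float, radius_m: float) -> float:
    return ticks_per_s * radius_m           # model 1: drive-roller encoder

def sheet_velocity_from_sensors(dist_m: float, dt_s: float) -> float:
    return dist_m / dt_s                    # model 2: two optical sheet sensors

def check_consistency(v1: float, v2: float) -> bool:
    ok = abs(v1 - v2) <= TOLERANCE * max(abs(v1), abs(v2), 1e-9)
    if not ok:                              # inconsistency: model fault or wear
        print(f"inconsistency: encoder={v1:.3f} m/s, sensors={v2:.3f} m/s")
    return ok

check_consistency(sheet_velocity_from_encoder(120.0, 0.005),
                  sheet_velocity_from_sensors(0.70, 1.0))
```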

9.
Context: Along with expert judgment, analogy-based estimation, and algorithmic methods (such as function point analysis and COCOMO), Least Squares Regression (LSR) has been one of the most commonly studied software effort estimation methods. However, an effort estimation model using LSR (a single LSR model) is highly affected by the data distribution. Specifically, if the data set is scattered and the data do not sit closely on the single LSR model line (i.e., do not map closely to a linear structure), then the model usually shows poor performance. In order to overcome this drawback of the LSR model, a data-partitioning-based approach can be considered as one solution to alleviate the effect of data distribution. Even though clustering-based approaches have been introduced, they still have potential problems in providing accurate and stable effort estimates.
Objective: In this paper, we propose a new data-partitioning-based approach to achieve more accurate and stable effort estimates via LSR. This approach also provides an effort prediction interval that is useful for describing the uncertainty of the estimates.
Method: Empirical experiments are performed to evaluate the performance of the proposed approach by comparing it with the basic LSR approach and clustering-based approaches, based on industrial data sets (two subsets of the ISBSG (Release 9) data set and one industrial data set collected from a banking institution).
Results: The experimental results show that the proposed approach not only improves the accuracy of effort estimation more significantly than the other approaches, but also achieves robust and stable results across different degrees of data partitioning.
Conclusion: Compared with the other considered approaches, the proposed approach shows superior performance by alleviating the effect of data distribution, which is a major practical issue in software effort estimation.
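A sketch of the general idea: split the data set by project size, fit a separate least-squares model per partition, and derive a simple prediction interval from the partition's residuals. The data, the log-log model form, the partition boundary, and the 2-sigma interval are all assumptions, not the paper's algorithm or results.

```python
# Data-partitioning-based LSR effort estimation with a naive interval.
import numpy as np

size   = np.array([10, 25, 40, 80, 120, 300, 450, 600, 900, 1200])   # e.g. FP
effort = np.array([120, 260, 400, 700, 1100, 2600, 3900, 5200, 8100, 10500])

def fit_partition(mask):
    x, y = np.log(size[mask]), np.log(effort[mask])
    slope, intercept = np.polyfit(x, y, 1)          # least-squares line
    resid = y - (slope * x + intercept)
    return slope, intercept, resid.std()

small, large = size < 200, size >= 200              # partition boundary assumed

def estimate(s):
    slope, intercept, sigma = fit_partition(small if s < 200 else large)
    mid = slope * np.log(s) + intercept
    return np.exp(mid), (np.exp(mid - 2 * sigma), np.exp(mid + 2 * sigma))

est, (lo, hi) = estimate(150)
print(f"estimate: {est:.0f} person-hours, interval ({lo:.0f}, {hi:.0f})")
```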

10.
Context: Software Requirement Specifications (SRSs) are central to software lifecycles. An SRS defines the functionalities and constraints of a desired software system, and hence it often serves as a reference for further development. Software lifecycles concerned with the conversion of traditional systems into more service-oriented infrastructures can benefit from understanding potential shared capabilities through the analysis of SRSs.
Objective: In this paper, we propose an automated approach capable of recommending shared software services from multiple text-based SRSs created by different organizations. Our goal is to facilitate the identification of overlapping requirements in these specifications and subsequently recommend shared components, which promotes software reuse. The shared components can be implemented as services that are invoked across different systems.
Method: Our approach leverages the syntactic similarity of the SRS text, augmented with semantic information derived from the WordNet database. This work extends our earlier studies by introducing an algorithm that utilizes noun, verb, and predicate relations to enhance the discovery of equivalent requirements and the recommendation of reusable services. A prototype system is implemented to evaluate the approach, and experimental results have shown effective recommendation of requirements and their realized shared services.
Results: Our automatic recommendation approach generates recommendations in a few minutes, compared to the 9 hours required when services are manually inspected by developers. Our approach is also able to recommend services that are overlooked by those developers, and to identify similarity between requirements even when the requirements are reworded.
Conclusion: We show through experimentation that we can efficiently recommend services by leveraging both the syntactic structure and the semantic information of a requirements document, and that our approach is more effective than the manual selection of services by experts. We also show that our approach is effective in detecting similar requirements within a single system, thereby discovering opportunities for software reuse.
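A small sketch of requirement similarity that augments token matching with WordNet path similarity, so reworded requirements can still match. It requires nltk with the WordNet corpus installed (nltk.download('wordnet')); the averaging scheme and example requirements are assumptions and much simpler than the paper's noun/verb/predicate algorithm.

```python
# WordNet-augmented similarity between two requirement sentences (sketch).
from itertools import product
from nltk.corpus import wordnet as wn   # needs: nltk.download('wordnet')

def word_sim(w1: str, w2: str) -> float:
    if w1 == w2:
        return 1.0
    pairs = product(wn.synsets(w1), wn.synsets(w2))
    return max((s1.path_similarity(s2) or 0.0 for s1, s2 in pairs), default=0.0)

def requirement_sim(r1: str, r2: str) -> float:
    t1, t2 = r1.lower().split(), r2.lower().split()
    # For each word in r1, take its best semantic match in r2, then average.
    return sum(max(word_sim(a, b) for b in t2) for a in t1) / len(t1)

r1 = "the system shall authenticate users before purchase"
r2 = "customers must log in prior to buying"
print(f"similarity: {requirement_sim(r1, r2):.2f}")
```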

11.
Context: In many organizational environments, critical tasks exist which – in exceptional cases such as an emergency – must be performed by a subject although he/she is usually not authorized to perform these tasks. Break-glass policies have been introduced as a sophisticated exception-handling mechanism to resolve such situations. They enable certain subjects to break or override the standard access control policies of an information system in a controlled manner.
Objective: In the context of business process modeling, a number of approaches exist that allow for the formal specification and modeling of process-related access control concepts. However, corresponding support for break-glass policies is still missing. In this paper, we aim at specifying a break-glass extension for process-related role-based access control (RBAC) models.
Method: We use model-driven development (MDD) techniques to provide an integrated, tool-supported approach for the definition and enforcement of break-glass policies in process-aware information systems. In particular, we provide modeling support on the computation independent model (CIM) layer as well as on the platform independent model (PIM) and platform specific model (PSM) layers.
Results: Our approach is generic in the sense that it can be used to extend process-aware information systems or process modeling languages with support for process-related RBAC and corresponding break-glass policies. Based on the formal CIM layer metamodel, we present a UML extension on the PIM layer that allows for the integrated modeling of processes and process-related break-glass policies via extended UML Activity diagrams. We evaluated our approach in a case study on real-world processes. Moreover, we implemented our approach at the PSM layer as an extension to the BusinessActivity library and runtime engine.
Conclusion: Our integrated modeling approach for process-related break-glass policies allows for specifying break-glass rules in process-aware information systems.
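A minimal runtime sketch of the break-glass idea at the enforcement level: a subject without the required role may override the standard policy when a break-glass rule applies, and every override is logged for later review. The roles, rules, and tasks are invented; the paper's contribution is the modeling support (CIM/PIM/PSM), not this toy policy decision point.

```python
# Break-glass extension to a role-permission check, with an audit trail.
from dataclasses import dataclass, field

@dataclass
class BreakGlassRBAC:
    role_perms: dict                       # role -> set of permitted tasks
    break_glass: dict                      # task -> roles allowed to override
    audit: list = field(default_factory=list)

    def can_perform(self, subject, role, task, emergency=False) -> bool:
        if task in self.role_perms.get(role, set()):
            return True                    # standard RBAC decision
        if emergency and role in self.break_glass.get(task, set()):
            self.audit.append((subject, role, task))   # controlled override, logged
            return True
        return False

pep = BreakGlassRBAC(role_perms={"physician": {"prescribe"}},
                     break_glass={"prescribe": {"nurse"}})
print(pep.can_perform("bob", "nurse", "prescribe"))                  # False
print(pep.can_perform("bob", "nurse", "prescribe", emergency=True))  # True, audited
print(pep.audit)
```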

12.
Context: Software clustering is a key technique used in reverse engineering to recover a high-level abstraction of the software when resources are limited. Very limited research has explicitly discussed the problem of finding the optimum set of clusters in the design and how to penalize the formation of singleton clusters during clustering.
Objective: This paper attempts to enhance existing agglomerative clustering algorithms by introducing a complementary mechanism. To solve the architecture recovery problem, the proposed approach focuses on minimizing redundant effort and penalizing the formation of singleton clusters during clustering, while maintaining the integrity of the results.
Method: An automated solution for cutting a dendrogram, based on least-squares regression, is presented in order to find the best cut level. A dendrogram is a tree diagram that shows the taxonomic relationships of clusters of software entities. Moreover, a factor to penalize clusters that would form singletons is introduced. Simulations were performed on two open-source projects. The proposed approach was compared against the exhaustive and highest-gap dendrogram cutting methods, as well as two well-known cluster validity indices, namely Dunn's index and the Davies-Bouldin index.
Results: When comparing our clustering results against the original package diagram, our approach achieved an average accuracy rate of 90.07% across two simulations after the utility classes were removed. Utility classes in the source code affect the accuracy of software clustering owing to their omnipresent behavior. The proposed approach also successfully penalized the formation of singleton clusters during clustering.
Conclusion: The evaluation indicates that the proposed approach can enhance the quality of clustering results by guiding software maintainers through the cut-point selection process. The proposed approach can be used as a complementary mechanism to improve the effectiveness of existing clustering algorithms.
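A sketch of automated cut selection with a singleton penalty: each candidate cut height of the dendrogram is scored by a within-cluster sum-of-squared-errors term (a stand-in for the paper's least-squares regression criterion) plus a penalty per singleton cluster, and the best-scoring cut is kept. The synthetic data and penalty weight are assumptions.

```python
# Pick a dendrogram cut automatically, penalising singleton clusters.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, .3, (10, 2)),
               rng.normal(3, .3, (10, 2)),
               rng.normal([0, 3], .3, (10, 2))])
Z = linkage(X, method="average")

def score(labels, penalty=2.0):
    sse = sum(((X[labels == c] - X[labels == c].mean(axis=0)) ** 2).sum()
              for c in np.unique(labels))
    singletons = sum(1 for c in np.unique(labels) if (labels == c).sum() == 1)
    return sse + penalty * singletons      # penalise singleton formation

# Candidate cuts are the merge heights recorded in the linkage matrix.
best = min((fcluster(Z, t, criterion="distance") for t in Z[:, 2]), key=score)
print("clusters found:", len(np.unique(best)))
```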

13.
Context: The agile development paradigm has been extensively adopted in industry. This adoption is highly dependent on the knowledge and good practices applied by the most experienced practitioners in organizations. Hence, it would be valuable to have appropriate support for preserving and systematically using this expert knowledge when configuring agile development processes aligned with organizational standards.
Objective: This paper presents a model-driven approach for representing and selecting good practices to configure agile practices in development processes aligned with organizational development practices and quality standards.
Method: We define a conceptual approach for configuring agile development processes that fulfills enterprise good practices and external quality standards. This approach was implemented in a tool suite and applied to an industrial development scenario related to ISO 9001 certification.
Results: The approach was implemented in a model-driven tool that provides automatic support for identifying good practices when configuring agile development processes. The tool also verifies consistency with development methods and quality standards, such as ISO 9001.
Conclusions: The results obtained from the industrial application indicate that practitioners can reuse expert knowledge to configure agile development processes aligned with quality certifications. Moreover, the approach also facilitates the tailoring of agile practices into concrete development processes that take advantage of organizational good practices.

14.
Context: Applying a refactoring operation creates a new set of dependencies in the revised design, as well as a new set of further refactoring candidates. Studies of stepwise refactoring recommendation approaches have applied one refactoring at a time, which is inefficient because identifying the best candidate in each iteration of the refactoring identification process is computation-intensive. It is therefore desirable to accurately identify multiple independent candidates in order to enhance the efficiency of the refactoring process.
Objective: We propose an automated approach to identify multiple refactorings that can be applied simultaneously to maximize the maintainability improvement of software. Our approach attains the same degree of maintainability enhancement as identifying the single best refactoring per iteration, but in fewer iterations (at lower computation cost).
Method: The concept of the maximal independent set (MIS) enables us to identify multiple refactoring operations that can be applied simultaneously. Each MIS contains a group of refactoring candidates that neither affect (i.e., enable or disable) each other nor influence each other's maintainability effect. A refactoring effect delta table quantifies the degree of maintainability improvement of each elementary candidate. In each iteration of the refactoring identification process, the multiple refactorings that best improve maintainability are selected from among the sets of refactoring candidates (MISs).
Results: We demonstrate the effectiveness and efficiency of the proposed approach by simulating the refactoring operations on several large-scale open source projects, such as jEdit, Columba, and JGit. The results show that our proposed approach can improve maintainability by the same degree as, or to a better extent than, the competing method (choosing one refactoring candidate at a time) in a significantly smaller number of iterations. Thus, applying multiple refactorings at a time is both effective and efficient.
Conclusion: Our proposed approach helps improve the maintainability as well as the productivity of refactoring identification.
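A greedy sketch of the selection step: a conflict relation connects candidates that enable/disable each other or whose maintainability effects interact, and an independent set is grown in descending order of (hypothetical) delta-table scores. A greedy pass yields a maximal, not necessarily maximum, independent set; the candidate names, scores, and conflicts are invented.

```python
# Select multiple non-conflicting refactorings to apply simultaneously.
def mis_refactorings(deltas: dict, conflicts: set) -> list:
    chosen = []
    for cand in sorted(deltas, key=deltas.get, reverse=True):
        independent = all(frozenset((cand, c)) not in conflicts for c in chosen)
        if deltas[cand] > 0 and independent:
            chosen.append(cand)            # safe to apply alongside the others
    return chosen

deltas = {"move_method_A": 0.12, "extract_class_B": 0.30,
          "pull_up_C": 0.07, "inline_D": -0.02}
conflicts = {frozenset(("move_method_A", "extract_class_B"))}
print(mis_refactorings(deltas, conflicts))   # ['extract_class_B', 'pull_up_C']
```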

15.
Context: Adaptive random testing (ART), originally proposed as an enhancement of random testing, is often criticized for the high computation overhead of many ART algorithms. Mirror ART (MART) is a novel approach that can be generally applied to improve the efficiency of various ART algorithms, based on the combination of "divide-and-conquer" and "heuristic" strategies.
Objective: The computation overhead of the existing MART methods is actually of the same order of magnitude as that of the original ART algorithms. In this paper, we aim to further decrease the order of computation overhead for MART.
Method: We conjecture that the mirroring scheme in MART should be dynamic instead of static to deliver higher efficiency. We thus propose a new approach, namely dynamic mirror ART (DMART), which incrementally partitions the input domain and adopts new mirror functions.
Results: Our simulations demonstrate that the new DMART approach delivers failure-detection effectiveness comparable to the original MART and ART algorithms while having much lower computation overhead. The experimental studies further show that the new approach also delivers better and more reliable performance on programs with failure-unrelated parameters.
Conclusion: In general, DMART is much more cost-effective than MART. Since its mirroring scheme is independent of concrete ART algorithms, DMART can be generally applied to improve the cost-effectiveness of various ART algorithms.
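A one-dimensional sketch of the mirroring idea that MART builds on: the expensive ART selection (max-min distance over a candidate set) runs only in a source subdomain, and each chosen input is mirrored into the other subdomains by translation. The static partitioning shown here is plain MART; DMART's dynamic, incremental repartitioning is not reproduced. Partition count and candidate-set size are assumptions.

```python
# Mirror ART in 1D: ART in one subdomain, mirrored tests in the others.
import random
random.seed(1)

def art_next(executed, lo, hi, k=10):
    """Pick, from k random candidates, the one farthest from executed inputs."""
    candidates = [random.uniform(lo, hi) for _ in range(k)]
    if not executed:
        return candidates[0]
    return max(candidates, key=lambda c: min(abs(c - e) for e in executed))

def mirror_art(n_tests, domain=(0.0, 1.0), partitions=4):
    lo, hi = domain
    width = (hi - lo) / partitions
    executed, tests = [], []
    while len(tests) < n_tests:
        t = art_next(executed, lo, lo + width)   # ART only in the source subdomain
        executed.append(t)
        for p in range(partitions):              # mirror by translation
            tests.append(t + p * width)
    return tests[:n_tests]

print([round(t, 3) for t in mirror_art(8)])
```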

16.
Context: Since the emergence of the aspect-oriented paradigm, several studies have been conducted to assess the contribution of this new paradigm compared to the object-oriented paradigm. In addition to this type of study, we also need comparative studies that assess aspect approaches against one another. The motivations for the latter include enhancing each aspect approach, devising hybrid approaches, or simply helping developers choose the approach suited to their needs. Comparing advanced separation-of-concerns approaches is the context of our work.
Objective: We aim to assess how aspect approaches deal with crosscutting concerns. This assessment is based on quantitative attributes, such as coupling and cohesion, that evaluate modularity, as well as on qualitative observations.
Method: We selected three well-known aspect approaches: AspectJ, JBoss AOP and CaesarJ, all three based on Java. We then conducted a comparative study using the GoF design patterns. In order to be fair, we asked a group of Master's students to implement all patterns with the three approaches. Using these implementations as hypothetical benchmarks allowed us to carry out two kinds of comparison: a quantitative one based on structural and performance metrics, and a qualitative one based on observations collected during the implementation phase.
Results: The quantitative comparison shows some advantages, such as the use of fewer components with AspectJ and the strong cohesion of CaesarJ, and weaknesses, such as the high internal coupling caused by the inner classes of CaesarJ. The qualitative comparison provides observations about each approach's understandability and other qualitative aspects.
Conclusion: This comparison highlighted the strengths and weaknesses of each approach, and provides a reference that can help in choosing the right approach during software development, enhancing aspect approaches, or devising hybrid approaches that combine the best features.

17.
Context: Fault localization is an important and expensive activity in software debugging. Previous studies indicated that statistically-based fault-localization techniques are effective in prioritizing the possibly faulty statements with relatively low computational complexity, but prior work on statistical analysis has not fully investigated the behavior-state information of each program element.
Objective: The objective of this paper is to propose an effective fault-localization approach based on the analysis of state dependence information between program elements.
Method: In this paper, state dependency is proposed to describe the control-flow dependence between statements in particular states. A state dependency probabilistic model uses path profiles to analyze the state dependency information. A fault-localization approach is then proposed to locate faults by differentiating the state dependencies in passed and failed test cases.
Results: We evaluated the fault-localization effectiveness of our approach in experiments on the Siemens programs and four UNIX programs. Furthermore, we compared our approach with state-of-the-art fault-localization methods such as SOBER, Tarantula, and CP. The experimental results show that our approach can locate more faults than the other methods in every range on the Siemens programs, and the overall efficiency of our approach in the range of 10–30% of analyzed source code is higher than that of the other methods on the UNIX programs.
Conclusion: Our studies show that our approach consistently outperforms the other evaluated techniques in terms of fault-localization effectiveness on the Siemens programs. Moreover, our approach is highly effective even when very few test cases are available.
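For readers unfamiliar with statistical fault localization, here is the Tarantula baseline that the paper compares against: each statement's suspiciousness is computed from how often failed versus passed tests cover it. The coverage counts below are invented; the paper's own approach additionally weighs state dependencies between statements, which this baseline ignores.

```python
# Tarantula-style suspiciousness ranking from per-statement coverage counts.
def tarantula(cov_failed, cov_passed, total_failed, total_passed):
    """cov_*: per-statement counts of failed/passed tests covering it."""
    scores = {}
    for stmt in cov_failed.keys() | cov_passed.keys():
        f = cov_failed.get(stmt, 0) / total_failed if total_failed else 0.0
        p = cov_passed.get(stmt, 0) / total_passed if total_passed else 0.0
        scores[stmt] = f / (f + p) if f + p else 0.0
    return sorted(scores.items(), key=lambda kv: -kv[1])  # most suspicious first

ranking = tarantula(cov_failed={"s3": 4, "s7": 1}, cov_passed={"s3": 1, "s7": 9},
                    total_failed=4, total_passed=10)
print(ranking)   # s3 ranks above s7
```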

18.
Context: An experiment-driven approach to software product and service development is gaining increasing attention as a way to channel limited resources into the efficient creation of customer value. In this approach, software capabilities are developed incrementally and validated in continuous experiments with stakeholders such as customers and users. The experiments provide factual feedback for guiding subsequent development.
Objective: This paper explores the state of the practice of experimentation in the software industry. It also identifies the key challenges and success factors that practitioners associate with the approach.
Method: A qualitative survey based on semi-structured interviews and thematic coding analysis was conducted. Ten Finnish software development companies, represented by thirteen interviewees, participated in the study.
Results: The study found that although the principles of continuous experimentation resonated with industry practitioners, the state of the practice is not yet mature. In particular, experimentation is rarely systematic and continuous. Key challenges relate to changing the organizational culture, accelerating the development cycle speed, and finding the right measures for customer value and product success. Success factors include a supportive organizational culture, deep customer and domain knowledge, and the availability of the relevant skills and tools to conduct experiments.
Conclusions: It is concluded that the major issues in moving towards continuous experimentation are at the organizational level; the most significant technical challenges have been solved. An evolutionary approach is proposed as a way to transition towards experiment-driven development.

19.
Context: A Computation Independent Model (CIM), as a business model, describes the requirements and environment of a business system and guides its design and development; it is key to software success. Although many studies currently focus on model-driven development (MDD), that research largely addresses PIM-level and PSM-level models, and few studies have dealt with CIM-level modelling for cases in which the requirements are unclear or incomplete.
Objective: This paper proposes a CIM-level modelling approach that applies stepwise refinement to model the CIM level, starting from a high-level goal model and proceeding to a lower-level business process model. A key advantage of our approach is the combination of the requirements model with the business model, which helps software engineers to define business models precisely for cases in which the requirements are unclear or incomplete.
Method: Based on the model-driven approach, this paper proposes a set of models at the CIM level and model transformations to connect these models. Accordingly, the formalisation approach of this paper involves formalising the goal model using category theory, and the scenario model and business process model using Petri nets.
Results: We have defined a set of metamodels and transformation rules making it possible to automatically obtain a scenario model from the goal model and a business process model from the scenario model. At the same time, we have defined mapping rules to formalise these models. Our proposed CIM modelling and formalisation approaches are implemented in an MDA tool and have been empirically validated in a travel agency case study.
Conclusion: This study shows how a CIM modelling approach helps to build a complete and consistent model at the CIM level for cases in which the requirements are unclear or incomplete in advance.
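A toy model-to-model transformation in the spirit of the CIM refinement described above: a goal tree is flattened into an ordered scenario, which is then turned into linear business-process flows. The metamodels are reduced to dataclasses and the travel-agency goals are invented; the paper's transformations operate on far richer models with category-theory and Petri-net formalisations.

```python
# Goal model -> scenario model -> business process model (toy refinement).
from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    subgoals: list["Goal"] = field(default_factory=list)

def goal_to_scenario(goal: Goal) -> list[str]:
    """Depth-first refinement: leaf goals become scenario steps."""
    if not goal.subgoals:
        return [goal.name]
    return [step for g in goal.subgoals for step in goal_to_scenario(g)]

def scenario_to_process(steps: list[str]) -> list[tuple[str, str]]:
    """A linear business process: each step flows to its successor."""
    return list(zip(steps, steps[1:]))

trip = Goal("book trip", [Goal("select itinerary"),
                          Goal("arrange travel", [Goal("book flight"),
                                                  Goal("book hotel")]),
                          Goal("confirm payment")])
print(scenario_to_process(goal_to_scenario(trip)))
```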

20.
Context: As the use of Domain-Specific Modeling Languages (DSMLs) continues to gain popularity, we have developed new ways to execute DSML models. The most popular approach is to execute code resulting from a model-to-code transformation. An alternative approach is to execute these models directly using a semantics-rich execution engine, a Domain-Specific Virtual Machine (DSVM). The DSVM includes a middleware layer responsible for the delivery of services in a given domain.
Objective: We investigate an approach that performs the dynamic combination of constructs in the middleware layer of DSVMs to support the delivery of domain-specific services. This middleware should provide: (a) a model of execution (MoE) that dynamically integrates decoupled domain-specific knowledge (DSK) for service delivery, (b) runtime adaptability based on context and available resources, and (c) the same level of operational assurance as any DSVM middleware.
Method: Our approach involves (1) defining a framework that supports the dynamic combination of MoE and DSK and (2) demonstrating the applicability of our framework in the DSVM middleware for user-centric communication. We measure the overhead of our approach and provide a cost-benefit analysis factoring in its runtime adaptability, using appropriate experimentation.
Results: Our experiments show that combining the DSK and MoE for a DSVM middleware allows us to realize efficient specialization while maintaining the required operability. We also show that the overhead introduced by adaptation is not necessarily deleterious to overall performance in a domain, as it may result in more efficient operation selection.
Conclusion: The approach defined for the DSVM middleware allows for greater flexibility in service delivery while reducing the complexity of application development for the user. These benefits are achieved at the expense of increased execution times; however, this increase may be negligible depending on the domain.
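A small sketch of the MoE/DSK decoupling: a generic model of execution interprets a script by looking up domain-specific knowledge handlers at runtime, so the same engine can be specialized per domain (and, in principle, re-bound as context changes). The registry, operation names, and the communication example are invented for illustration.

```python
# Generic MoE dispatching to dynamically bound DSK handlers (sketch).
DSK_REGISTRY: dict = {}

def register_dsk(domain: str, operations: dict) -> None:
    """Install decoupled domain-specific knowledge for one domain."""
    DSK_REGISTRY[domain] = operations

def execute(domain: str, script: list) -> None:
    """Generic MoE: interpret a script by dispatching to the domain's DSK."""
    ops = DSK_REGISTRY[domain]                  # bound dynamically, per call
    for op, *args in script:
        ops[op](*args)

register_dsk("communication", {
    "connect": lambda a, b: print(f"connect {a} <-> {b}"),
    "send":    lambda a, b, medium: print(f"{a} sends {medium} to {b}"),
})
execute("communication", [("connect", "alice", "bob"),
                          ("send", "alice", "bob", "video")])
```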
