Found 20 similar documents (search time: 31 ms)
1.
P. Benvenuti, D. Vivona, M. Divari 《Soft Computing - A Fusion of Foundations, Methodologies and Applications》2000,4(4):210-213
A divergence between fuzzy sets is introduced axiomatically, extending the symmetric difference between crisp sets.
Any fuzzy measure of the divergence between two fuzzy sets weighs their “distance”. The distance between a fuzzy set and the
family of crisp sets is a fuzziness measure.
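The paper's axiomatic construction is not reproduced in the abstract, but the crisp-to-fuzzy extension it describes can be illustrated with the standard min/max connectives, under which the fuzzy symmetric difference has membership max(min(a, 1 − b), min(b, 1 − a)), and a simple divergence sums these memberships over a finite universe. A minimal Python sketch (the summation form and the function names are our own assumptions, not the paper's definitions):

```python
def sym_diff(a, b):
    """Membership of the fuzzy symmetric difference A Δ B (min/max connectives)."""
    return max(min(a, 1 - b), min(b, 1 - a))

def divergence(A, B):
    """A simple divergence: sum of symmetric-difference memberships over the universe."""
    return sum(sym_diff(A[x], B[x]) for x in A)

def fuzziness(A):
    """Divergence from the nearest crisp set; maximal at membership 0.5."""
    return sum(min(A[x], 1 - A[x]) for x in A)

# On crisp sets the divergence reduces to the cardinality of the symmetric difference.
A = {"x": 1.0, "y": 0.0, "z": 1.0}
B = {"x": 1.0, "y": 1.0, "z": 0.0}
print(divergence(A, B))  # 2.0: the sets differ exactly on y and z
```

For a crisp set the fuzziness is 0; for the maximally fuzzy set with all memberships 0.5, it equals half the size of the universe.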
2.
Generalized fuzzy bi-ideals of semigroups (total citations: 1; self-citations: 0; others: 1)
Osman Kazancı, Sultan Yamak 《Soft Computing - A Fusion of Foundations, Methodologies and Applications》2008,12(11):1119-1124
After the introduction of fuzzy sets by Zadeh, there have been a number of generalizations of this fundamental concept. The
notion of (∈, ∈ ∨q)-fuzzy subgroups introduced by Bhakat is one among them. In this paper, using the relations between fuzzy points and fuzzy
sets, the concept of a fuzzy bi-ideal with thresholds is introduced and some interesting properties are investigated. The
acceptable nontrivial concepts obtained in this manner are the (∈, ∈ ∨q)-fuzzy bi-ideals and -fuzzy bi-ideals, which are extensions of the concept of a fuzzy bi-ideal in a semigroup.
3.
A linguistic information feed-back-based dynamical fuzzy system (LIFBDFS) with learning algorithm (total citations: 1; self-citations: 1; others: 0)
In this study, the linguistic information feed-back-based dynamical fuzzy system (LIFBDFS) proposed earlier by the authors
is first introduced. The principles of α-level sets and backpropagation through time approach are also briefly discussed.
We then employ these two methods to derive an explicit learning algorithm for the feedback parameters of the LIFBDFS. With
this training algorithm, the LIFBDFS becomes a viable candidate for solving real-time modeling and prediction problems.
4.
Do complexity classes have many-one complete sets if and only if they have Turing-complete sets? We prove that there is a
relativized world in which a relatively natural complexity class, namely a downward closure of NP, has Turing-complete sets but no many-one complete sets. In fact, we show that in the same relativized world this class
has 2-truth-table complete sets but lacks 1-truth-table complete sets. As part of the groundwork for our result, we prove
that this class has many equivalent forms having to do with ordered and parallel access to NP and NP ∩ coNP.
Received November 1996; in final form July 1997.
5.
Julia sets are considered among the most attractive fractals and have a wide range of applications in science and engineering.
The strong physical meaning of Mandelbrot and Julia sets is broadly accepted and nicely connected by Christian Beck (Physica
D 125(3–4):171–182, 1999) to the complex logistic maps, in the former case, and to the inverse complex logistic map, in the latter. Argyris et al.
(Chaos Solitons Fractals 11(13):2067–2073, 2000) have studied the effect of noise on Julia sets and concluded that Julia sets are stable for noises of low strength, and
a small increment in the strength of noise may cause considerable deterioration in the configuration of the Julia sets. It
is well-known that the method of function iterates plays a crucial role in discrete dynamics utilizing the techniques of fractal
theory. However, recently Rani and Kumar (J. Korea Soc. Math. Edu. Ser. D: Res. Math. Edu. 8(4):261–277, 2004) introduced superior iterations as a generalization of function iterations in the study of Julia sets and studied superior
Julia sets. This technique is further utilized to study effectively new Mandelbrot sets and related properties (see, for instance,
Negi and Rani, Chaos Solitons Fractals 36(2):237–245, 2008; 36(4):1089–1096, 2008, Rani and Kumar, J. Korea Soc. Math. Edu. Ser. D: Res. Math. Edu. 8(4):279–291, 2004). The intent of this paper is to study certain effects of noise on superior Julia sets. We find that the superior Julia sets
are markedly more stable under noise of higher strength than the classical Julia sets. Finally, we make a humble attempt
to discuss some applications of superior orbits in discrete dynamics and of superior Julia sets in particle dynamics.
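For readers unfamiliar with superior iterations: a Mann (superior) orbit replaces the Picard iterate z_{n+1} = f(z_n) with the convex combination z_{n+1} = (1 − β)z_n + β f(z_n), 0 < β ≤ 1, with β = 1 recovering the classical iteration. A minimal escape-time sketch for f(z) = z² + c with additive Gaussian noise (the parameter names and the noise model are illustrative, not taken from the cited papers):

```python
import random

def escape_time(z, c, beta=0.5, sigma=0.0, max_iter=100, bound=4.0, rng=None):
    """Superior (Mann) iteration of f(z) = z^2 + c with optional additive noise.

    beta = 1 recovers the classical Picard iteration; sigma is the noise
    strength (illustrative names, not the papers' notation).
    """
    rng = rng or random.Random(0)
    for n in range(max_iter):
        if abs(z) > bound:
            return n  # escaped: z lies outside the filled Julia set
        noise = complex(rng.gauss(0, sigma), rng.gauss(0, sigma))
        z = (1 - beta) * z + beta * (z * z + c) + noise
    return max_iter  # treated as bounded

# z = 0 stays bounded for c = 0 under both the classical and the superior orbit.
print(escape_time(0j, 0j, beta=1.0), escape_time(0j, 0j, beta=0.5))  # 100 100
```

Colouring each point of a grid by its escape time, with β < 1 and σ > 0, reproduces the kind of noisy superior Julia set experiment the abstract describes.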
6.
Multidisciplinary collaborative optimization using fuzzy satisfaction degree and fuzzy sufficiency degree model (total citations: 1; self-citations: 0; others: 1)
Hong-Zhong Huang, Ye Tao, Yu Liu 《Soft Computing - A Fusion of Foundations, Methodologies and Applications》2008,12(10):995-1005
Collaborative optimization (CO) is a bi-level multidisciplinary design optimization (MDO) method for large-scale and distributed-analysis
engineering design problems. Its architecture consists of optimization at both the system-level and autonomous discipline
levels. The system-level optimization maintains the compatibility among coupled subsystems. In many engineering design applications,
there are uncertainties associated with optimization models. These will cause the design objective and constraints, such as
weight, price and volume, and their boundaries, to be fuzzy sets. In addition, the multiple design objectives are generally
not independent of one another, which complicates decision-making in the presence of conflicting objectives.
The above factors considerably increase the modeling and computational difficulties in CO. To relieve the aforementioned difficulties,
this paper proposes a new method that uses a fuzzy satisfaction degree model and a fuzzy sufficiency degree model in optimization
at both the system level and the discipline level. In addition, two fuzzy multi-objective collaborative optimization strategies
(Max–Min and α-cut method) are introduced. The former constructs the sufficiency degree for constraints and the satisfaction
degree for design objectives in each discipline respectively, and adopts the Weighted Max–Min method to maximize an aggregation
of them. The acceptable level is set up as the shared design variable between disciplines, and is maximized at the system
level. In the second strategy, the decision-making space of the constraints is distributed in each discipline independently
through the allocation of the levels of α. At the system level, the overall satisfaction degree for all disciplines is finally
maximized. The illustrative mathematical example and engineering design problem are provided to demonstrate the feasibility
of the proposed methods. 相似文献
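The Weighted Max–Min step can be made concrete in a toy single-variable setting: build linear satisfaction degrees for each fuzzy objective, then pick the design that maximizes the smallest weighted degree. A sketch under invented membership shapes and bounds (none of the numbers below come from the paper):

```python
def linear_membership(value, worst, best):
    """Satisfaction/sufficiency degree: 0 at `worst`, 1 at `best`, linear between."""
    if best == worst:
        return 1.0
    t = (value - worst) / (best - worst)
    return max(0.0, min(1.0, t))

def weighted_max_min(candidates, degrees, weights):
    """Pick the candidate maximizing min_i(mu_i(x) / w_i), the Weighted Max-Min rule."""
    def score(x):
        return min(mu(x) / w for mu, w in zip(degrees, weights))
    return max(candidates, key=score)

# Hypothetical design: minimize weight (x) while keeping volume (10 / x) small.
weight_sat = lambda x: linear_membership(x, worst=10, best=0)       # lighter is better
volume_sat = lambda x: linear_membership(10 / x, worst=10, best=0)  # smaller is better
xs = [i / 10 for i in range(10, 100)]
best = weighted_max_min(xs, [weight_sat, volume_sat], [1.0, 1.0])
print(best)  # 3.2, near the balance point x^2 = 10
```

The maximizer balances the two degrees against each other, which is the defining property of the Max–Min compromise.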
7.
F. Blanchet-Sadri, N. C. Brownstein, Andy Kalcic, Justin Palumbo, T. Weyand 《Theory of Computing Systems》2009,45(2):381-406
The notion of an unavoidable set of words appears frequently in the fields of mathematics and theoretical computer science,
in particular with its connection to the study of combinatorics on words. The theory of unavoidable sets has seen extensive
study over the past twenty years. In this paper we extend the definition of unavoidable sets of words to unavoidable sets
of partial words. Partial words, or finite sequences that may contain a number of “do not know” symbols or “holes,” appear
naturally in several areas of current interest such as molecular biology, data communication, and DNA computing. We demonstrate
the utility of the notion of unavoidability of sets of partial words by making use of it to identify several new classes of
unavoidable sets of full words. Along the way we begin work on classifying the unavoidable sets of partial words of small
cardinality. We pose a conjecture, and show that affirmative proof of this conjecture gives a sufficient condition for classifying
all the unavoidable sets of partial words of size two. We give a result which makes the conjecture easy to verify for a significant
number of cases. We characterize many forms of unavoidable sets of partial words of size three over a binary alphabet, and
completely characterize such sets over a ternary alphabet. Finally, we extend our results to unavoidable sets of partial words
of size k over a k-letter alphabet.
This material is based upon work supported by the National Science Foundation under Grant No. DMS-0452020. Part of this paper
was presented at DLT’07 [4]. We thank the referees as well as Robert Mercaş and Geoffrey Scott for very valuable comments and suggestions. World Wide
Web server interfaces have been established at and for automated use of the programs.
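A finite set over a finite alphabet is avoidable exactly when some two-sided infinite word (equivalently, some periodic word) avoids it, which reduces unavoidability to cycle detection in a de Bruijn-style graph. A brute-force sketch for small instances, writing '?' for the hole symbol (our notation; this folklore test is not the paper's algorithm):

```python
from itertools import product

def compatible(w, p):
    """Full word w is compatible with partial word p ('?' marks a hole)."""
    return all(b == '?' or a == b for a, b in zip(w, p))

def hits(w, X):
    """Does w contain a factor compatible with some partial word in X?"""
    return any(compatible(w[i:i + len(p)], p)
               for p in X for i in range(len(w) - len(p) + 1))

def unavoidable(X, alphabet):
    """X is unavoidable iff no bi-infinite word avoids it, i.e. the graph on
    avoiding windows of length m-1 (edges extend by one letter) has no cycle."""
    m = max(len(p) for p in X)
    nodes = {''.join(t) for t in product(alphabet, repeat=m - 1)
             if not hits(''.join(t), X)}
    edges = {v: [(v + a)[1:] for a in alphabet if not hits(v + a, X)]
             for v in nodes}
    color = {v: 0 for v in nodes}  # 0 unvisited, 1 on stack, 2 finished
    def cycle_from(v):
        stack = [(v, iter(edges[v]))]
        color[v] = 1
        while stack:
            u, it = stack[-1]
            for w in it:
                if color[w] == 1:
                    return True  # back edge: a cycle, hence an avoiding word
                if color[w] == 0:
                    color[w] = 1
                    stack.append((w, iter(edges[w])))
                    break
            else:
                color[u] = 2
                stack.pop()
        return False
    return not any(color[v] == 0 and cycle_from(v) for v in nodes)
```

Every factor of a word avoiding X also avoids X, so the suffix of each extended window is again a vertex and the graph is well defined; a cycle corresponds to a periodic bi-infinite word avoiding X.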
8.
Correcting design decay in source code is not a trivial task. Diagnosing and subsequently correcting inconsistencies between a software system’s code and its design rules (e.g., database queries are only allowed in the persistence layer) and coding conventions can be complex, time-consuming and error-prone. Providing support for this process is therefore highly desirable, but of a far greater complexity than suggesting basic corrective actions for simplistic implementation problems (like the “declare a local variable for non-declared variable” suggested by Eclipse).

We present an abductive reasoning approach to inconsistency correction that consists of (1) a means for developers to document and verify a system’s design and coding rules, (2) an abductive logic reasoner that hypothesizes possible causes of inconsistencies between the system’s code and the documented rules and (3) a library of corrective actions for each hypothesized cause. This work builds on our previous work, where we expressed design rules as equality relationships between sets of source code artifacts (e.g., the set of methods in the persistence layer is the same as the set of methods that query the database). In this paper, we generalize our approach to design rules expressed as user-defined binary relationships between two sets of source code artifacts (e.g., every state-changing method should invoke a persistence method).

We illustrate our approach on the design of IntensiVE, a tool suite that enables defining sets of source code artifacts intensionally (by means of logic queries) and verifying relationships between such sets.
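The flavour of such documented rules can be illustrated with a toy checker for the two rule forms the abstract mentions: set equality and a subset-style binary relationship. The artifact names and rule vocabulary below are invented for illustration and are not IntensiVE's API:

```python
def check_rule(lhs, rhs, relation="equal"):
    """Verify a design rule stated as a relationship between two sets of
    source-code artifacts; report the artifacts that break it."""
    if relation == "equal":
        return {"missing_from_lhs": rhs - lhs, "missing_from_rhs": lhs - rhs}
    if relation == "subset":  # every lhs artifact must also appear in rhs
        return {"violations": lhs - rhs}
    raise ValueError(relation)

# Hypothetical rule: persistence-layer methods == database-querying methods.
persistence_methods = {"save", "load", "queryUsers"}
db_querying_methods = {"save", "load", "queryUsers", "reportHack"}
report = check_rule(persistence_methods, db_querying_methods)
# 'reportHack' queries the database outside the persistence layer
print(report)
```

An abductive reasoner would go one step further and, for each reported artifact, hypothesize a cause (misplaced method, stale rule, missing annotation) and propose the matching corrective action.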
9.
Jie Zhou, Duoqian Miao 《Soft Computing - A Fusion of Foundations, Methodologies and Applications》2011,15(8):1643-1656
The differences of attribute reduction and attribute core between Pawlak’s rough set model (RSM) and variable precision rough
set model (VPRSM) are analyzed in detail. According to the interval properties of precision parameter β with respect to the
quality of classification, the definition of attribute reduction is extended from a specific β value to a specific β interval
in order to overcome the limitations of the traditional reduct definition in VPRSM. The concept of a β-interval core is put forward,
which enriches the methodology of VPRSM. With the proposed ordered discernibility matrix and relevant interval characteristic
sets, a heuristic algorithm can be constructed to get β-interval reducts. Furthermore, a novel method, with which the optimal
interval of precision parameter can be determined objectively, is introduced based on shadowed sets and an evaluation function
is also given for selecting the final optimal β-interval reduct. The proposed notions promote the development
of VPRSM in both theory and practice.
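The interval behaviour of β can be seen directly in the basic VPRSM definitions: the β-lower approximation, and hence the quality of classification γ_β, changes only when β crosses one of finitely many inclusion degrees, so γ_β is constant on β-intervals. A small sketch (the example universe and partition are invented):

```python
from fractions import Fraction

def beta_lower(partition, X, beta):
    """VPRSM beta-lower approximation: union of the classes E whose inclusion
    degree |E & X| / |E| reaches the precision threshold beta (0.5 < beta <= 1)."""
    result = set()
    for E in partition:
        if Fraction(len(E & X), len(E)) >= beta:
            result |= E
    return result

def quality(partition, X, beta, universe):
    """Quality of classification gamma_beta = |beta-lower(X)| / |U|."""
    return Fraction(len(beta_lower(partition, X, beta)), len(universe))

U = set(range(1, 7))
partition = [{1, 2, 3}, {4, 5}, {6}]  # equivalence classes; inclusion degrees 2/3, 1/2, 1
X = {1, 2, 4, 6}                      # concept to approximate
# gamma is piecewise constant: every beta in (1/2, 2/3] gives the same value.
print(quality(partition, X, Fraction(11, 20), U),
      quality(partition, X, Fraction(3, 5), U))  # 2/3 2/3
```

A β-interval reduct keeps the quality of classification constant over a whole β-interval rather than at a single β value, which is the extension the abstract describes.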
10.
John A. Doucette, Andrew R. McIntyre, Peter Lichodzijewski, Malcolm I. Heywood 《Genetic Programming and Evolvable Machines》2012,13(1):71-101
Classification under large attribute spaces represents a dual learning problem in which attribute subspaces need to be identified
at the same time as the classifier design is established. Embedded as opposed to filter or wrapper methodologies address both
tasks simultaneously. The motivation for this work stems from the observation that team based approaches to Genetic Programming
(GP) have the potential to design multiple classifiers per class—each with a potentially unique attribute subspace—without
recourse to filter or wrapper style preprocessing steps. Specifically, competitive coevolution provides the basis for scaling
the algorithm to data sets with large instance counts; whereas cooperative coevolution provides a framework for problem decomposition
under a bid-based model for establishing program context. Symbiosis is used to separate the tasks of team/ensemble composition
from the design of specific team members. Team composition is specified in terms of a combinatorial search performed by a
Genetic Algorithm (GA); whereas the properties of individual team members and therefore subspace identification is established
under an independent GP population. Teaming implies that the members of the resulting ensemble of classifiers should have
explicitly non-overlapping behaviour. Performance evaluation is conducted over data sets taken from the UCI repository with 649–102,660 attributes and
2–10 classes. The resulting teams identify attribute spaces 1–4 orders of magnitude smaller than under the original data set.
Moreover, team members generally consist of fewer than 10 instructions; thus, small attribute subspaces are not being traded
for opaque models.
11.
Design patterns have been introduced as a medium to capture and disseminate the best design knowledge and practices. In the
field of human–computer interaction, practitioners and researchers have explored different avenues to use patterns and pattern
languages as design tools. This paper surveys these avenues—from individual pattern use for solving a specific design problem,
to pattern-oriented design, which guides designers in building a conceptual design by leveraging relationships between patterns.
One of our underlying goals is to investigate how patterns can be used, not only to foster the reuse of proven and valid design
solutions, but also as a central artefact in the process of deriving a design from user experiences and requirements. We will
present our investigations on pattern-based design, and discuss how user experiences can be incorporated in the pattern selection
process through the use of user variables, pattern attributes and associated relationships.
12.
A Short-Term Forecasting Algorithm for Network Traffic Based on Chaos Theory and SVM (total citations: 1; self-citations: 0; others: 1)
Xingwei Liu, Xuming Fang, Zhenhua Qin, Chun Ye, Miao Xie 《Journal of Network and Systems Management》2011,19(4):427-447
Recently, the forecasting technologies for network traffic have played a significant role in network management, congestion
control and network security. Forecasting algorithms have also been investigated for decades along with the development of
Time Series Analysis (TSA). Chaotic Time Series Analysis (CTSA) models and forecasts time series using Chaos
Theory. Since Support Vector Machines (SVM) are among the prevailing intelligent forecasting algorithms, it is worthwhile
to integrate CTSA with SVM. In this paper, after the vulnerabilities of the Local Support Vector Machine (LSVM) in forecasting modeling are
analyzed, Dynamic Time Warping (DTW) and the “Dynamic K” strategy are introduced, and a short-term network traffic
forecasting algorithm, LSVM-DTW-K, based on Chaos Theory and SVM is presented. Finally, two sets of network traffic datasets,
collected from wired and wireless campus networks respectively, are studied in our experiments.
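The DTW component can be sketched independently of the LSVM pipeline. Below is the standard O(nm) dynamic-programming implementation of the Dynamic Time Warping distance with an absolute-difference local cost (a textbook version; the full LSVM-DTW-K algorithm is not reproduced here):

```python
def dtw(s, t):
    """Dynamic Time Warping distance between two numeric sequences.

    D[i][j] = local cost + min of the three predecessor cells, so the warping
    path may stretch or compress either sequence to align similar shapes.
    """
    inf = float('inf')
    n, m = len(s), len(t)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# Warping absorbs the repeated sample, so the distance is 0 despite unequal lengths.
print(dtw([1, 2, 3], [1, 2, 2, 3]))  # 0.0
```

In a neighbor-selection setting, DTW replaces the Euclidean distance when ranking candidate training windows against the current one, which tolerates local time shifts in the traffic series.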
13.
Christian Glaßer, Katrin Herr, Christian Reitwießner, Stephen Travers, Matthias Waldherr 《Theory of Computing Systems》2010,46(1):80-103
We investigate the complexity of equivalence problems for {∪,∩,−,+,×}-circuits computing sets of natural numbers. These problems were first introduced by Stockmeyer and Meyer (1973). We
continue this line of research and give a systematic characterization of the complexity of equivalence problems over sets
of natural numbers. Our work shows that equivalence problems capture a wide range of complexity classes like NL, C=L, P, Π₂P, PSPACE, NEXP, and beyond. McKenzie and Wagner (2003) studied related membership problems for circuits over sets of natural numbers. Our results also have consequences for these membership problems: we provide an
improved upper bound for the case of {∪,∩,−,+,×}-circuits.
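Membership and equivalence questions for such circuits can be made concrete by evaluating a circuit bottom-up. Since complement produces infinite sets, the sketch below truncates the universe to {0, …, N}; the truncation and the tuple encoding are our simplifications, not the papers' formalism:

```python
def ev(node, bound):
    """Evaluate a {u, i, c, +, *}-circuit node over the truncated universe
    {0, ..., bound}. Nodes: ('leaf', S), ('u'|'i'|'+'|'*', a, b), ('c', a);
    '+' and '*' are element-wise: A + B = {x + y : x in A, y in B}."""
    U = set(range(bound + 1))
    op = node[0]
    if op == 'leaf':
        return node[1] & U
    if op == 'c':                      # complement relative to the universe
        return U - ev(node[1], bound)
    A, B = ev(node[1], bound), ev(node[2], bound)
    if op == 'u':
        return A | B
    if op == 'i':
        return A & B
    if op == '+':
        return {x + y for x in A for y in B} & U
    if op == '*':
        return {x * y for x in A for y in B} & U
    raise ValueError(op)

# Two circuits agree up to the bound iff they evaluate to the same set.
print(ev(('+', ('leaf', {1}), ('leaf', {1})), 10))  # {2}
```

The hardness results quoted in the abstract say, in effect, that no evaluation strategy of this naive kind can stay efficient once complement and arithmetic are combined over all of ℕ.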
14.
Sarah Kettley 《AI & Society》2007,22(1):5-14
This paper treats contemporary craft as an under-researched resource for wearable computing, and presents some of the alternative
values and experiences that contemporary craft may be able to contribute to the design of personal technological products.
Through design and implementation of a suite of wirelessly networked ‘Speckled’ jewellery, it considers contemporary craft
for its potential as a critical design resource with especial relevance to wearable computing and the broad development of
this paradigm into the everyday. ‘Critical design’ is given a working definition for the purposes of the argument, and a friendship
group of five women of retirement age is introduced as the user group for this research. Current practice in the contemporary
craft genre of jewellery is analysed for its potential as a resource for a critical approach to wearable computing, and based
on a set of semi-structured interviews with contemporary jewellery practitioners, the paper presents a set of propositions
for a critical craft approach to wearables design.
15.
Miklós Erdélyi-Szabó, László Kálmán, Agi Kurucz 《Journal of Logic, Language and Information》2008,17(1):1-17
The paper sets out to offer an alternative to the function/argument approach to the most essential aspects of natural language
meanings. That is, we question the assumption that semantic completeness (of, e.g., propositions) or incompleteness (of, e.g.,
predicates) exactly replicate the corresponding grammatical concepts (of, e.g., sentences and verbs, respectively). We argue
that even if one gives up this assumption, it is still possible to keep the compositionality of the semantic interpretation
of simple predicate/argument structures. In our opinion, compositionality presupposes that we are able to compare arbitrary
meanings in terms of information content. This is why our proposal relies on an ‘intrinsically’ type-free algebraic semantic
theory. The basic entities in our models are neither individuals, nor eventualities, nor their properties, but ‘pieces of
evidence’ for believing in the ‘truth’ or ‘existence’ or ‘identity’ of any kind of phenomenon. Our formal language contains
a single binary non-associative constructor used for creating structured complex terms representing arbitrary phenomena. We
give a finite Hilbert-style axiomatisation and a decision algorithm for the entailment problem of the suggested system.
16.
The physical phases of microsystem design are concerned with generating all data needed to fabricate microstructures. As
lithography-based technologies are used to fabricate MEMS, this includes the design of two-dimensional mask layouts as well
as the design of process step sequences and parameters which determine the object extensions in the third dimension. LIDO
is a MEMS physical design system that supports this concurrent design strategy by providing tools to easily configure appropriate
process sequences, to derive consistent sets of geometric layout design rules from them and to use these design rules to verify
mask layouts.
Received: 11 March 1996 / Accepted: 1 August 1996
17.
Mike Fraser, Jon Hindmarsh, Katie Best, Christian Heath, Greg Biegel, Chris Greenhalgh, Stuart Reeves 《Computer Supported Cooperative Work (CSCW)》2006,15(4):257-279
The design of distributed systems to support collaboration among groups of scientists raises new networking challenges that grid middleware developers are addressing. This field of development work, ‘e-Science’, increasingly recognises the critical need to understand the ordinary day-to-day work of doing research in order to inform design. We have investigated one particular area of collaborative social scientific work – the analysis of video data. Based on interviews and observational studies, we discuss current practices of social scientific work with digital video in three areas: preparation for collaboration; control of data and application; and annotation configurations and techniques. For each, we describe how these requirements feature in our design of a distributed video analysis system as part of the MiMeG project: our security policy and distribution; the design of the control system; and providing freeform annotation over data. Finally, we review our design in light of initial use of the software between project partners, and discuss how we might transform the spatial configuration of the system to support annotation behaviour.
18.
Tim B. Kaiser 《Annals of Mathematics and Artificial Intelligence》2010,59(2):169-185
We study the connection between certain many-valued contexts and general geometric structures. The known one-to-one correspondence
between attribute-complete many-valued contexts and complete affine ordered sets is used to extend the investigation to π-lattices, class geometries, and lattices with classification systems. π-lattices are identified as a subclass of complete affine ordered sets, which exhibit an intimate relation to concept lattices
closely tied to the corresponding context. Class geometries can be related to complete affine ordered sets using residuated
mappings and the notion of a weak parallelism. Lattices with specific sets of classification systems allow for some sort of
“reverse conceptual scaling”.
19.
A high-winding-density micro coil technology is reported here that has been used to reduce the power requirements of several
classes of micro actuators [Christenson (1995)]. Actuator design sets the required magnetic field and power. At low frequencies,
coil design translates these requirements into a total winding conductor cross-sectional area. The winding fill fraction of
the coil technology sets the final coil size. Using the design equations, coil technologies may be compared.
Received: 25 August 1997 / Accepted: 20 October 1997 相似文献
20.
Stefan Szeider 《Journal of Automated Reasoning》2005,35(1-3):73-88
We study the parameterized complexity of detecting small backdoor sets for instances of the propositional satisfiability problem (SAT). The notion of backdoor sets has been recently introduced
by Williams, Gomes, and Selman for explaining the ‘heavy-tailed’ behavior of backtracking algorithms. If a small backdoor
set is found, then the instance can be solved efficiently by the propagation and simplification mechanisms of a SAT solver.
Empirical studies indicate that structured SAT instances coming from practical applications have small backdoor sets. We study
the worst-case complexity of detecting backdoor sets with respect to the simplification and propagation mechanisms of the
classic Davis–Logemann–Loveland (DLL) procedure. We show that the detection of backdoor sets of size bounded by a fixed integer
k is of high parameterized complexity. In particular, we determine that this detection problem (and some of its variants) is
complete for the parameterized complexity class W[P]. We achieve this result by means of a generalization of a reduction due
to Abrahamson, Downey, and Fellows.
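The object of study can be made concrete with a naive detector: a set B of variables is a strong backdoor with respect to unit propagation if, for every truth assignment to B, propagation alone decides the formula. A brute-force sketch using DIMACS-style integer literals (the subsolver here is plain unit propagation, a simplification of the full DLL simplification machinery the paper considers):

```python
from itertools import combinations, product

def unit_propagate(clauses, assign):
    """Apply an assignment and propagate unit clauses until fixpoint.
    Returns 'sat', 'unsat', or 'unknown' (clauses: lists of nonzero ints)."""
    assign = dict(assign)
    while True:
        unit = None
        simplified = []
        for cl in clauses:
            lits, satisfied = [], False
            for lit in cl:
                v = assign.get(abs(lit))
                if v is None:
                    lits.append(lit)
                elif (lit > 0) == v:
                    satisfied = True
                    break
            if satisfied:
                continue
            if not lits:
                return 'unsat'            # empty clause derived
            if len(lits) == 1 and unit is None:
                unit = lits[0]
            simplified.append(lits)
        if not simplified:
            return 'sat'                  # every clause satisfied
        if unit is None:
            return 'unknown'              # subsolver gives up
        assign[abs(unit)] = unit > 0
        clauses = simplified

def is_strong_backdoor(clauses, B):
    """B is a strong UP-backdoor if every assignment to B lets UP decide."""
    return all(unit_propagate(clauses, dict(zip(B, bits))) != 'unknown'
               for bits in product([False, True], repeat=len(B)))

def smallest_backdoor(clauses, k):
    """Naive search over all subsets of size <= k; the detection problem
    itself is W[P]-complete, so no fixed-parameter shortcut is expected."""
    variables = sorted({abs(l) for cl in clauses for l in cl})
    for size in range(k + 1):
        for B in combinations(variables, size):
            if is_strong_backdoor(clauses, B):
                return list(B)
    return None
```

For the unsatisfiable 2-variable formula (x1 ∨ x2)(x1 ∨ ¬x2)(¬x1 ∨ x2)(¬x1 ∨ ¬x2), branching on x1 alone lets unit propagation refute both branches, so {x1} is a strong backdoor of size 1.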