Volume 4, Number 3, 2007
Special Issue on Theory and Applications of Computational Intelligence (pp. 217-303)
Rough set theory provides a useful mathematical foundation for developing automated computational systems that can help understand and make use of imperfect knowledge. Despite its recency, the theory and its extensions have been widely applied to many problems, including decision analysis, data mining, intelligent control and pattern recognition. This paper presents an outline of the basic concepts of rough sets and their major extensions, covering variable precision, tolerance and fuzzy rough sets. It also shows the diversity of successful applications these theories have entailed, ranging from finance and business, through biology and medicine, to physics, art and meteorology.
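The core construction the abstract refers to can be sketched in a few lines: a set of objects is approximated from below and above by the equivalence classes induced by a set of attributes. The toy information table below is illustrative, not taken from the paper.

```python
# A minimal sketch of Pawlak-style rough set approximations on a toy table.

def partition(universe, attrs, table):
    """Group objects into equivalence classes by their values on attrs."""
    classes = {}
    for obj in universe:
        key = tuple(table[obj][a] for a in attrs)
        classes.setdefault(key, set()).add(obj)
    return list(classes.values())

def approximations(universe, attrs, table, target):
    """Return the lower and upper approximations of the target set."""
    lower, upper = set(), set()
    for cls in partition(universe, attrs, table):
        if cls <= target:   # class lies entirely inside the target concept
            lower |= cls
        if cls & target:    # class overlaps the target concept
            upper |= cls
    return lower, upper

# Toy table: two condition attributes, four objects.
table = {
    1: {"a": 0, "b": 0},
    2: {"a": 0, "b": 0},
    3: {"a": 1, "b": 0},
    4: {"a": 1, "b": 1},
}
target = {1, 3}  # the concept to approximate
lower, upper = approximations(set(table), ["a", "b"], table, target)
```

Objects 1 and 2 are indiscernible under attributes a and b, so neither can be placed in the lower approximation of a concept that contains only one of them; this gap between lower and upper approximations is exactly the "imperfect knowledge" the theory formalizes.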
In this paper, an interactive image enhancement (IIE) technique based on fuzzy relaxation is presented, which allows the user to select different intensity levels for enhancement and to interrupt the enhancement process according to his/her preference. First, based on an analysis of the convergence of a fuzzy relaxation algorithm for image contrast enhancement, an improved version of this algorithm, called FuzzIIE Method 1, is suggested by deriving a relationship between the convergence regions and the parameters of the transformations defined in the algorithm. Then a method called FuzzIIE Method 2 is introduced, which uses a different fuzzy relaxation function and removes the need to re-select parameter values during interactive image enhancement. Experimental results are presented demonstrating the enhancement capabilities of the proposed methods under different conditions.
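The general idea of fuzzy relaxation for contrast enhancement can be illustrated with the classical intensification (INT) operator: gray levels are mapped to memberships in [0, 1], pushed away from the crossover point 0.5 for a user-chosen number of iterations, and mapped back. This is a generic sketch of that scheme, not the specific relaxation functions or parameters of FuzzIIE Method 1 or 2.

```python
# Illustrative fuzzy contrast enhancement via repeated intensification.

def intensify(mu):
    """One INT step: push memberships away from the crossover point 0.5."""
    return 2 * mu * mu if mu <= 0.5 else 1 - 2 * (1 - mu) ** 2

def enhance(pixels, iterations=2):
    """Map gray levels to [0, 1], intensify repeatedly, map back."""
    lo, hi = min(pixels), max(pixels)
    out = []
    for p in pixels:
        mu = (p - lo) / (hi - lo)      # simple linear membership function
        for _ in range(iterations):    # in an IIE setting the user could stop here
            mu = intensify(mu)
        out.append(round(lo + mu * (hi - lo)))
    return out

result = enhance([10, 60, 120, 200, 250])
```

Each iteration darkens pixels below the mid-level and brightens those above it, which is why the number of iterations (and the point at which the user interrupts) directly controls the enhancement strength.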
An improved half-vehicle model is proposed for active suspension control systems; in contrast to existing models, it allows the effect of vehicle speed changes to be explored by introducing the vehicle pitch angle into the state vector. Three control strategies, linear quadratic (LQ) control, improved LQ (ILQ) and wheelbase preview LQ (WLQ), have been implemented in the proposed model. ILQ integrates an additional control parameter into LQ by considering the correlation between acceleration values and their corresponding pitch angles. Simulation results show the effectiveness of the proposed model under the LQ, ILQ and WLQ control strategies.
Dynamic optimization problems are a class of optimization problems that involve changes over time. They pose a serious challenge to traditional optimization methods, as well as to conventional genetic algorithms, since the goal is no longer to search for the optimal solution(s) of a fixed problem but to track the moving optimum over time. Dynamic optimization problems have attracted growing interest from the genetic algorithm community in recent years, and several approaches have been developed to enhance the performance of genetic algorithms in dynamic environments. One approach is to maintain the diversity of the population via random immigrants. This paper proposes a hybrid immigrants scheme that combines the concepts of elitism, dualism and random immigrants for genetic algorithms to address dynamic optimization problems. In this hybrid scheme, the best individual, i.e., the elite, from the previous generation and its dual individual are retrieved as the bases for creating immigrants via a traditional mutation scheme. These elitism-based and dualism-based immigrants, together with some random immigrants, are inserted into the current population, replacing the worst individuals. The three kinds of immigrants aim to address environmental changes of slight, medium and significant degree, respectively, and hence efficiently adapt genetic algorithms to dynamic environments subject to different severities of change. Based on a series of systematically constructed dynamic test problems, experiments are carried out to compare the performance of genetic algorithms using the hybrid immigrants scheme and the traditional random immigrants scheme. Experimental results validate the efficiency of the proposed hybrid immigrants scheme for improving the performance of genetic algorithms in dynamic environments.
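The per-generation immigrant step described above can be sketched for a binary-coded GA as follows. Population size, mutation rate and the number of immigrants of each kind are illustrative values, not those used in the paper's experiments.

```python
# A minimal sketch of the hybrid immigrants step for a binary-coded GA.
import random

def mutate(bits, p):
    """Flip each bit independently with probability p."""
    return [b ^ (random.random() < p) for b in bits]

def dual(bits):
    """The dual individual: bitwise complement of the elite."""
    return [1 - b for b in bits]

def hybrid_immigrants(pop, fitness, n_each=2, p_m=0.1):
    """Replace the worst individuals with elitism-, dualism- and random-based immigrants."""
    pop = sorted(pop, key=fitness, reverse=True)         # best first
    elite, n = pop[0], len(pop[0])
    immigrants = (
        [mutate(elite, p_m) for _ in range(n_each)]          # target slight changes
        + [mutate(dual(elite), p_m) for _ in range(n_each)]  # target significant changes
        + [[random.randint(0, 1) for _ in range(n)]          # target medium changes
           for _ in range(n_each)]
    )
    pop[-len(immigrants):] = immigrants                  # replace the worst individuals
    return pop

random.seed(42)
pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(10)]
new_pop = hybrid_immigrants(pop, fitness=sum)
```

After a slight environmental change the mutated elite tends to remain near the optimum; after a severe change the dual of the elite is closer to it; random immigrants cover the cases in between, which is the intuition behind combining all three.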
Fault diagnosis of liquid rocket propulsion systems (LRPSs) is a very important issue in space launch activities, particularly for manned space missions, since safety and reliability can be significantly enhanced by an efficient fault diagnosis system. Currently, inverse problem-based diagnosis has attracted a great deal of research attention in the fault diagnosis domain. This methodology provides a new strategy for model-based fault diagnosis in monitoring the health of propulsion systems. To solve the inverse problems arising from the fault diagnosis of LRPSs, genetic algorithms (GAs) have been adopted in recent years as the first and most effective choice among available numerical optimization tools. However, a GA has many control parameters that must be chosen in advance, and there is still a lack of sound theoretical tools for analyzing the effects of these parameters on diagnostic performance. In this paper, a comparative study of the influence of GA parameters on diagnostic results is conducted through a series of numerical experiments. The objective of this study is to investigate the contribution of each algorithm parameter to the final diagnostic result and to provide reasonable estimates for choosing GA parameters in the inverse problem-based fault diagnosis of LRPSs. Some constructive remarks are made in conclusion that will be helpful for applying GAs to the fault diagnosis of LRPSs in the future.
This paper presents a Markov random field (MRF) approach to estimating and sampling the probability distribution in populations of solutions. The approach is used to define a class of algorithms under the general heading of distribution estimation using Markov random fields (DEUM). DEUM is a subclass of estimation of distribution algorithms (EDAs) in which the interaction between solution variables is represented as an undirected graph and the joint probability of a solution is factorized as a Gibbs distribution derived from the structure of the graph. The focus of this paper is on describing the three main characteristics of the DEUM framework that distinguish it from traditional EDAs: 1) the use of MRF models, 2) a fitness modeling approach to estimating the parameters of the model and 3) a Monte Carlo approach to sampling from the model.
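The Gibbs factorization the abstract mentions can be made concrete on a toy problem: the joint probability of a binary solution is defined through an energy that sums pairwise potentials over the edges of an undirected graph. The graph and interaction strengths below are arbitrary illustrations; in DEUM proper the parameters are fitted from fitness values rather than fixed by hand.

```python
# A toy Gibbs distribution over 3 binary variables on a chain graph.
import itertools
import math

edges = [(0, 1), (1, 2)]              # undirected chain: x0 - x1 - x2
alpha = {(0, 1): 0.8, (1, 2): -0.5}   # illustrative interaction strengths

def energy(x):
    """U(x): sum of pairwise clique potentials over the graph's edges."""
    return sum(alpha[e] * (2 * x[e[0]] - 1) * (2 * x[e[1]] - 1) for e in edges)

def gibbs(x, temperature=1.0):
    """p(x) = exp(-U(x)/T) / Z, with Z summed over all 2^3 solutions."""
    z = sum(math.exp(-energy(y) / temperature)
            for y in itertools.product([0, 1], repeat=3))
    return math.exp(-energy(x) / temperature) / z

total = sum(gibbs(x) for x in itertools.product([0, 1], repeat=3))
```

With these signs, the configuration (0, 1, 0) has lower energy than (0, 0, 0) and therefore higher probability; at scale the partition function Z is intractable, which is why DEUM resorts to Monte Carlo sampling instead of enumeration.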
This paper summarizes our recent work on combining estimation of distribution algorithms (EDAs) and other techniques for solving hard search and optimization problems: a) guided mutation, an offspring generator that combines ideas from EDAs and genetic algorithms; we have shown that an evolutionary algorithm with guided mutation outperforms the best GA for the maximum clique problem; b) evolutionary algorithms refining a heuristic; we advocate this strategy for solving hard optimization problems with complicated data structures; and c) the combination of two different local search techniques with an EDA for numerical global optimization problems; the basic idea is that not all newly generated points need to be improved by an expensive local search.
Computational time complexity analyses of evolutionary algorithms (EAs) have been performed since the mid-nineties. The first results concerned very simple algorithms, such as the (1+1)-EA, on toy problems. These efforts produced a deeper understanding of how EAs perform on different kinds of fitness landscapes, along with general mathematical tools that may be extended to the analysis of more complicated EAs on more realistic problems. In fact, in recent years it has become possible to analyze the (1+1)-EA on combinatorial optimization problems with practical applications, and more realistic population-based EAs on structured toy problems. This paper presents a survey of the results obtained in the last decade along these two research lines. The most common mathematical techniques are introduced, the basic ideas behind them are discussed and their effective applications are highlighted. Problems that were once open but have since been solved are enumerated, as are those still awaiting a solution. New questions and problems that have arisen in the meantime are also considered.
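The (1+1)-EA the survey centers on is short enough to state in full: keep a single bit string, flip each bit independently with probability 1/n, and accept the offspring if it is no worse. On OneMax (maximize the number of ones) its expected optimization time is Theta(n log n), one of the earliest runtime results in this line of work. The iteration cap and seed below are illustrative.

```python
# The (1+1)-EA on OneMax, the standard setting of early runtime analyses.
import random

def one_plus_one_ea(n, max_iters=100_000, seed=0):
    """Run the (1+1)-EA on OneMax; return the number of iterations used."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    steps = 0
    while sum(x) < n and steps < max_iters:
        # Standard bit mutation: flip each bit with probability 1/n.
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        if sum(y) >= sum(x):   # elitist acceptance: keep offspring if no worse
            x = y
        steps += 1
    return steps

steps = one_plus_one_ea(20)
```

Because the algorithm has a single parent, a single offspring and no crossover, drift and fitness-level arguments apply cleanly, which is what made it the natural starting point for the theory surveyed here.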
Dimensionality reduction and data visualization are useful and important processes in pattern recognition. Many techniques have been developed in recent years. The self-organizing map (SOM) can be an efficient method for this purpose. This paper reviews recent advances in this area and related approaches such as multidimensional scaling (MDS), nonlinear PCA and principal manifolds, as well as the connections of the SOM and its recent variant, the visualization-induced SOM (ViSOM), with these approaches. The SOM is shown to produce a quantized, qualitative scaling, while the ViSOM produces a quantitative or metric scaling and approximates a principal curve/surface. The SOM can also be regarded as a generalized MDS that relates two metric spaces by forming a topological mapping between them. The relationships among various recently proposed techniques, such as ViSOM, Isomap, LLE and eigenmap, are discussed and compared.
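The topological mapping at the heart of the SOM can be sketched with a one-dimensional map fitted to two-dimensional data: each training step moves the best-matching unit and, to a lesser degree, its neighbors on the chain toward the sample. The learning-rate and neighborhood schedules below are simple illustrative choices, not those analyzed in the review.

```python
# A minimal 1-D SOM in plain Python, fitted to points on a line.
import math
import random

def train_som(data, n_units=5, epochs=200, seed=1):
    """Online SOM training with decaying learning rate and neighborhood."""
    rng = random.Random(seed)
    units = [list(rng.choice(data)) for _ in range(n_units)]
    for t in range(epochs):
        lr = 0.5 * (1 - t / epochs)                        # decaying learning rate
        sigma = max(1.0, n_units / 2 * (1 - t / epochs))   # shrinking neighborhood
        x = rng.choice(data)
        # Best-matching unit: closest prototype in the input space.
        bmu = min(range(n_units),
                  key=lambda i: sum((u - v) ** 2 for u, v in zip(units[i], x)))
        for i in range(n_units):
            h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))  # chain neighborhood
            for d in range(len(x)):
                units[i][d] += lr * h * (x[d] - units[i][d])
    return units

data = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)]
units = train_som(data)
```

The prototypes quantize the data (the "quantized, qualitative scaling" above); the ViSOM adds a constraint that regularizes inter-unit distances so the map also preserves metric structure.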
This paper deals with the problem of delay-dependent robust stability for a class of switched Hopfield neural networks with time-varying structured uncertainties and time-varying delay. Some Lyapunov-Krasovskii functionals are constructed, and the linear matrix inequality (LMI) approach and the free weighting matrix method are employed to devise delay-dependent stability criteria that guarantee the existence, uniqueness and global exponential stability of the equilibrium point for all admissible parametric uncertainties. By using the Leibniz-Newton formula, free weighting matrices are employed to express the relationships among the system states, which implies that the new criteria are less conservative than existing ones. Some examples show that the proposed criteria are effective and improve upon previous ones.
This paper presents a new distributed positioning algorithm for unknown nodes in a wireless sensor network. The algorithm is based exclusively on connectivity. First, assuming that the positions of the anchor nodes are already known, a circular belt containing an unknown node is obtained using information about the anchor nodes that are in radio range of the unknown node, based on the geometric relationships and communication constraints among the unknown node and the anchor nodes. Then, the centroid of the circular belt is taken to be the estimated position of the unknown node. Since the algorithm is very simple and since the only communication needed is between the anchor nodes and the unknown node, the communication and computational loads are very small. Furthermore, the algorithm is robust because neither the failure of old unknown nodes nor the addition of new unknown nodes influences the positioning of unknown nodes to be located. A theoretical analysis and simulation results show that the algorithm does not produce any cumulative error and is insensitive to range error, and that a change in the number of sensor nodes does not affect the communication or computational load. These features make this algorithm suitable for all sizes of low-power wireless sensor networks.
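The connectivity-only estimate described above can be approximated numerically: grid points that are within radio range of every anchor that hears the node, and out of range of every anchor that does not, trace out the feasible belt, and its centroid is the position estimate. The grid step, deployment area, radio range and anchor layout below are made-up values for illustration; the paper derives the belt geometrically rather than by sampling.

```python
# Grid-sampled approximation of the connectivity-constrained region and its centroid.

def estimate_position(in_range, out_range, r, step=0.1, size=10.0):
    """Centroid of grid points consistent with the node's connectivity."""
    feasible = []
    x = 0.0
    while x <= size:
        y = 0.0
        while y <= size:
            ok = (all((x - ax) ** 2 + (y - ay) ** 2 <= r * r for ax, ay in in_range)
                  and all((x - ax) ** 2 + (y - ay) ** 2 > r * r for ax, ay in out_range))
            if ok:
                feasible.append((x, y))
            y += step
        x += step
    cx = sum(p[0] for p in feasible) / len(feasible)
    cy = sum(p[1] for p in feasible) / len(feasible)
    return cx, cy

anchors_heard = [(3, 5), (7, 5)]   # anchors in radio range of the unknown node
anchors_silent = [(5, 9)]          # anchor known to be out of range
est = estimate_position(anchors_heard, anchors_silent, r=3.0)
```

Each anchor the node hears shrinks the feasible region (a disk intersection), and each anchor it cannot hear carves away a disk; because only one round of anchor-to-node communication is needed, the per-node cost stays constant as the network grows.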