Large scale evolutionary optimization using cooperative coevolution


Information Sciences 178 (2008) 2985–2999
Contents lists available at ScienceDirect
Information Sciences — journal homepage: www.elsevier.com/locate/ins

Zhenyu Yang a, Ke Tang a,*, Xin Yao a,b

a Nature Inspired Computation and Applications Laboratory (NICAL), Department of Computer Science and Technology, University of Science and Technology of China, Hefei, Anhui 230027, China
b The Centre of Excellence for Research in Computational Intelligence and Applications (CERCIA), School of Computer Science, The University of Birmingham, Edgbaston, Birmingham B15 2TT, UK

Abstract

Evolutionary algorithms (EAs) have been applied with success to many numerical and combinatorial optimization problems in recent years. However, they often lose their effectiveness and advantages when applied to large and complex problems, e.g., those with high dimensions. Although cooperative coevolution has been proposed as a promising framework for tackling high-dimensional optimization problems, only limited studies were reported by decomposing a high-dimensional problem into single variables (dimensions). Such methods of decomposition often failed to solve nonseparable problems, for which tight interactions exist among different decision variables. In this paper, we propose a new cooperative coevolution framework that is capable of optimizing large scale nonseparable problems. A random grouping scheme and adaptive weighting are introduced in problem decomposition and coevolution. Instead of conventional evolutionary algorithms, a novel differential evolution algorithm is adopted. Theoretical analysis is presented in this paper to show why and how the new framework can be effective for optimizing large nonseparable problems. Extensive computational studies are also carried out to evaluate the performance of the newly proposed algorithm on a large number of benchmark functions with up to 1000 dimensions. The results show clearly that our framework and algorithm are effective as well as efficient for large scale evolutionary optimisation problems. We are unaware of any other evolutionary algorithms that can optimize 1000-dimension nonseparable problems as effectively and efficiently as we have done.

© 2008 Elsevier Inc. All rights reserved.

Article history: Received 10 June 2007; Received in revised form 14 November 2007; Accepted 25 February 2008

Keywords: Global optimization; High-dimensional optimization; Evolutionary algorithms; Cooperative coevolution; Differential evolution

1. Introduction

Evolutionary optimization has achieved great success on many numerical and combinatorial optimization problems in recent years [14]. However, most evolutionary algorithms (EAs) still suffer from the "curse of dimensionality", which implies that their performance deteriorates rapidly as the dimensionality of the search space increases [22].
EAs that perform well on low-dimensional problems often fail to find good near-optimal solutions to high-dimensional problems. Some initial efforts in tackling large high-dimensional problems made the unrealistic assumption that the problem is separable. This paper will focus on nonseparable functions, since separable functions can easily be decomposed into small subproblems and are not as high-dimensional as they appear.

q Part of this work was published in the Proceedings of the 2007 IEEE Congress on Evolutionary Computation [25].
* Corresponding author.
E-mail addresses: zhyuyang@mail.ustc.edu.cn (Z. Yang), ketang@ustc.edu.cn (K. Tang), x.yao@cs.bham.ac.uk (X. Yao).
0020-0255/$ - see front matter © 2008 Elsevier Inc. All rights reserved. doi:10.1016/j.ins.2008.02.017

Cooperative coevolution [9,6] has been proposed to solve large and complex problems through problem decomposition. It can be regarded as an automatic approach to implementing the divide-and-conquer strategy. The original framework of cooperative coevolution (CC) [10] for optimisation can be summarised as follows:

Problem decomposition: Decompose a high-dimensional objective vector into smaller subcomponents that can be handled by conventional EAs.
Subcomponent optimization: Evolve each subcomponent separately using a certain EA.
Subcomponent coadaptation: Since interdependencies may exist between subcomponents, coadaptation (i.e., coevolution) is essential in capturing such interdependencies during optimization.

A critical step in the above framework is problem decomposition [11]. An ideal CC algorithm should decompose a large problem into subcomponents where the interdependencies among different subcomponents are minimal. Existing algorithms [6,10,15,16] that originated from the above CC framework used two simple problem decomposition methods, i.e., the one-dimensional based and splitting-in-half strategies. The one-dimensional based strategy decomposes a high-dimensional vector into single variables. In other words, an n-dimensional problem would be decomposed into n one-dimensional problems. While this strategy is very simple and effective for separable functions, it does not consider interdependencies among variables for nonseparable problems. As a result, it is unable to tackle nonseparable problems satisfactorily. The splitting-in-half strategy always decomposes a high-dimensional vector into two equal halves, thus reducing an n-dimensional problem to two (n/2)-dimensional problems.
If n is large, the (n/2)-dimensional problems would still be very large and challenging to solve. Although one might be able to apply the splitting-in-half strategy recursively, no such work has been reported so far. It is unclear when and how interdependencies among different variables can be captured for nonseparable problems.
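The two conventional decomposition strategies amount to two different partitions of the variable indices. A minimal sketch (function names are ours, for illustration only):

```python
def one_dimensional(n):
    """One-dimensional strategy: n subcomponents of one variable each."""
    return [[i] for i in range(n)]

def splitting_in_half(n):
    """Splitting-in-half strategy: two halves of n/2 variables each."""
    half = n // 2
    return [list(range(half)), list(range(half, n))]

# For n = 6: six singletons versus two 3-dimensional halves
singles = one_dimensional(6)      # [[0], [1], [2], [3], [4], [5]]
halves = splitting_in_half(6)     # [[0, 1, 2], [3, 4, 5]]
```

Neither partition adapts to which variables actually interact, which is the gap the grouping strategy of this paper targets.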

In addition to the lack of explicit consideration of variable interdependencies for nonseparable problems, existing CC-based algorithms still use outdated EAs as the subcomponent optimizers, while more effective EAs have been proposed in recent years. The performance of CC optimization algorithms on high-dimensional problems can be enhanced significantly by adopting more advanced EAs.

With the exception of FEPCC [6], the existing CC-based algorithms were applied to problems with only up to 100 dimensions, which are still relatively small for many real-world problems. No work has been reported in the literature for any EAs to tackle nonseparable problems of up to 1000 dimensions.

This paper introduces a new problem decomposition strategy, i.e., the grouping based strategy, in order to better capture the variable interdependencies for nonseparable problems [25]. A simple yet effective decomposition method using this strategy is presented. An adaptive weighting strategy is also introduced in order to further strengthen coadaptation among decomposed subcomponents when they are interdependent. The idea behind our new algorithm is to optimize a group of interacting variables together (as a subcomponent), rather than splitting them into different subcomponents. Ideally, each subcomponent should consist of tightly interacting variables, while the interaction among subcomponents should be weak. Our grouping and adaptive weighting strategies are attempts in this direction.

In addition to the decomposition strategies, this paper also introduces a new differential evolution (DE) variant, SaNSDE, as the base optimizer for subcomponents. The key feature of SaNSDE is the incorporation of self-adaptive neighbourhood search into DE. SaNSDE has been shown to perform very well, without any coevolution, on a large number of benchmark functions. The results in this paper show further that SaNSDE can also help coevolutionary optimization significantly. In fact, our new algorithm is able to tackle nonseparable problems with up to 1000 dimensions.
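SaNSDE itself is presented in Section 4. As background, the classic DE/rand/1/bin operator that DE variants build on can be sketched as follows. This is generic DE, not SaNSDE; the F and CR values are common defaults, not taken from this paper:

```python
import numpy as np

def de_rand_1_bin(pop, i, F=0.5, CR=0.9, rng=None):
    """Build a trial vector for individual i using DE/rand/1 mutation
    followed by binomial crossover."""
    rng = rng or np.random.default_rng()
    NP, n = pop.shape
    # three mutually distinct indices, all different from i
    r1, r2, r3 = rng.choice([j for j in range(NP) if j != i],
                            size=3, replace=False)
    mutant = pop[r1] + F * (pop[r2] - pop[r3])   # DE/rand/1 mutation
    cross = rng.random(n) < CR                   # binomial crossover mask
    cross[rng.integers(n)] = True                # keep at least one mutant gene
    return np.where(cross, mutant, pop[i])

# Example: one trial vector for the first individual of a small population
pop = np.random.default_rng(1).uniform(-5, 5, (8, 4))
trial = de_rand_1_bin(pop, 0, rng=np.random.default_rng(2))
```

In full DE, the trial vector replaces individual i only if its fitness is no worse; SaNSDE additionally self-adapts its search parameters, as described later.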

Although our new algorithms have shown superior performance on many benchmark functions, our decomposition strategies are still rather simple and can be improved further. Extensive analyses and computational studies have been carried out and reported in this paper, which explain why, how and when our new algorithms work well.

The rest of this paper is organized as follows. Section 2 gives a brief review of cooperative coevolution. Section 3 presents our new CC framework and analyzes its effectiveness. Our grouping and adaptive weighting strategies will be described. Theoretical analysis is given to explain why and how such simple strategies can work well. Section 4 introduces the DE with self-adaptive neighbourhood search, SaNSDE. Section 5 describes our computational studies and presents the experimental results on a diverse set of nonseparable functions with up to 1000 dimensions. Finally, Section 6 concludes this paper with some remarks and future research directions.

2. Cooperative coevolution

"As evolutionary algorithms are applied to the solution of increasingly complex systems, explicit notions of modularity must be introduced to provide reasonable opportunities for solutions to evolve in the form of interacting coadapted subcomponents" [9]. Examples of this show up in the need for rule hierarchies in classifier systems and subroutines in genetic programming [10]. CC is a general framework for applying EAs to large and complex problems using a divide-and-conquer strategy. In CC, the objective system (such as a vector) is decomposed into smaller modules and each of them is assigned to a species (i.e., subpopulation). The species are evolved mostly separately, with the only cooperation happening during fitness evaluation.

The original cooperative coevolution (CC) framework for high-dimensional optimization can be summarized as follows [6,10,15,16]:

(1) Decompose an objective vector into m low-dimensional subcomponents.
(2) Set i = 1 to start a new cycle.

(3) Optimize the ith subcomponent with a certain EA for a predefined number of fitness evaluations (FEs).
(4) If i < m then i ← i + 1, and go to Step (3).

(5) Stop if halting criteria are satisfied; otherwise go to Step (2) for the next cycle.

Here a cycle consists of one complete evolution of all subcomponents. The main idea is to decompose a high-dimensional problem into some low-dimensional subcomponents and evolve these subcomponents cooperatively for a predefined number of cycles. The cooperation occurs only during fitness evaluation. The size of each subcomponent should be within the optimization ability of the EA used.
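The cycle above can be sketched in code. This is a minimal illustration under our own naming: a simple Gaussian random search stands in for "a certain EA", and the cooperative fitness evaluation is implemented by evaluating each subcomponent inside a shared full-length context vector:

```python
import numpy as np

def cc_optimize(f, n, groups, cycles, evals_per_group=20,
                rng=np.random.default_rng(0)):
    """Basic CC cycle: optimize each subcomponent in turn while the rest
    of the full solution (the 'context') stays fixed, so every fitness
    evaluation is cooperative."""
    context = rng.uniform(-5, 5, n)              # current best full solution
    for _ in range(cycles):                      # step (5): next cycle
        for idx in groups:                       # steps (2)-(4): round robin
            for _ in range(evals_per_group):     # step (3): FE budget
                trial = context.copy()
                trial[idx] = context[idx] + rng.normal(0.0, 0.5, len(idx))
                if f(trial) < f(context):        # cooperation: full-vector f
                    context = trial
    return context

# e.g. a 10-D sphere function decomposed into five 2-D subcomponents
sphere = lambda x: float(np.sum(x ** 2))
groups = [list(range(i, i + 2)) for i in range(0, 10, 2)]
best = cc_optimize(sphere, 10, groups, cycles=30)
```

Because each trial is always evaluated as a complete n-dimensional vector, improvements in one subcomponent are judged in the context of all the others, which is exactly where the cooperation happens.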

Potter applied CC to concept learning and neural network construction [9]. García-Pedrajas et al. proposed COVNET, a new CC model for evolving artificial neural networks [4]. In the domain of learning multiagent behaviors, CC represents a natural approach to applying traditional evolutionary computation [7,8]. Besides, CC has also been employed in real-world applications; e.g., Cao et al. adopted a CC-based approach in their pedestrian detection system [2]. High-dimensional optimization is another appropriate application of CC. Several CC EAs have been proposed to optimize high-dimensional problems [6,10,15,16]. These algorithms were very successful in optimizing problems with independent variables, but their ability to deal with nonseparable problems was somewhat limited.

As pointed out in Section 1, existing CC algorithms for optimization suffer from three major shortcomings. First, the decomposition strategies did not take into account variable interdependencies for nonseparable problems. Second, the base optimizer (e.g., an EA) was out of date. Third, the CC algorithms could only deal with problems of up to 100 dimensions.

3. The new framework with grouping and adaptive weighting

We propose a new CC framework with a group-based problem decomposition strategy. In addition, the new CC framework is designed to change grouping structures dynamically, which will increase the chance of optimizing interacting variables together (and thus more effectively). Moreover, we maintain coadaptation among different groups for overall control. More details are given below.

The crucial idea of our new group-based framework is to split an objective vector into m s-dimensional subcomponents (assuming n = m × s), and then evolve each of them with an EA. For coadaptation, we propose an adaptive weighting strategy, which applies a weight to each of these subcomponents after each cycle, and evolves the weight vector with a certain optimizer. The key steps of the framework can be summarized as follows:

(1) Set i = 1 to start a new cycle.
(2) Split the n-dimensional objective vector into m subcomponents (s-dimensional) randomly, i.e., n = m × s. Here "randomly" means that each variable has the same chance of being assigned into any of the subcomponents.
(3) Optimize the ith subcomponent with a certain EA for a predefined number of FEs.
(4) If i < m then i ← i + 1, and go to Step (3).
(5) Apply a weight to each of the subcomponents and evolve the resulting weight vector with a certain optimizer.
(6) Stop if halting criteria are satisfied; otherwise go to Step (1) for the next cycle.
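Step (2)'s random grouping can be implemented by shuffling the variable indices once per cycle and cutting the permutation into m equal slices (a minimal sketch; names and the fixed seed are ours):

```python
import random

def random_grouping(n, m, rng=random.Random(42)):
    """Split n variable indices into m random s-dimensional subcomponents
    (n = m * s); every variable is equally likely to land in any group."""
    assert n % m == 0, "assumes n = m * s exactly"
    s = n // m
    idx = list(range(n))
    rng.shuffle(idx)                 # a uniformly random permutation
    return [idx[g * s:(g + 1) * s] for g in range(m)]

# ten 100-dimensional subcomponents of a 1000-dimensional problem
groups = random_grouping(1000, 10)
```

Re-running this at the start of every cycle is what changes the grouping structure dynamically, so two interacting variables that were separated in one cycle may be grouped together in a later one.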

The motivation behind the adaptive weighting strategy is to provide coadaptation over all subcomponents when they are interdependent. After each cycle, we apply a weight to each of these subcomponents. These weights build up a weight vector. For any member of a population, there exists an optimal weight vector. The determination of the optimal weight vector is an optimization problem itself. Fortunately, this optimization problem has a much lower dimensionality than the original one, and thus can usually be handled by existing EAs. Why and how the adaptive weighting strategy works is further demonstrated as follows:

(1) For any individual, x = (x₁, x₂, ..., xₙ), of a population, it is true that

f(x) = f(w_c · x),

where w_c = (1, 1, ..., 1) indicates a constant weight vector.

(2) To obtain a better fitness value, we can apply a weight wᵢ to each component of x, and then optimize the weight vector. So we achieve

min_w f(w · x) ≤ f(w_c · x) = f(x),

where w = (w₁, w₂, ..., wₙ) is the weight vector over the individual x.

(3) However, optimizing the weight vector w is as hard as optimizing the original individual x, since they have the same dimension. But DECC-G splits the n-dimensional vector x into m (m ≪ n) subcomponents, so we can alternatively apply a weight to each of these subcomponents, and thus we only need to optimize a much lower-dimensional vector w̃ = (w̃₁, w̃₂, ..., w̃ₘ):

min_{w̃} f(w′ · x) ≤ min_w f(w · x) ≤ f(x),

where w′ = (w̃₁, ..., w̃₁, w̃₂, ..., w̃₂, ..., w̃ₘ, ..., w̃ₘ), in which each w̃ᵢ is repeated s times; s denotes the dimension of each subcomponent and m denotes the number of subcomponents (assuming n = m × s).

So the adaptive weighting strategy works like a tradeoff between optimizing the high-dimensional weight vector w and no weighting at all. Further, since the variables of a subcomponent are controlled integrally by changing its weight, the process of optimizing the weight vector can also be viewed as a coarse coadaptation over all subcomponents.

3.1. Why and how well EACC-G works

In spite of many discussions of separable and nonseparable functions in the literature, there is no clear definition to distinguish between separable and nonseparable problems. Based on the description in a recent suite of benchmark functions [19], we provide the following definition in this paper.

Definition 1. f(x) is called a separable function if, for all k ∈ {1, ..., n} and for all

x ∈ S, x = (x₁, ..., xₖ, ..., xₙ),
x′ ∈ S, x′ = (x₁, ..., x′ₖ, ..., xₙ),

f(x) < f(x′) implies that, for all y ∈ S, y = (y₁, ..., xₖ, ..., yₙ), and all y′ ∈ S, y′ = (y₁, ..., x′ₖ, ..., yₙ),

f(y) < f(y′).

Otherwise, f(x) is called a nonseparable function. Intuitively, f is separable if the fitness ranking induced by changing a single variable xₖ does not depend on the values of the other variables.
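The construction of the full weight vector w′ from the reduced w̃ is simple to state in code (a minimal sketch; the function name is ours, and the contiguous-group layout mirrors the w′ expression in the paper):

```python
import numpy as np

def weighted_fitness(f, x, w_tilde, s):
    """Evaluate f(w' * x), where w' repeats each of the m subcomponent
    weights s times, i.e. w' = (w1..w1, w2..w2, ..., wm..wm)."""
    w_full = np.repeat(np.asarray(w_tilde, dtype=float), s)  # length m*s = n
    return f(w_full * x)

# With the all-ones weight vector this reduces to plain f(x), so the
# minimum over w_tilde can only match or improve the unweighted fitness.
sphere = lambda v: float(np.sum(v ** 2))
x = np.arange(6.0)                                   # n = 6, m = 3, s = 2
unweighted = weighted_fitness(sphere, x, np.ones(3), 2)   # equals sphere(x)
```

Minimizing over the m-dimensional w̃ instead of an n-dimensional w is exactly the dimensionality reduction that makes the weight-optimization subproblem tractable for existing EAs.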

Theorem 1. The probability of EACC-G assigning two interacting variables xᵢ and xⱼ into a single subcomponent for at least k cycles is

P_k = Σ_{r=k}^{N} C(N, r) (1/m)^r (1 − 1/m)^{N−r},

where N is the total number of cycles and m is the number of subcomponents.

Proof. Firstly, in each separate cycle, the probability of EACC-G assigning xᵢ and xⱼ into the same subcomponent is

p = m · (1/m) · (1/m) = 1/m.   (3)
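The probability in Theorem 1 is easy to evaluate numerically (a small check with parameter values of our own choosing): with m = 10 subcomponents, N = 50 cycles already place two given interacting variables in the same group at least once with probability above 0.99.

```python
from math import comb

def p_at_least_k(N, m, k):
    """P(two fixed interacting variables share a subcomponent in at
    least k of N cycles), with per-cycle probability p = 1/m (Eq. (3))."""
    p = 1.0 / m
    return sum(comb(N, r) * p**r * (1 - p)**(N - r) for r in range(k, N + 1))

print(p_at_least_k(50, 10, 1))   # ≈ 0.9948, i.e. 1 - (1 - 1/10)**50
```

For k = 1 the sum collapses to 1 − (1 − 1/m)^N, which is why even a modest number of cycles makes it very likely that interacting variables are optimized together at least once.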
