Solving differential equations with constructed neural networks

Ioannis G. Tsoulos (a), Dimitris Gavrilis (b), Euripidis Glavas (a)

(a) Department of Communications, Informatics and Management, Technological Educational Institute of Epirus, Greece
(b) Digital Curation Unit, Athena Research Centre, Artemidos 6 & Epidavrou, 15125 Maroussi, Greece

Article info

Article history:
Received 11 June 2008
Received in revised form 14 October 2008
Accepted 18 December 2008
Communicated by E.W. Lang
Available online 6 January 2009

Keywords: Differential equations; Neural networks; Genetic programming; Grammatical evolution

Abstract

A novel hybrid method for the solution of ordinary and partial differential equations is presented here. The method creates trial solutions in neural network form using a scheme based on grammatical evolution. The trial solutions are enhanced periodically using a local optimization procedure. The proposed method is tested on a series of ordinary differential equations, systems of ordinary differential equations as well as on partial differential equations with Dirichlet boundary conditions, and the results are reported.

© 2008 Elsevier B.V. All rights reserved.

    1. Introduction

A series of problems in many scientific fields can be modelled with the use of differential equations, such as problems in physics [1–5], chemistry [6–8], biology [9,10], economics [11], etc. Due to the importance of differential equations, many methods have been proposed in the relevant literature for their solution, such as Runge–Kutta methods [12–14], Predictor–Corrector methods [15–17], radial basis functions [18,19], artificial neural networks [20–27], models based on genetic programming [28,29], etc. In this article a hybrid method utilizing feed-forward neural networks constructed by grammatical evolution and a local optimization procedure is used in order to solve ordinary differential equations (ODEs), systems of ordinary differential equations (SODEs) and partial differential equations (PDEs). The construction of neural networks with grammatical evolution has recently been introduced by Tsoulos et al. [30] and utilizes the well-established grammatical evolution technique [31] to evolve the neural network topology along with the network parameters. That method has been tested with success on a series of data-fitting and classification problems. In this article the constructed neural network methodology is applied to a series of differential equations while preserving the initial or boundary conditions using penalization. The proposed method does not require the user to enter any information regarding the topology of the network. Also, the new method can be used to solve either ODEs or PDEs and it can be easily parallelized. This idea is similar to the cascade correlation neural networks introduced by Fahlman and Lebiere [32], in which the user is not required to enter any topology information. However, the method for selecting the network topology differs, since the proposed algorithm is a stochastic one. In the proposed method, the advantage of using an evolutionary algorithm is that the penalty function (used for initial or boundary conditions) can be incorporated easily into the training process.

The rest of this article is organized as follows: in Section 2 a brief description of the grammatical evolution algorithm is given, followed by an analytical description of the proposed method; in Section 3 the test functions used in the experiments and the experimental results are outlined; and in Section 4 some conclusions are derived.

    2. Method description

In this section a brief description of the grammatical evolution algorithm is given. The main steps of the proposed algorithm are then outlined, along with the steps for the fitness evaluation in the cases of ODEs, SODEs and PDEs.

    2.1. Grammatical evolution

Grammatical evolution is an evolutionary technique that can produce code in any programming language, requiring only the grammar of the target language in BNF syntax and a proper fitness function.


This technique has been used with success in many scientific fields, such as symbolic regression [34], the discovery of trigonometric identities [35], robot control [36], caching algorithms [37], financial prediction [38], etc. Chromosomes in grammatical evolution, in contrast to classical genetic programming [33], are not expressed as parse trees, but as vectors of integers. Each integer denotes a production rule from the given BNF grammar. The algorithm starts from the start symbol of the grammar and gradually creates the program string by replacing non-terminal symbols with the right-hand side of the selected production rule. The selection is performed in two steps:

1. Read an element from the chromosome (with value V).
2. Select the rule according to the scheme

RULE = V mod R   (1)

where R is the number of rules for the specific non-terminal symbol. The process of replacing non-terminal symbols with the right-hand side of production rules is continued until either a full program has been generated or the end of the chromosome has been reached. In the latter case we can reject the entire chromosome or we can start over (a wrapping event) from the first element of the chromosome. If the limit of wrapping events is reached, the chromosome is rejected by assigning to it a large fitness value, which prevents the chromosome from being used in the crossover procedure. In the proposed algorithm the limit of wrapping events was set to 2. As an example of the mapping procedure of grammatical evolution, consider the BNF grammar shown in Fig. 1. The number in parentheses denotes the sequence number of the corresponding production rule to be used in the mapping procedure. Consider the chromosome x = [9, 8, 6, 4, 16, 10, 17, 23, 8, 14]. The steps of the mapping procedure are listed in Table 1. The final outcome of these steps is the expression (x2 + cos(x3)).
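To make the mapping rule of Eq. (1) concrete, the following Python sketch applies it to the chromosome of Table 1. It is an illustration only, not the authors' implementation: the GRAMMAR dictionary is a simplified, hypothetical stand-in for the grammar of Fig. 1, and the function name map_chromosome is invented.

# Illustrative grammatical-evolution mapping (hypothetical grammar, not the paper's code).
GRAMMAR = {
    "<expr>": ["(<expr><op><expr>)", "<func>(<expr>)", "<terminal>"],
    "<op>": ["+", "-", "*", "/"],
    "<func>": ["sin", "cos", "exp", "log"],
    "<terminal>": ["<xlist>", "<digit>"],
    "<xlist>": ["x1", "x2", "x3"],
    "<digit>": ["0", "1", "2", "3"],
}

def map_chromosome(chromosome, start="<expr>", max_wraps=2):
    """Repeatedly replace the leftmost non-terminal using rule = V mod R (Eq. (1))."""
    program, pos, wraps = start, 0, 0
    while any(nt in program for nt in GRAMMAR):
        if pos == len(chromosome):          # end of the chromosome reached
            wraps += 1
            if wraps > max_wraps:           # too many wrapping events: reject
                return None                 # (in the paper this means a large fitness value)
            pos = 0                         # wrapping event: start over
        nt = min((n for n in GRAMMAR if n in program), key=program.index)
        rules = GRAMMAR[nt]
        program = program.replace(nt, rules[chromosome[pos] % len(rules)], 1)
        pos += 1
    return program

print(map_chromosome([9, 8, 6, 4, 16, 10, 17, 23, 8, 14]))   # -> (x2+cos(x3))

Under this toy grammar the derivation reproduces the sequence of Table 1 and ends in the same expression.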

    2.2. Algorithm description

The proposed method is based on an evolutionary algorithm, a stochastic process whose basis lies in biological evolution. In order to measure the efficiency of the algorithm, a neural network capable of solving differential equations is employed. The neural network's efficiency is used as the fitness of the evolutionary algorithm, together with a penalty function which is used in order to represent the boundary or initial conditions of the differential equations. The idea of combining a neural network with an evolutionary algorithm is a well-established approach that has been used numerous times both in the bibliography and in real-world applications. The main steps of the algorithm are as follows:

1. Set the number of chromosomes S, the maximum number of generations allowed K, the crossover rate p_c, the mutation rate p_m, a small positive number ε, the integer parameter G and the integer parameter M. The parameter G determines how frequently the local search procedure will be applied and the parameter M determines to how many chromosomes the local optimization procedure will be applied.
2. Set iters = 0.
3. Initialize the S chromosomes. Each chromosome will be mapped to a neural network using the procedure described subsequently in Section 2.3.
4. Calculate the fitness for every chromosome. The calculation of the fitness is described in the following subsection.
5. Apply the genetic operations of crossover and mutation to the population.
6. Set iters = iters + 1.
7. If iters mod G = 0 then
   (a) For i = 1 ... M do
       (i) Select randomly a chromosome R_i from the genetic population.
       (ii) Construct with the grammatical evolution procedure the corresponding neural network N(R_i).
       (iii) Train the neural network N(R_i) with a local optimization procedure.
       (iv) Put the modified chromosome back into the genetic population.
   (b) End For
8. End If
9. If iters ≥ K or the best chromosome has a fitness value below the predefined threshold ε, terminate; else go to step 4.
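The loop above can be read as in the following minimal Python sketch. It is an outline under assumptions: the paper publishes no code, so the helper callables fitness, crossover_and_mutate, build_network and local_search (and the chromosome length) are hypothetical placeholders to be supplied by the caller, and the termination test is simply moved to the top of the loop.

import random

def evolve(fitness, crossover_and_mutate, build_network, local_search,
           S=500, K=2000, G=20, M=20, eps=1e-6, chrom_len=50):
    """Illustrative outline of the evolutionary loop of Section 2.2."""
    population = [[random.randint(0, 255) for _ in range(chrom_len)]   # step 3
                  for _ in range(S)]
    iters = 0                                                          # step 2
    while True:
        scores = [fitness(c) for c in population]                      # step 4
        best = min(range(S), key=lambda j: scores[j])
        if iters >= K or scores[best] <= eps:                          # step 9
            return population[best]
        population = crossover_and_mutate(population, scores)          # step 5
        iters += 1                                                     # step 6
        if iters % G == 0:                                             # step 7: periodic local search
            for _ in range(M):
                i = random.randrange(S)
                net = build_network(population[i])    # grammatical-evolution mapping
                population[i] = local_search(net)     # put the refined chromosome back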

    2.3. Neural network construction

Every chromosome in the genetic population is a vector of integers, which is mapped through the mapping procedure of grammatical evolution into a feed-forward artificial neural network with one hidden layer and one output. The output of the constructed neural network is a summation of different sigmoidal units and it can be formulated as

N(x, p) = Σ_{i=1}^{H} p_{(d+2)i−(d+1)} · σ( Σ_{j=1}^{d} p_{(d+2)i−(d+1)+j} · x_j + p_{(d+2)i} )   (2)


    Fig. 1. An example grammar.

Table 1
An example of the mapping procedure.

String                      Chromosome                    Operation
<expr>                      9,8,6,4,16,10,17,23,8,14      9 mod 3 = 0
(<expr><op><expr>)          8,6,4,16,10,17,23,8,14        8 mod 3 = 2
(<terminal><op><expr>)      6,4,16,10,17,23,8,14          6 mod 2 = 0
(<xlist><op><expr>)         4,16,10,17,23,8,14            4 mod 3 = 1
(x2<op><expr>)              16,10,17,23,8,14              16 mod 4 = 0
(x2+<expr>)                 10,17,23,8,14                 10 mod 3 = 1
(x2+<func>(<expr>))         17,23,8,14                    17 mod 4 = 1
(x2+cos(<expr>))            23,8,14                       23 mod 3 = 2
(x2+cos(<terminal>))        8,14                          8 mod 2 = 0
(x2+cos(<xlist>))           14                            14 mod 3 = 2
(x2+cos(x3))


The constant d denotes the dimension of the input vector x, the parameter H denotes the number of processing units (hidden nodes) of the neural network, and the function σ(x) is the sigmoidal function expressed by the equation

σ(x) = 1 / (1 + exp(−x))   (3)
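As a direct reading of Eqs. (2) and (3), the sketch below evaluates such a single-hidden-layer network for a parameter vector p laid out exactly as in Eq. (2). It is only an illustration of the formula; the function names are invented and the code is not the authors' implementation.

import numpy as np

def sigmoid(z):
    # Eq. (3)
    return 1.0 / (1.0 + np.exp(-z))

def network_output(x, p, H):
    """Evaluate N(x, p) of Eq. (2).

    For hidden unit i (1-based), p[(d+2)i - (d+1)] is the output weight,
    p[(d+2)i - (d+1) + j] the weight of input x_j, and p[(d+2)i] the bias,
    so p must contain (d + 2) * H values.
    """
    x = np.asarray(x, dtype=float)
    d = x.size
    total = 0.0
    for i in range(1, H + 1):
        out_w = p[(d + 2) * i - (d + 1) - 1]                        # -1: 0-based indexing
        in_w = [p[(d + 2) * i - (d + 1) + j - 1] for j in range(1, d + 1)]
        bias = p[(d + 2) * i - 1]
        total += out_w * sigmoid(np.dot(in_w, x) + bias)
    return total

For example, with d = 1 and H = 1, network_output([0.5], [1.0, 2.0, -1.0], H=1) evaluates 1.0 · σ(2.0 · 0.5 − 1.0) = σ(0) = 0.5.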

The BNF grammar that controls the mapping procedure is shown in Fig. 2. The sigmoidal functions used in the neural network can be replaced by other functions, including radial basis functions. By introducing dynamic activation functions in the neural network, a new parameter is introduced in the system. This new parameter can either be encoded in the evolutionary algorithm or it can be wrapped into another algorithm that first selects the optimal activation function. In such cases it is usually required to use cross-validation to measure how well the selected activation functions work. For further information regarding this matter refer to [30].

    2.4. Fitness evaluation

The fitness evaluation for the proposed model is performed in a similar way as in [29], using penalization. The proposed penalty function is used to force the neural network to train on the boundary conditions (PDEs) or the initial conditions (ODEs). The error function represents how far the neural network is from satisfying the differential equation and is necessary in order to measure its efficiency.

2.4.1. ODE case

The method considers ODEs given in the following form:

f(x, y, y^(1), ..., y^(n−1), y^(n)) = 0,   x ∈ [a, b]   (4)

where y^(n) denotes the n-th order derivative of y. The initial conditions are expressed in the following form:

C_i(x, y, y^(1), ..., y^(n−1)) |_{x = t_i} = 0,   i = 1 ... n   (5)

where t_i is either a or b. The steps for the fitness evaluation of any given chromosome g are:

1. Choose T equidistant points in [a, b], denoted by x_0, x_1, ..., x_{T−1}.
2. Construct the neural network N(x, g) using grammatical evolution.
3. Calculate the train error of the network using the equation

E(N(g)) = Σ_{i=0}^{T−1} [ f(x_i, N(x_i, g), N^(1)(x_i, g), ..., N^(n)(x_i, g)) ]²   (6)

4. Calculate the penalty value P(N(g)) using the following equation:

P(N(g)) = λ Σ_{k=1}^{n} C_k²(x, N(x, g), N^(1)(x, g), ..., N^(n−1)(x, g)) |_{x = t_k}   (7)

where λ is a positive number.
5. Calculate the final fitness value as

V(g) = E(N(x, g)) + P(N(x, g))   (8)
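For a concrete first-order case such as ODE1 of Section 3.1 (y' = (2x − y)/x with y(1) = 3 on [1, 2]), Eqs. (6)–(8) can be read as in the sketch below. This is an illustration under assumptions: the derivative of the trial solution is approximated here by a central difference purely for brevity (the constructed network of Eq. (2) can of course be differentiated analytically), and the function names are invented.

import numpy as np

def ode_fitness(net, T=100, a=1.0, b=2.0, lam=100.0):
    """Illustrative fitness of Eqs. (6)-(8) for ODE1: y' = (2x - y)/x, y(1) = 3.

    `net` is any callable trial solution N(x), e.g. a constructed network.
    """
    xs = np.linspace(a, b, T)                    # T equidistant points in [a, b]
    h = 1e-5

    def d_net(x):                                # central-difference approximation of N'(x)
        return (net(x + h) - net(x - h)) / (2.0 * h)

    # Eq. (6): squared residual of f(x, N, N') = N' - (2x - N)/x at every point
    E = sum((d_net(x) - (2.0 * x - net(x)) / x) ** 2 for x in xs)
    # Eq. (7): penalty for the initial condition y(1) = 3
    P = lam * (net(a) - 3.0) ** 2
    # Eq. (8): total fitness value
    return E + P

# Example: the analytical solution y(x) = x + 2/x yields an essentially zero fitness.
print(ode_fitness(lambda x: x + 2.0 / x))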

2.4.2. SODE case

The proposed method deals with SODEs expressed in the form

f_1(x, y_1, y_1^(1), y_2, y_2^(1), ..., y_k, y_k^(1)) = 0
f_2(x, y_1, y_1^(1), y_2, y_2^(1), ..., y_k, y_k^(1)) = 0
...
f_k(x, y_1, y_1^(1), y_2, y_2^(1), ..., y_k, y_k^(1)) = 0,   x ∈ [a, b]   (9)

with the following initial conditions:

y_1(a) = y_{1a}
y_2(a) = y_{2a}
...
y_k(a) = y_{ka}   (10)

The steps for the fitness evaluation of any given chromosome g are the following:

1. Choose T equidistant points in [a, b], denoted by x_0, x_1, ..., x_{T−1}.
2. Split the chromosome g into k parts and construct, using grammatical evolution, the neural networks N_1(x, g), N_2(x, g), ..., N_k(x, g).
3. Calculate the train errors E(N_i(g)), i = 1 ... k, for every neural network using the equation

E(N_i(x, g)) = Σ_{j=0}^{T−1} [ f_i(x_j, N_1(x_j, g), N_1^(1)(x_j, g), ..., N_k(x_j, g), N_k^(1)(x_j, g)) ]²   (11)

4. Calculate the penalty value for every neural network N_i(x, g), i = 1 ... k:

P(N_i(g)) = λ ( N_i(a, g) − y_{ia} )²   (12)

5. Calculate the total fitness value

V(g) = Σ_{i=1}^{k} [ E(N_i(x, g)) + P(N_i(x, g)) ]   (13)
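Step 2 above only states that the chromosome is split into k parts; a minimal sketch of one possible reading, assuming equal contiguous parts (an assumption, since the paper does not specify the splitting scheme), is:

def split_chromosome(g, k):
    """Split chromosome g into k contiguous parts, one per unknown function y_i.

    Any remainder genes (when len(g) is not divisible by k) are simply dropped.
    """
    size = len(g) // k
    return [g[i * size:(i + 1) * size] for i in range(k)]

# Example: a 12-gene chromosome for a system of k = 3 equations.
parts = split_chromosome(list(range(12)), 3)
# Each part would then be mapped to its own network N_i(x, g) by grammatical evolution.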

2.4.3. PDE case

The proposed method deals with PDEs of two variables with Dirichlet boundary conditions, without loss of generality. The PDEs must be expressed in the form

f(x, y, Ψ(x, y), ∂Ψ(x, y)/∂x, ∂Ψ(x, y)/∂y, ∂²Ψ(x, y)/∂x², ∂²Ψ(x, y)/∂y²) = 0   (14)

with x ∈ [a, b] and y ∈ [c, d]. The boundary conditions are given by:

1. Ψ(a, y) = f_0(y)
2. Ψ(b, y) = f_1(y)
3. Ψ(x, c) = g_0(x)
4. Ψ(x, d) = g_1(x)


    Fig. 2. The proposed grammar.


The steps for the calculation of the fitness for any chromosome g are:

1. Choose T uniformly distributed points in the grid [a, b] × [c, d], denoted by (x_i, y_i), i = 0 ... T−1.
2. Choose B equidistant points in [a, b], denoted by x_{bi}, i = 0 ... B−1.
3. Choose B equidistant points in [c, d], denoted by y_{bi}, i = 0 ... B−1.
4. Construct the neural network N(x, y, g) using the grammatical evolution procedure.
5. Calculate the train error of the neural network N(x, y, g):

E(N(x, y, g)) = Σ_{i=0}^{T−1} [ f(x_i, y_i, N(x_i, y_i, g), ∂N(x_i, y_i, g)/∂x, ∂N(x_i, y_i, g)/∂y, ∂²N(x_i, y_i, g)/∂x², ∂²N(x_i, y_i, g)/∂y²) ]²   (15)

6. Calculate the penalty quantities

P_1(N(x, y, g)) = λ Σ_{i=0}^{B−1} [ N(a, y_{bi}, g) − f_0(y_{bi}) ]²
P_2(N(x, y, g)) = λ Σ_{i=0}^{B−1} [ N(b, y_{bi}, g) − f_1(y_{bi}) ]²
P_3(N(x, y, g)) = λ Σ_{i=0}^{B−1} [ N(x_{bi}, c, g) − g_0(x_{bi}) ]²
P_4(N(x, y, g)) = λ Σ_{i=0}^{B−1} [ N(x_{bi}, d, g) − g_1(x_{bi}) ]²   (16)

where λ is a positive real number.
7. Calculate the total fitness with

V(g) = E(N(x, y, g)) + P_1(N(x, y, g)) + P_2(N(x, y, g)) + P_3(N(x, y, g)) + P_4(N(x, y, g))   (17)
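The four boundary penalties of Eq. (16) can be read as in the sketch below, where net is a trial solution N(x, y, g) and f0, f1, g0, g1 are the Dirichlet boundary functions; all names are illustrative and the code is not part of the paper.

import numpy as np

def boundary_penalty(net, f0, f1, g0, g1, a, b, c, d, B=10, lam=100.0):
    """Sum of the four boundary penalties P1..P4 of Eq. (16)."""
    xb = np.linspace(a, b, B)                 # B equidistant points in [a, b]
    yb = np.linspace(c, d, B)                 # B equidistant points in [c, d]
    p1 = lam * sum((net(a, y) - f0(y)) ** 2 for y in yb)
    p2 = lam * sum((net(b, y) - f1(y)) ** 2 for y in yb)
    p3 = lam * sum((net(x, c) - g0(x)) ** 2 for x in xb)
    p4 = lam * sum((net(x, d) - g1(x)) ** 2 for x in xb)
    return p1 + p2 + p3 + p4

# Example with PDE2 of Section 3.4: the exact solution sin(x)cos(y) incurs zero penalty.
print(boundary_penalty(lambda x, y: np.sin(x) * np.cos(y),
                       f0=lambda y: 0.0,
                       f1=lambda y: np.sin(1.0) * np.cos(y),
                       g0=lambda x: np.sin(x),
                       g1=lambda x: np.sin(x) * np.cos(1.0),
                       a=0.0, b=1.0, c=0.0, d=1.0))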

    3. Experiments

The proposed method was tested on a series of ODEs, non-linear ODEs, SODEs and PDEs with two variables and Dirichlet boundary conditions. These test functions are listed subsequently and they have been used in the experiments performed in [20,29]. In all the experiments, the ODEs were sampled using a uniform distribution and only 1000 samples were extracted in the intervals of x for each case. In the following subsections, the test problems are presented for each one of the four series.

    3.1. Linear ODEs

ODE1:

y' = (2x − y)/x

with y(1) = 3 and x ∈ [1, 2]. The analytical solution is y(x) = x + 2/x.

ODE2:

y' = (1 − y cos(x))/sin(x)

with y(1) = 3/sin(1) and x ∈ [1, 2]. The analytical solution is y(x) = (x + 2)/sin(x).

ODE3:

y'' = 6y' − 9y

with y(0) = 0, y'(0) = 2 and x ∈ [0, 1]. The analytical solution is y(x) = 2x exp(3x).

ODE4:

y'' = −(1/5) y' − y − (1/5) exp(−x/5) cos(x)

with y(0) = 0, y(1) = sin(1)/exp(0.2) and x ∈ [0, 1]. The analytical solution is y(x) = exp(−x/5) sin(x).

ODE5:

y'' + (1/x) y' − (1/x) cos(x) = 0

with y(0) = 0, y'(0) = 1 and x ∈ [0, 1]. The solution is given by

y(x) = ∫_0^x (sin(t)/t) dt

ODE6:

y'' + 2x y' = 0

with y(0) = 0, y'(0) = 1 and x ∈ [0, 1]. The solution is given by

y(x) = ∫_0^x exp(−t²) dt

ODE7:

y'' − 2x/(x² + 1) − 2y/(x² + 1) = 0

with y(0) = 0, y'(0) = 1 and x ∈ [0, 1]. The analytical solution is y(x) = (x² + 1) arctan(x).
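As a quick sanity check of ODE1 above, the stated analytical solution can be verified symbolically; the short sketch below is an editorial illustration and not part of the paper.

import sympy as sp

x = sp.symbols("x")
y = x + 2 / x                                              # stated analytical solution of ODE1
residual = sp.simplify(sp.diff(y, x) - (2 * x - y) / x)    # y' - (2x - y)/x
print(residual)          # 0  -> the differential equation is satisfied
print(y.subs(x, 1))      # 3  -> the initial condition y(1) = 3 holds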

    3.2. Non-linear ODEs

NLODE1:

y' = 1/(2y)

with y(1) = 1 and x ∈ [1, 4]. The analytical solution is given by y(x) = √x.

NLODE2:

(y')² + log(y) − cos²(x) − 2 cos(x) − 1 − log(x + sin(x)) = 0

with y(1) = 1 + sin(1) and x ∈ [1, 2]. The analytical solution is y(x) = x + sin(x).

NLODE3:

y'' y' = −4/x³

with y(1) = 0 and x ∈ [1, 2]. The analytical solution is given by y(x) = log(x²).

NLODE4:

x² y'' + (x y')² + 1/log(x) = 0

with y(e) = 0, y'(e) = 1/e and x ∈ [e, 2e]. The analytical solution is y(x) = log(log(x)).

    3.3. Systems of ODEs

SODE1:

y_1' = cos(x) + y_1² + y_2 − (x² + sin²(x))
y_2' = 2x − x² sin(x) + y_1 y_2

with y_1(0) = 0, y_2(0) = 0 and x ∈ [0, 1]. The analytical solution is given by: y_1(x) = sin(x), y_2(x) = x².

SODE2:

y_1' = (cos(x) − sin(x))/y_2
y_2' = y_1 y_2 + exp(x) − sin(x)


with y_1(0) = 0, y_2(0) = 1 and x ∈ [0, 1]. The analytical solution is given by: y_1(x) = sin(x)/exp(x), y_2(x) = exp(x).

SODE3:

y_1' = cos(x)
y_2' = −y_1
y_3' = y_2
y_4' = −y_3
y_5' = y_4

with y_1(0) = 0, y_2(0) = 1, y_3(0) = 0, y_4(0) = 1, y_5(0) = 0 and x ∈ [0, 1]. The analytical solution is given by: y_1(x) = sin(x), y_2(x) = cos(x), y_3(x) = sin(x), y_4(x) = cos(x), y_5(x) = sin(x).

SODE4:

y_1' = −y_2 sin(exp(x))
y_2' = y_2

with y_1(0) = cos(1.0), y_2(0) = 1.0 and x ∈ [0, 1]. The analytical solution is given by: y_1(x) = cos(exp(x)), y_2(x) = exp(x).

    3.4. PDEs

PDE1:

∇²Ψ(x, y) = exp(−x)(x − 2 + y³ + 6y)

with x ∈ [0, 1], y ∈ [0, 1] and the boundary conditions: Ψ(0, y) = y³, Ψ(1, y) = (1 + y³) exp(−1), Ψ(x, 0) = x exp(−x), Ψ(x, 1) = (x + 1) exp(−x). The analytical solution is given by: Ψ(x, y) = (x + y³) exp(−x).

PDE2:

∇²Ψ(x, y) = −2 Ψ(x, y)

with x ∈ [0, 1], y ∈ [0, 1]. The associated boundary conditions are: Ψ(0, y) = 0, Ψ(1, y) = sin(1) cos(y), Ψ(x, 0) = sin(x), Ψ(x, 1) = sin(x) cos(1). The analytical solution is given by: Ψ(x, y) = sin(x) cos(y).

PDE3:

∇²Ψ(x, y) = 4

with x ∈ [0, 1], y ∈ [0, 1]. The boundary conditions are: Ψ(0, y) = y² + y + 1, Ψ(1, y) = y² + y + 3, Ψ(x, 0) = x² + x + 1, Ψ(x, 1) = x² + x + 3. The analytical solution is given by: Ψ(x, y) = x² + y² + x + y + 1.

PDE4:

∇²Ψ(x, y) = −(x² + y²) sin(x y)

with x ∈ [0, 1], y ∈ [0, 1]. The boundary conditions are given by: Ψ(0, y) = 0, Ψ(1, y) = sin(y), Ψ(x, 0) = 0, Ψ(x, 1) = sin(x). The analytical solution is given by: Ψ(x, y) = sin(x y).

3.5. Experimental results

The method was performed 30 times, using different seeds for the random number generator each time, on every differential equation described previously, and averages were taken. In Table 2 the numerical values for the parameters of the algorithm are listed. The numerical values for the majority of these parameters were taken from the papers of Lagaris et al. [20] and Tsoulos et al. [30], and some of them have been found experimentally. The local optimization procedure used in the experiments was a BFGS variant due to Powell [39]. In Table 3 the results from the application of the proposed method to the test functions are listed. The column TRAIN ERROR denotes the average per-point error of the proposed method on the T points of the training set, the column TEST ERROR denotes the average per-point error of the proposed method on the points belonging to the test set, and the column GENERATIONS denotes the average number of required generations of the genetic algorithm. For the cases of ODEs and SODEs the test set had 1000 equidistant points, and for the case of PDEs the test set had 10 000 uniformly distributed points. In Fig. 3 we can observe the progress of the solution for an example run for ODE1. As we can notice, the proposed method managed to solve the objective problem in 60 generations. Also, in Fig. 4, the final solution is plotted in the range [1, 3] against the true solution f(x) = x + 2/x. As we can see, the final solution maintains its quality even outside the training domain. Furthermore, in Table 4 we list the experimental results for the same problem using different values of the parameter λ. As can be noticed, the method succeeds in solving the problem even for small values of the critical parameter λ. As we can notice from the column GENERATIONS of Table 3, the proposed method managed to solve all ODEs in very little time, if time is expressed as the number of required generations. The proposed method is a combination of two types of algorithms: a stochastic one (the genetic algorithm) and a deterministic one. This fact raises some stability issues, so, in order to overcome this, each experiment was run 30 times with a different seed in order to diminish any random variations in the experimental results. The stability issues mostly refer to the stochastic nature of the proposed algorithm. This means that it does not always converge to the same local


Table 2
The numerical values for the parameters of the method.

Name    Value
S       500
K       2000
p_c     0.9
p_m     0.05
ε       10⁻⁶
λ       100
G       20
M       20
T       100
B       10

Table 3
Experimental results of the proposed method.

Problem    Train error    Test error     Generations
ODE1       1.6 × 10⁻⁸     1.5 × 10⁻⁸     111
ODE2       5.0 × 10⁻⁸     1.5 × 10⁻⁸     78
ODE3       4.4 × 10⁻⁹     4.2 × 10⁻⁹     43
ODE4       1.0 × 10⁻⁹     1.0 × 10⁻⁹     189
ODE5       1.9 × 10⁻⁸     2.0 × 10⁻⁸     119
ODE6       4.7 × 10⁻⁸     4.7 × 10⁻⁸     65
ODE7       2.3 × 10⁻⁸     2.5 × 10⁻⁸     913
NLODE1     3.2 × 10⁻⁸     3.1 × 10⁻⁸     69
NLODE2     1.1 × 10⁻⁸     1.1 × 10⁻⁸     405
NLODE3     1.2 × 10⁻⁶     1.2 × 10⁻⁶     842
NLODE4     1.1 × 10⁻⁷     1.1 × 10⁻⁷     289
SODE1      2.3 × 10⁻⁵     2.1 × 10⁻⁵     1422
SODE2      2.8 × 10⁻⁶     2.7 × 10⁻⁶     1078
SODE3      2.1 × 10⁻⁵     2.3 × 10⁻⁵     1136
SODE4      5.8 × 10⁻⁷     5.9 × 10⁻⁷     827
PDE1       3.9 × 10⁻⁷     1.6 × 10⁻⁷     1034
PDE2       9.6 × 10⁻⁸     5.1 × 10⁻⁸     996
PDE3       8.5 × 10⁻⁸     4.7 × 10⁻⁸     189
PDE4       1.9 × 10⁻⁶     1.1 × 10⁻⁶     1267


minimum. That is why, in the experimental setup, each experiment was run 30 times with different seeds.

    4. Conclusions

In conclusion, a novel method for solving ODEs, SODEs and PDEs is presented. This method utilizes artificial neural networks that are constructed and trained using grammatical evolution. This approach for simultaneously constructing and training neural networks has been used successfully in other domains. Concerning the differential equations problem, a series of experiments on 19 well-known problems showed that the proposed method managed to solve all the problems. Although a number of methods for solving differential equations exist, the proposed one has very little execution time and does not require the user to enter any information about the network topology. The main advantages of the proposed method are the following:

1. The user is only required to sample the differential equations in order to create the train/test files.
2. The method is general enough to be applied to ODEs, SODEs and PDEs.
3. The final solution is expressed in a closed analytical form (neural network form) which is differentiable.
4. The final solution provides good generalization abilities, even outside the domain of the differential equation.
5. The proposed method is based on genetic algorithms, which means that it can be easily parallelized.
6. The proposed method can be extended by using different BNF grammars for the construction of neural networks with different topologies (such as recurrent neural networks) or different activation functions.
7. The method can be extended to solve higher-order differential equations, a task that is currently being researched.

    Acknowledgements

All the experiments of this paper were conducted at the Research Center of Scientific Simulations of the University of Ioannina, which is composed of 200 computing nodes with dual CPUs (AMD Opteron 2.2 GHz, 64-bit) running Red Hat Enterprise Linux.

    References

[1] A.R. Its, A.G. Izergin, V.E. Korepin, N.A. Slavnov, Differential equations for quantum correlation functions, International Journal of Modern Physics B 4 (1990) 1003–1037.
[2] A.V. Kotikov, Differential equations method: the calculation of vertex-type Feynman diagrams, Physics Letters B 259 (1991) 314–322.
[3] H. Gang, H. Kaifen, Controlling chaos in systems described by partial differential equations, Physical Review Letters 71 (1993) 3794–3797.
[4] C.J. Budd, A. Iserles, Geometric integration: numerical solution of differential equations on manifolds, Philosophical Transactions: Mathematical, Physical and Engineering Sciences 357 (1999) 945–956.
[5] Y.Z. Peng, Exact solutions for some nonlinear partial differential equations, Physics Letters A 314 (2003) 401–408.
[6] J.G. Verwer, J.G. Blom, M. van Loon, E.J. Spee, A comparison of stiff ODE solvers for atmospheric chemistry problems, Atmospheric Environment 30 (1996) 49–58.
[7] J. Behlke, O. Ristau, A new approximate whole boundary solution of the Lamm differential equation for the analysis of sedimentation velocity experiments, Biophysical Chemistry 95 (2002) 59–68.
[8] U. Salzner, P. Otto, J. Ladik, Numerical solution of a partial differential equation system describing chemical kinetics and diffusion in a cell with the aid of compartmentalization, Journal of Computational Chemistry 11 (1990) 194–204.
[9] R.V. Culshaw, S. Ruan, A delay-differential equation model of HIV infection of CD4+ T-cells, Mathematical Biosciences 165 (2000) 27–39.
[10] G.A. Bocharov, F.A. Rihan, Numerical modelling in biosciences using delay differential equations, Journal of Computational and Applied Mathematics 125 (2000) 183–199.
[11] R. Norberg, Differential equations for moments of present values in life insurance, Insurance: Mathematics and Economics 17 (1995) 171–180.
[12] J.C. Butcher, The Numerical Analysis of Ordinary Differential Equations: Runge–Kutta and General Linear Methods, Wiley-Interscience, New York, NY, USA, 1987.
[13] J.G. Verwer, Explicit Runge–Kutta methods for parabolic partial differential equations, Applied Numerical Mathematics 22 (1996) 359–379.
[14] A. Wambecq, Rational Runge–Kutta methods for solving systems of ordinary differential equations, Computing 20 (1978) 333–342.
[15] J. Douglas, B.F. Jones, On predictor–corrector methods for nonlinear parabolic differential equations, Journal of the Society for Industrial and Applied Mathematics 11 (1963) 195–204.
[16] R.W. Hamming, Stable predictor–corrector methods for ordinary differential equations, Journal of the ACM 6 (1959) 37–47.


Fig. 3. The progress of the solution for an example run for ODE1 (best fitness versus generations).

Fig. 4. The quality of a trial solution for ODE1: the output of the proposed method plotted against the true solution f(x) = x + 2/x for x in [1, 3].

Table 4
Solving the ODE1 problem for different values of parameter λ.

λ       Train error     Test error      Generations
10      2.4 × 10⁻⁹      2.5 × 10⁻⁹      100
100     1.6 × 10⁻⁸      1.5 × 10⁻⁸      111
1000    2.2 × 10⁻¹⁰     2.3 × 10⁻⁹      100


[17] J.D. Lambert, Numerical Methods for Ordinary Differential Systems: The Initial Value Problem, Wiley, Chichester, England, 1991.
[18] G.E. Fasshauer, Solving differential equations with radial basis functions: multilevel methods and smoothing, Advances in Computational Mathematics 11 (1999) 139–159.
[19] C. Franke, R. Schaback, Solving partial differential equations by collocation using radial basis functions, Applied Mathematics and Computation 93 (1998) 73–82.
[20] I.E. Lagaris, A. Likas, D.I. Fotiadis, Artificial neural networks for solving ordinary and partial differential equations, IEEE Transactions on Neural Networks 9 (1998) 987–1000.
[21] P. Ramuhalli, L. Udpa, S.S. Udpa, Finite-element neural networks for solving differential equations, IEEE Transactions on Neural Networks 16 (2005) 1381–1392.
[22] D.R. Parisi, M.C. Mariani, M.A. Laborde, Solving differential equations with unsupervised neural networks, Chemical Engineering and Processing: Process Intensification 42 (2003) 715–721.
[23] L. Jianyu, L. Siwei, Q. Yingjian, H. Yaping, Numerical solution of differential equations by radial basis function neural networks, in: Proceedings of the International Joint Conference on Neural Networks, vol. 1, 2002, pp. 773–777.
[24] S. He, K. Reif, R. Unbehauen, Multilayer neural networks for solving a class of partial differential equations, Neural Networks 13 (2000) 385–396.
[25] F. Puffer, R. Tetzlaff, D. Wolf, Learning algorithm for cellular neural networks (CNN) solving nonlinear partial differential equations, in: Conference Proceedings of the International Symposium on Signals, Systems and Electronics, 1995, pp. 501–504.
[26] A.J. Meade Jr., A.A. Fernandez, Solution of nonlinear ordinary differential equations by feedforward neural networks, Mathematical and Computer Modelling 20 (1994) 19–44.
[27] A.J. Meade Jr., A.A. Fernandez, The numerical solution of linear ordinary differential equations by feedforward neural networks, Mathematical and Computer Modelling 19 (1994) 1–25.
[28] G. Burgess, Find approximate analytic solutions to differential equations using genetic programming, Surveillance Systems Division, Electronics and Surveillance Research Laboratory, Department of Defense, Australia, 1999.
[29] I.G. Tsoulos, I.E. Lagaris, Solving differential equations with genetic programming, Genetic Programming and Evolvable Machines 7 (2006) 33–54.
[30] I. Tsoulos, D. Gavrilis, E. Glavas, Neural network construction and training using grammatical evolution, Neurocomputing 72 (2008) 269–277.
[31] M. O'Neill, C. Ryan, Grammatical evolution, IEEE Transactions on Evolutionary Computation 5 (2001) 349–358.
[32] S.E. Fahlman, C. Lebiere, The cascade-correlation learning architecture, Advances in Neural Information Processing Systems 2 (1990) 524–532.
[33] J.R. Koza, Genetic Programming: On the Programming of Computers by Means of Natural Selection, MIT Press, Cambridge, MA, 1992.
[34] M. O'Neill, C. Ryan, Grammatical Evolution: Evolutionary Automatic Programming in an Arbitrary Language, Genetic Programming, vol. 4, Kluwer Academic Publishers, Dordrecht, 2003.
[35] C. Ryan, M. O'Neill, J.J. Collins, Grammatical evolution: solving trigonometric identities, in: Proceedings of Mendel 1998: 4th International Mendel Conference on Genetic Algorithms, Optimization Problems, Fuzzy Logic, Neural Networks, Rough Sets, Brno, Czech Republic, Technical University of Brno, Faculty of Mechanical Engineering, June 24–26, 1998, pp. 111–119.
[36] J. Collins, C. Ryan, Automatic generation of robot behaviors using grammatical evolution, in: Proceedings of AROB 2000, the Fifth International Symposium on Artificial Life and Robotics.
[37] M. O'Neill, C. Ryan, Automatic generation of caching algorithms, in: K. Miettinen, M.M. Mäkelä, P. Neittaanmäki, J. Periaux (Eds.), Evolutionary Algorithms in Engineering and Computer Science, Jyväskylä, Finland, 30 May–3 June 1999, Wiley, New York, 1999, pp. 127–134.
[38] A. Brabazon, M. O'Neill, A grammar model for foreign-exchange trading, in: H.R. Arabnia, et al. (Eds.), Proceedings of the International Conference on Artificial Intelligence, vol. II, CSREA Press, 23–26 June 2003, pp. 492–498.
[39] M.J.D. Powell, A tolerant algorithm for linearly constrained optimization calculations, Mathematical Programming 45 (1989) 547–566.

Ioannis Tsoulos received his Ph.D. degree from the Department of Computer Science of the University of Ioannina in 2006. He is currently a Visiting Lecturer at the Department of Communications, Informatics and Management of the Technological Educational Institute of Epirus. His research interest areas include optimization, genetic programming and neural networks.

Dimitris Gavrilis was born in Athens in 1978. He graduated from the Electrical and Computer Engineering Department in 2002. In 2007 he received his Ph.D. in Denial-of-Service attack detection. His research interests include network intrusion detection, feature selection and construction for pattern recognition, and evolutionary neural networks. He currently works as a researcher at the Athena Research Centre.

Euripidis Glavas received his Ph.D. degree from the University of Sussex in 1988. He is currently an associate professor in the Department of Communications, Informatics and Management of the Technological Educational Institute of Epirus. His research areas include optical waveguides, computer architecture and neural networks.
