Curso de modelización y simulación de procesos. ETSII. Manuel Rodríguez

MÉTODOS DE RESOLUCIÓN (Solution Methods)

Algebraic equations
  Linear
  Nonlinear
    Analytical methods
    Numerical methods: Interval Halving, Successive Substitution, False Position, Newton-Raphson, Wegstein, Secant, Ridder, Brent, Muller, Broyden
    For multidimensional problems: Dogleg step, Hook step, Homotopy
  • Systems of Linear Algebraic Equations

    General Representation

    a11·x1 + a12·x2 + … + a1n·xn = b1
    a21·x1 + a22·x2 + … + a2n·xn = b2
    …
    an1·x1 + an2·x2 + … + ann·xn = bn

    where the aij are constant coefficients, the bi are constants, the xj are the unknown variables, and n is the number of equations

  • Analysis of System LAE

    Consistent, independent equations
    If the equations are not contradictory and none is simply a multiple of another, then the solution to the LAE problem is unique.
    Example:

    Inconsistent equations
    If the equations are contradictory, then no solution exists.
    Example:

  • Analysis of System LAE (2)

    Dependent equations
    If some equations are multiples of others, infinitely many solutions can be found.
    Example:

    Ill-conditioned equations
    If a small change in any of the coefficients changes the solution set from unique to either infinite or empty, the system of LAE is said to be ill-conditioned.
    Numerical solution will be difficult.
    Example:

    Solution Methods — Linear Algebraic Equations

  • Gauss Elimination - Concept

    If two equations have a point in common, then that point also satisfies any linear combination of the two equations. Suppose that x1*, …, xn* is a solution to the linear equations

    then x1*, …, xn* also satisfies any linear combination of these linear equations:

  • Gauss Elimination Algorithm

    Add a multiple of one row onto another row to form a new row;

    Repeat (1) until we get an upper (or lower) triangular matrix of LAE coefficients;

    Apply back substitution to solve for each variable

    Step 1: Forward elimination until we get the upper (or lower) triangular matrix of LAE coefficients
    Step 2: Backward substitution
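The two-step procedure above can be sketched in Python (an illustrative translation of the algorithm, not code from the course; the function name gauss_solve is chosen here, and partial pivoting is included to sidestep the zero-pivot pitfall):

```python
def gauss_solve(A, b):
    """Solve Ax = b by Gauss elimination with partial pivoting."""
    n = len(A)
    # Work on copies so the caller's data is untouched.
    A = [row[:] for row in A]
    b = b[:]
    # Step 1: forward elimination to an upper-triangular system.
    for k in range(n - 1):
        # Partial pivoting: bring up the row with the largest pivot
        # to avoid dividing by a zero or very small element.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]          # multiplier for row i
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # Step 2: backward substitution.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x
```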

  • Pitfalls of the Gauss Elimination

    When does Gauss elimination obtain a solution? When the inverse of A exists,

    when the equations are consistent and independent, i.e. they satisfy the rank condition rank(A) = rank([A | b]) = n.

    Limitations of the algorithm:
    When a pivot element equals zero
    When a pivot element is significantly smaller than the coefficients it is used to eliminate (ill-conditioning)
    Round-off errors


  • LU Decomposition - Concept

    A matrix A is decomposed (or factorised) into lower- and upper-triangular factors; for example, if A is a 3x3 matrix:

    Methods to find the LU factors:
    Gauss elimination (Doolittle form, with 1s on the diagonal of L)
    Direct computation — Crout form (1s on the diagonal of U)
    Cholesky form (A = L·L', for symmetric positive-definite A)

  • Motivation for LU Decomposition

    Solve LAE problems in a more efficient way: once A = LU is known, systems with different right-hand sides b can be solved cheaply.

    Algorithm for solving an LAE problem using the LU decomposition

    Find the LU factors of the A matrix: A = LU. The algebraic problem is transformed from Ax = b to LUx = b.
    Define y = Ux.
    Solve Ly = b for y using forward substitution.
    Solve Ux = y for x using backward substitution.
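The solve procedure can be sketched in Python (an illustrative Doolittle-form example, not code from the course; the names lu_decompose and lu_solve are chosen here, and no pivoting is performed):

```python
def lu_decompose(A):
    """Doolittle LU factorisation (1s on the diagonal of L), no pivoting."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):            # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):        # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, b):
    """Solve LUx = b: forward substitution for y = Ux, then backward for x."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):                   # Ly = b
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):       # Ux = y
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x
```

Once lu_decompose has been run, lu_solve can be called repeatedly for different right-hand sides at low cost.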


  • LU Decomposition using Gauss Elimination

    Suppose we try to find the LU factors of the following matrix:

    Initialization: let U = A and L = I.

    Step 1: Row 1 is unchanged, and rows 2-4 are modified by:

    to give:

    Step 2: Rows 1 and 2 are unchanged. Rows 3 and 4 are transformed by:

    to yield:

    Step 3: The fourth row is modified to complete the forward-elimination stage by:

    to yield:

    Hence, the LU factors are:


  • LU Decomposition using Crout Method

    Based on equating the elements of the product LU with the corresponding elements of A, where the diagonal elements of U are 1s. For a 3 x 3 matrix:

    Algorithm
    Step 1: Set l11 = a11, then solve for the remaining elements in the first column of L and the first row of U.
    Step 2: Find l22, then solve for the remainder of the second column of L and the second row of U.
    Continue for the rest.

  • General Formulas for Crout's Method

    General formula for the first column of L, the first row of U, and the diagonal elements of U:

    li1 = ai1,   u1j = a1j / l11,   ujj = 1

    General formula for the j-th column of L and the j-th row of U, for j = 2, 3, …, n-1; i = j, j+1, …, n; and k = j+1, j+2, …, n:

    lij = aij − Σ(m=1..j−1) lim·umj,   ujk = (ajk − Σ(m=1..j−1) ljm·umk) / ljj

    General formula for the last diagonal element of L:

    lnn = ann − Σ(m=1..n−1) lnm·umn
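The general formulas can be sketched in Python (an illustrative example, not code from the course; the function name crout is chosen here):

```python
def crout(A):
    """Crout factorisation: A = LU with 1s on the diagonal of U."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for j in range(n):
        U[j][j] = 1.0
        for i in range(j, n):            # column j of L
            L[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        for i in range(j + 1, n):        # row j of U
            U[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(j))) / L[j][j]
    return L, U
```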


  • LU Decomposition using Cholesky's Method

    The matrix A is symmetric and positive definite, and is factorised as A = L·L'.

    For a 3 x 3 matrix:

    Algorithm:
    Find l11 = √a11, then solve for the remaining elements in the first column of L.
    Find l22, then solve for the remainder of the second column of L.
    Continue for the rest.

  • Generalization of Choleskys Method

    For the k-th row:

    lkk = √( akk − Σ(j=1..k−1) lkj² ),   lik = ( aik − Σ(j=1..k−1) lij·lkj ) / lkk   for i > k
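The k-th-row formulas can be sketched in Python (an illustrative example, not code from the course; the function name cholesky is chosen here and A is assumed symmetric positive definite):

```python
import math

def cholesky(A):
    """Cholesky factorisation A = L L' for a symmetric positive-definite A."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for k in range(n):
        # Diagonal element of row k.
        L[k][k] = math.sqrt(A[k][k] - sum(L[k][j] ** 2 for j in range(k)))
        # Remaining elements of column k (below the diagonal).
        for i in range(k + 1, n):
            L[i][k] = (A[i][k] - sum(L[i][j] * L[k][j] for j in range(k))) / L[k][k]
    return L
```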


  • Variations of Algorithm

    Jacobi Method
    Based on the transformation of Ax = b to x = Cx + d, in which the matrix C has zeros on the diagonal. The vector x is updated using the previous estimate for all components of x to evaluate the right-hand side of the equation (simultaneous updating).

    Gauss-Seidel Method
    Based on the same transformation of Ax = b to x = Cx + d, in which the matrix C has zeros on the diagonal as in the Jacobi method, but each component of the vector x is updated immediately as the iteration progresses (sequential updating).

    Successive Over-Relaxation (SOR) Method
    Like the Gauss-Seidel method, but with an additional parameter ω that may accelerate the convergence of the iteration. For 0 < ω < 1 the method is known as successive under-relaxation, while for ω > 1 it is known as successive over-relaxation.

  • Iterative Methods: Concept

    Basic Idea
    Solve the i-th equation in the system for the i-th variable, converting the given system into a form suitable for finding the solution iteratively.

    Algorithm
    Transform Ax = b to x = Cx + d so that we can solve iteratively for x as x(k+1) = C·x(k) + d.

    Stopping Conditions
    Stop the iterations when the norm of the change in the solution vector from one iteration to the next is sufficiently small.
    Stop the iterations when the norm of the residual vector r = b − Ax is below a specified tolerance.


  • SOR Method

    Example

    SOR Iterative Scheme

    It can be seen that if ω = 1, the scheme reduces to the iterative scheme of the Gauss-Seidel method. The parameter ω is selected to improve the convergence of the algorithm.
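The SOR sweep can be sketched in Python (an illustrative example, not code from the course; the function name sor is chosen here):

```python
def sor(A, b, x0, omega, tol=1e-10, max_iter=500):
    """Successive over-relaxation; omega = 1 reduces to Gauss-Seidel."""
    n = len(b)
    x = x0[:]
    for _ in range(max_iter):
        max_change = 0.0
        for i in range(n):
            # Gauss-Seidel value using the latest available components.
            gs = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            new = (1 - omega) * x[i] + omega * gs   # relaxed update
            max_change = max(max_change, abs(new - x[i]))
            x[i] = new
        if max_change < tol:   # stop when the change in x is small
            break
    return x
```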

  • Implementation for Different Methods

    Jacobi Method
    Iterative scheme: xi(k+1) = ( bi − Σ(j≠i) aij·xj(k) ) / aii
    Initial condition/guess: x(0)
    Stopping criteria

    Gauss-Seidel Method
    Iterative scheme: xi(k+1) = ( bi − Σ(j<i) aij·xj(k+1) − Σ(j>i) aij·xj(k) ) / aii
    Initial condition/guess: x(0)
    Stopping criteria
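Both schemes can be sketched in pure Python (illustrative code, not from the course; the helper names jacobi_step, gauss_seidel_step and iterate are chosen here):

```python
def jacobi_step(A, b, x):
    """One Jacobi sweep: every component uses the previous iterate x."""
    n = len(b)
    return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

def gauss_seidel_step(A, b, x):
    """One Gauss-Seidel sweep: components are updated in place as we go."""
    n = len(b)
    x = x[:]
    for i in range(n):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
    return x

def iterate(step, A, b, x0, tol=1e-10, max_iter=500):
    """Repeat a sweep until the change in x falls below tol."""
    x = x0
    for _ in range(max_iter):
        x_new = step(A, b, x)
        if max(abs(u - v) for u, v in zip(x_new, x)) < tol:
            return x_new
        x = x_new
    return x
```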

    Solution Methods — Nonlinear Algebraic Equations

  • Problem Statement

    Given a nonlinear (algebraic or otherwise) function f(x), find the value of x, say x*, that makes f(x*) ≈ 0.
    x* is sometimes called a zero or root of f(x).
    x* can be sought analytically (exact solution) or numerically (approximate solution).

    General considerations in selecting a numerical method:
    Is this a special function that will be evaluated often?
    How much precision is needed?
    How fast and robust must the method be?
    Is the function a polynomial?
    Does the function have singularities?
    There is no single root-finding method that is best for all situations.



  • Basic strategy of root-finding

    Select a range of x.
    Plot the function f(x) versus x.
    Find the intersection of the function f(x) with the x-axis.
    If no intersection is found, select another range of x and repeat steps 2-3.

    [Figure: Graphical Method — f(x) plotted versus x; the root x* lies where the curve crosses the x-axis, between the points x1 and x2 (feasible range), while x3 lies in an infeasible range.]


  • Bracketing Methods

    Principle:
    A function typically changes sign in the vicinity of a root.
    A root is bracketed on the interval [a, b] if f(a) and f(b) have opposite signs.
    However, a sign change occurs at singularities as well as at roots.
    Common Usage:
    Bracketing methods are used to make initial guesses at the roots, not to estimate the values of the roots accurately.

    A simple test for a sign change
    Alternative 1: f(a)·f(b) < 0 ?
    Alternative 2: use the built-in sign function: sign(fa) ~= sign(fb) ?


  • Bisection Algorithm

    Select an interval [a, b] in which a zero has been detected.
    Apply the sign-test procedure (i.e. f(a)·f(b) < 0, or sign(f(a)) ~= sign(f(b))).
    Calculate the mid-point m = (a + b)/2 as the next estimate of the zero.

    Apply the sign test to determine a feasible interval for the new estimate of the zero: test whether the function changes sign on the candidate intervals [a, m] or [m, b].
    If f(m) = 0, the root equals m; terminate the computation.
    Repeat steps 2-3 for the feasible interval.
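The algorithm above can be sketched in Python (an illustrative example, not code from the course; the function name bisect is chosen here):

```python
def bisect(f, a, b, tol=1e-10, max_iter=200):
    """Bisection: assumes f(a) and f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "root is not bracketed"
    for _ in range(max_iter):
        m = (a + b) / 2.0            # mid-point estimate
        fm = f(m)
        if fm == 0 or (b - a) / 2.0 < tol:
            return m
        if fa * fm < 0:              # root lies in [a, m]
            b, fb = m, fm
        else:                        # root lies in [m, b]
            a, fa = m, fm
    return (a + b) / 2.0
```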

  • Bisection Method

    [Figure: Bisection on [a, b] — the mid-point of the bracket is the next estimate of the root.]


  • False-position Algorithm

    Select an interval [a, b] in which a zero has been detected.
    Apply the sign-test procedure (i.e. f(a)·f(b) < 0, or sign(f(a)) ~= sign(f(b))).
    Calculate the intersection point m = b − f(b)·(b − a) / (f(b) − f(a)) as the next estimate of the zero.

    Apply the sign test to determine a feasible interval for the new estimate of the zero: test whether the function changes sign on the candidate intervals [a, m] or [m, b].
    If f(m) = 0, the root equals m; terminate the computation.
    Repeat steps 2-3 for the feasible interval.
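The algorithm above can be sketched in Python (an illustrative example, not code from the course; the function name false_position is chosen here):

```python
def false_position(f, a, b, tol=1e-12, max_iter=200):
    """False position: the new estimate is where the chord through
    (a, f(a)) and (b, f(b)) crosses the x-axis."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "root is not bracketed"
    m = a
    for _ in range(max_iter):
        m = b - fb * (b - a) / (fb - fa)   # chord / x-axis intersection
        fm = f(m)
        if abs(fm) < tol:
            return m
        if fa * fm < 0:                    # root lies in [a, m]
            b, fb = m, fm
        else:                              # root lies in [m, b]
            a, fa = m, fm
    return m
```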

  • False-Position Method

    [Figure: False position — the chord through (a, f(a)) and (b, f(b)) intersects the x-axis at the next estimate of the root.]


  • Convergence Rate

    [Figure: Convergence rate — relative error versus number of iterations for the bisection and false-position methods.]

  • Comparisons of Bisection and False-position

    Similarities:
    Both methods make use of two initial guesses of the root (a lower and an upper bound).
    Both require a sign-test procedure to determine the feasible range for searching for the root.
    Both always converge to the true value within a certain tolerance.
    Differences:
    The new estimate of the root is updated using a different formula/strategy:
    Bisection uses the mid-point strategy.
    False position uses the triangle principle (a linear interpolation).
    In general, the false-position method converges faster than the bisection method. However, for some special functions, the false-position method may be very slow to converge.


  • Open Methods: Principles

    Use a functional approximation (i.e. a straight line, quadratic, or polynomial) to generate a new estimate of the root we wish to find.

    Methods
    Fixed-point iteration
    Newton-Raphson method (a straight-line approximation using gradient information)
    Secant method (a straight-line approximation through two points)
    Muller's method (a quadratic approximation through three points)


  • Bracketing vs Open Methods

    Bracketing methods
    The root is located within a lower and an upper bound (a prescribed interval), and the estimate is updated so that it always eventually converges to the true root within a certain tolerance.

    Open methods
    Use a single starting point (or two starting points that do not necessarily bracket the root) and a formula to find the root, and (hopefully) end up at the true value.
    These methods sometimes diverge or move away from the true root, but when they converge, they usually do so much more quickly than the bracketing methods.


  • Fixed-point Iteration

    Problem: find the root of f(x) = 0.

    Transform f(x) = 0 into the form x = g(x).
    Select an initial estimate of the root, x0, and set i = 0.
    Apply the fixed-point iteration to calculate the next estimate of the root: xi+1 = g(xi).

    Repeat step 3 for i = 1, 2, … until the convergence criterion |xi+1 − xi| < ε is satisfied (ε is a specified tolerance).
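The iteration can be sketched in Python (an illustrative example, not code from the course; the function name fixed_point is chosen here):

```python
def fixed_point(g, x0, tol=1e-10, max_iter=200):
    """Iterate x_{i+1} = g(x_i) until |x_{i+1} - x_i| < tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")
```

For example, f(x) = x² − 2 = 0 can be rearranged as x = (x + 2/x)/2, for which |g'(x)| < 1 near the root, so the iteration converges to √2.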

  • [Figure: fixed-point iteration — one case where the iterates converge and one where they diverge.]

  • Convergent Analysis

    The fixed-point iteration may or may not converge, depending upon the nature of g(x).
    If |g'(x)| < 1 near the root, the fixed-point algorithm converges linearly to the true root.
    If |g'(x)| ≥ 1, the fixed-point algorithm diverges.

    To guarantee convergence, add a checking step to the algorithm. For example:

    dg = derg(x);        % derivative of g(x)
    if abs(dg) < 1
        % apply the fixed-point iteration
    else
        % fixed-point iteration diverges
    end


  • Newton-Raphson

    [Figure: Newton-Raphson — the tangent line at xi crosses the x-axis at the next estimate xi+1.]

  • Newton-Raphson Algorithm

    Select an initial estimate x0 of the root.
    Calculate f(x0) and f'(x0).
    Apply a tangent-line approximation using the gradient information to calculate the next estimate of the zero:

    xk+1 = xk − f(xk) / f'(xk)

    Repeat step 3 for k = 1, 2, … until satisfying the convergence criterion |xk+1 − xk| < ε or |f(xk+1)| < ε.
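The algorithm can be sketched in Python (an illustrative example, not code from the course; the function name newton is chosen here):

```python
def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Newton-Raphson: x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if dfx == 0:
            raise ZeroDivisionError("f'(x) = 0: hit a singular point")
        x_new = x - fx / dfx
        if abs(x_new - x) < tol:     # stopping criterion on the step size
            return x_new
        x = x_new
    raise RuntimeError("did not converge")
```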

  • Characteristics

    Requires only a single initial estimate, but the derivative of the function must be known.
    Usually converges faster (quadratically convergent!), but may hit a singular point where f'(xk) = 0.
    As with the fixed-point iteration, the convergence of the Newton-Raphson method depends on the nature of the function and on the initial guess.
    The initial guess is expected to be sufficiently close to the true value.
    A good initial guess is helped by:
    physical understanding of the problem, or
    the graphical method.


  • Secant method

    [Figure: Secant method — the straight line through (xi, f(xi)) and (xi+1, f(xi+1)) intersects the x-axis at the next estimate xi+2.]

  • Secant Algorithm

    Select two initial estimates x0 and x1.
    Calculate the function values at those points, f(x0) and f(x1).
    Apply a straight-line approximation passing through the two points to calculate the next estimate of the zero:

    xk+1 = xk − f(xk)·(xk − xk−1) / (f(xk) − f(xk−1))

    Repeat step 3 for k = 2, 3, … until satisfying the convergence criterion.
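The algorithm can be sketched in Python (an illustrative example, not code from the course; the function name secant is chosen here):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant method: the derivative is replaced by a two-point slope."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # straight-line estimate
        if abs(x2 - x1) < tol:
            return x2
        x0, f0, x1, f1 = x1, f1, x2, f(x2)     # keep the two newest points
    raise RuntimeError("did not converge")
```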


  • Comparison with False-position method

    Requires two initial guesses, but the points do not need to bracket the root.
    The estimate of the root at the second iteration is different due to the nature of the derivative approximation.

    [Figures: the secant and false-position updates on the same function, showing the different second-iteration estimates.]

  • Comparison with Newton-Raphson method

    Does not require the derivative of the function.
    The secant method is suitable if we do not know the function explicitly (we just have data points).
    Requires two initial guesses.
    Approximates the derivative of the function using the two data points:

    f'(xk) ≈ ( f(xk) − f(xk−1) ) / ( xk − xk−1 )

  • At the first iteration, both methods produce the same estimate. After that, each method has a different estimate.


  • Difficulties with multiple root problems

    The function does not change sign at even-multiplicity roots, so the bracketing strategy cannot be applied!
    Not only f(x) but also f'(x) goes to zero at the root — a possible problem for the Newton-Raphson and secant methods!
    SOLUTION: use a modified Newton-Raphson method.
    Alternative one: if the root has multiplicity m, the updated estimate follows:

    xk+1 = xk − m·f(xk) / f'(xk)

    Alternative two: use the ratio of the function to its derivative, u(x) = f(x)/f'(x), and then apply the Newton-Raphson formula to this ratio function.
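Alternative one can be sketched in Python (an illustrative example, not code from the course; the function name modified_newton is chosen here, and the multiplicity m is assumed known):

```python
def modified_newton(f, df, x0, m, tol=1e-12, max_iter=200):
    """Modified Newton-Raphson for a root of known multiplicity m:
    x_{k+1} = x_k - m * f(x_k) / f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if fx == 0:                  # landed exactly on the root
            return x
        x_new = x - m * fx / dfx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

For f(x) = (x − 1)², standard Newton converges only linearly, while the modified update with m = 2 recovers fast convergence.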

  • Systems of nonlinear algebraic equation

    Find x that solves the following set of equations:

    f1(x1, x2, …, xn) = 0
    f2(x1, x2, …, xn) = 0
    …
    fn(x1, x2, …, xn) = 0

    or, in compact form, F(x) = 0.


  • Newton-Raphson Method

    Use the Jacobian (matrix of partial derivatives) of the system. At each stage of the iteration process, an updated approximate solution vector x(k+1) is found from the current approximate solution x(k) according to the equation

    x(k+1) = x(k) − J(x(k))⁻¹ · F(x(k))

    where J(x) is the Jacobian matrix at x, defined as Jij = ∂fi/∂xj.
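The update can be sketched in Python for a two-equation system (an illustrative example, not code from the course; the function name newton_system_2d is chosen here, and the 2x2 linear step is solved by Cramer's rule for simplicity — larger systems would use Gauss elimination or LU factorisation instead of an explicit inverse):

```python
def newton_system_2d(F, J, x0, tol=1e-12, max_iter=100):
    """Newton-Raphson for two equations: solve J(x) dx = -F(x) each step."""
    x, y = x0
    for _ in range(max_iter):
        f1, f2 = F(x, y)
        (a, b), (c, d) = J(x, y)          # Jacobian entries
        det = a * d - b * c
        dx = (-f1 * d + f2 * b) / det     # Cramer's rule for J [dx, dy]' = -F
        dy = (-f2 * a + f1 * c) / det
        x, y = x + dx, y + dy
        if max(abs(dx), abs(dy)) < tol:
            return x, y
    raise RuntimeError("did not converge")
```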


  • Muller's Method

    [Figure: a parabola fitted through (x0, f(x0)), (x1, f(x1)) and (x2, f(x2)); its root nearest x2 is the next estimate of the root.]

  • Muller's Algorithm

    Select three points [x0, f(x0)], [x1, f(x1)] and [x2, f(x2)].
    Compute the coefficients a, b and c of the parabola through them.
    Apply the quadratic formula to calculate the next estimate of the zero.

    Repeat steps 2-3 until satisfying the convergence criterion.

    Both real and complex roots can be located.

  • Which point is discarded in Muller's algorithm?

    Two general strategies are typically used:

    If only real roots are being located, we choose the two original points that are nearest to the new root estimate, x3.

    If both real and complex roots are being evaluated, a sequential approach is employed: just as in the secant method, x1, x2 and x3 take the place of x0, x1 and x2.
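The sequential strategy can be sketched in Python (an illustrative example, not code from the course; the function name muller is chosen here, and cmath is used so that complex roots can be located):

```python
import cmath

def muller(f, x0, x1, x2, tol=1e-12, max_iter=100):
    """Muller's method: fit a parabola through three points and take
    the root of the parabola nearest x2 as the next estimate."""
    for _ in range(max_iter):
        f0, f1, f2 = f(x0), f(x1), f(x2)
        h0, h1 = x1 - x0, x2 - x1
        d0, d1 = (f1 - f0) / h0, (f2 - f1) / h1
        a = (d1 - d0) / (h1 + h0)             # quadratic coefficient
        b = a * h1 + d1
        c = f2
        disc = cmath.sqrt(b * b - 4 * a * c)  # may be complex
        # Pick the denominator with larger magnitude for stability.
        denom = b + disc if abs(b + disc) > abs(b - disc) else b - disc
        x3 = x2 - 2 * c / denom
        if abs(x3 - x2) < tol:
            return x3
        x0, x1, x2 = x1, x2, x3               # discard the oldest point
    raise RuntimeError("did not converge")
```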


  • MATLAB Built in Function (1)

    The built-in fzero function is a hybrid method that combines bisection, the secant method and inverse quadratic interpolation to find a root of f(x) = 0

    Syntax:

    x = fzero(fun,x0)

    x0 can be a scalar or a two-element vector.
    If x0 is a scalar, fzero tries to create its own bracket.
    If x0 is a two-element vector, fzero uses the vector as a bracket.

  • MATLAB Built in Function (2)

    The built-in fsolve function (Optimization Toolbox) is an M-file function that solves a system of nonlinear equations F(x) = 0:

    Syntax:

    x = fsolve(fun,x0)

    x0 is a vector of initial estimates of the roots

