Discrete Hours Labour Supply Modelling: Specification, Estimation and Simulation - WP 03/20

Appendix: An iterative solution procedure

Suppose a function $f(\theta)$, with first derivative $g(\theta) = f'(\theta)$, needs to be maximised with respect to $\theta$. To find the maximum, the first-order condition $g(\theta) = 0$ needs to be satisfied. Most iterative methods are based on some form of Newton's method. Consider finding the root of the equation $g(\theta) = 0$, where $g$ takes the form shown in Figure 4. Take an arbitrary starting point $\theta_0$ and draw the tangent, with slope $g'(\theta_0)$.

Figure 4 – Newton’s method

By approximating the function by its tangent, the new value $\theta_1$ is given by the point of intersection of this tangent with the horizontal axis. Selecting $\theta_1$ as the next starting point and drawing the tangent at this new point on $g$, with slope $g'(\theta_1)$, leads quickly to the required root. From the triangle in Figure 4, it can be seen that $g'(\theta_0) = g(\theta_0)/(\theta_0 - \theta_1)$, so that:

(39)    $\theta_1 = \theta_0 - \dfrac{g(\theta_0)}{g'(\theta_0)}$

Hence, starting from $\theta_0$, the sequence of iterations follows:

(40)    $\theta_{k+1} = \theta_k - \dfrac{g(\theta_k)}{g'(\theta_k)}$

until convergence is reached, when $\left| \theta_{k+1} - \theta_k \right| < \varepsilon$, with $\varepsilon$ depending on the accuracy required. This works best when the function is smooth, and it is necessary to check (by picking different starting points) that there are not multiple roots, in which case convergence could be to a local rather than a global maximum. In addition, the second derivative $f''(\theta)$ needs to be negative at the maximum.
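The scalar iteration in equation (40) can be sketched as follows; this is a minimal illustration, not the paper's own implementation, and the function names and the quadratic test objective are assumptions for the example:

```python
def newton_maximise(f_prime, f_double_prime, theta0, tol=1e-8, max_iter=100):
    """Maximise f by finding a root of g = f' via the Newton iteration (40)."""
    theta = theta0
    for _ in range(max_iter):
        # Equation (40): theta_{k+1} = theta_k - g(theta_k) / g'(theta_k)
        theta_new = theta - f_prime(theta) / f_double_prime(theta)
        # Convergence: |theta_{k+1} - theta_k| < epsilon
        if abs(theta_new - theta) < tol:
            return theta_new
        theta = theta_new
    raise RuntimeError("Newton iterations did not converge")

# Illustrative objective: f(theta) = -(theta - 2)^2, maximised at theta = 2,
# with f'(theta) = -2(theta - 2) and f''(theta) = -2 < 0 as required.
theta_hat = newton_maximise(lambda t: -2.0 * (t - 2.0), lambda t: -2.0, theta0=0.0)
```

For a quadratic objective the tangent approximation is exact, so the iteration reaches the maximum in a single step; checking that `f_double_prime` is negative at the solution confirms a maximum rather than a minimum.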

In the present context, Newton's method is easily adapted to deal with a vector of parameters $\theta$. An iterative method involves repeatedly solving the following matrix equation, where $\hat{\theta}_k$ now denotes the vector of parameters in the $k$th iteration:

(41)    $\hat{\theta}_{k+1} = \hat{\theta}_k - \left[ \dfrac{\partial^2 f}{\partial \theta \, \partial \theta'} \right]^{-1} \dfrac{\partial f}{\partial \theta}$

and the first and second derivatives are evaluated at the parameters $\hat{\theta}_k$. Furthermore, it can be shown that the inverse of the negative of the matrix of second derivatives at the final iteration provides an estimate of the variance-covariance matrix of the parameter estimates.
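The vector iteration in equation (41) and the resulting variance-covariance estimate can be sketched with NumPy; the quadratic objective below is an assumption chosen so the gradient and Hessian are available in closed form:

```python
import numpy as np

def newton_raphson(grad, hess, theta0, tol=1e-8, max_iter=100):
    """Vector Newton iteration (41), with gradient and Hessian
    evaluated at the current estimate theta_k."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        # Solve H step = g rather than forming the inverse explicitly
        step = np.linalg.solve(hess(theta), grad(theta))
        theta_new = theta - step
        if np.max(np.abs(theta_new - theta)) < tol:
            # At the maximum the Hessian is negative definite; the inverse
            # of its negative estimates the variance-covariance matrix.
            vcov = np.linalg.inv(-hess(theta_new))
            return theta_new, vcov
        theta = theta_new
    raise RuntimeError("Newton iterations did not converge")

# Illustrative objective: f(theta) = -0.5 theta' A theta + b' theta,
# maximised where A theta = b, with Hessian -A (negative definite).
A = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -1.0])
theta_hat, vcov = newton_raphson(lambda t: b - A @ t, lambda t: -A, np.zeros(2))
```

Because the objective is exactly quadratic the iteration converges in one step, and the returned `vcov` equals the inverse of $A$, the negative of the Hessian at the solution.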
