
Object Oriented (Dynamic) Programming: Closing the “Structural” Estimation Coding Gap

Published in Computational Economics.

Abstract

This paper discusses how to design, solve and estimate dynamic programming models using the open source package niqlow. Reasons are given for why such a package has not appeared earlier and why the object-oriented approach followed by niqlow seems essential. A worked example starts with basic coding, then expands the model, applies different solution methods, and finally estimates parameters from data. The niqlow approach is used to organize the empirical DP literature differently from traditional surveys, which may make it more accessible to new researchers. Features for efficiency and customization are also discussed.



Notes

  1. Recent reviews of the literature include Aguirregabiria and Mira (2010); Keane et al. (2010). Recent advances in solution methods include Imai et al. (2009), Arcidiacono and Miller (2011), Kasahara and Shimotsu (2012) and Aguirregabiria and Magesan (2013).

  2. Kirby (2017, 2021) report 14 replications of dynamic macro papers using it. QuantEcon (https://quantecon.org/) is a collective effort to create useful tools.

  3. Whereas Dynare and VI Toolkit are Matlab packages, niqlow is written in Ox, which is free for research purposes and runs on most systems. Current niqlow syntax is used here and the code is included in the examples in the niqlow distribution. The current version has no graphical user interface or menu system, but the OOP approach makes it straightforward to build one.

  4. If Wolpin (1984) had followed MaCurdy (1981) it would have relied on a panel probit model. However, unlike Euler-equation-based models, the forward-looking factors in a discrete choice model cannot be isolated to a single Lagrange multiplier. An "approximate" structural approach in Wolpin (1984) that avoided a nested solution would likely have been a poor approximation and possibly more costly to compute than the exact solution.

  5. Statistical packages such as Stata rely on OOP in the underlying code, but users are somewhat sheltered from OOP concepts when using them. No other platform I am aware of is designed as a general platform for doing model-based empirical economics, whether object-oriented or not.

  6. In the context of solving economic models, "data" refers here not just to the observations in a statistical analysis but also parameter values, prices, choices, state variables, etc. These are the quantities that the program is processing in order to solve and estimate a model.

  7. The emphasis in niqlow is placed on discrete actions and discrete states, but some elements of the core code include continuous state variables, and continuous choices can be incorporated. Methods for continuous time models can be added as well.

  8. The shock vector \(\zeta \) can be multiplicative instead of additive. The additive form is more common and is the default in niqlow.

  9. It is possible to describe dynamic programming without defining \(v\left(\alpha ,\theta \right).\) However, empirical DP explains probabilistic choices, usually by integrating over the additive shock. The values of individual choices are needed to compute choice probabilities, beyond defining or solving the DP.

  10. niqlow also includes model classes based on normally distributed additive shocks, both ex ante and ex post.

  11. This use of static variables to replace action and state variables is critical to memory management. If they were not static, each point in the state space would have its own version of the variables duplicated across the state space \(\Theta .\) As static members they do not increase memory requirements along with the state space. The state-specific (non-static) members which expand with the size of \(\Theta \) are kept to a minimum.

  12. An action counter has a deterministic transition. We can also express this as a stochastic process \(P(s^\prime ) = I_{s^\prime = s+I_{a=k}}.\) Each state variable class has a Transit() method which returns a pair of values: a vector of feasible integer values next period and a matrix of transition probabilities corresponding to feasible actions (rows) and feasible state values next period (columns). Because M was added to the model its Transit() function will be called at each state along with all other transitions. The transitions for all state variables are combined to form \(P(\theta ^{\,\prime };\alpha ,\theta )\) which ends up as a vector of feasible state indices and a matrix of probabilities.

  13. In between normal aging and stationarity are mixed and random clocks. For example, a model may have a sequential phase during which t matters but eventually reach a stationary phase:

    $$\begin{aligned} t' = \begin{cases} t+1 &{} \hbox {if } t < T-1\\ T-1 &{} \hbox {otherwise} \end{cases} \end{aligned}$$

    In an ordinary lifecycle model \(T-1\) is a final period, but under this clock the agent stays in the final period forever. Also, a lifecycle model might incorporate early mortality:

    $$\begin{aligned} t' = \begin{cases} t+1 &{} \hbox {prob. } 1-\lambda (\theta )\\ T-1 &{} \hbox {prob. } \lambda (\theta ) \end{cases} \end{aligned}$$

    The last period \(T-1\) is death which may have an intrinsic value (such as bequests). The current value function depends on values for two different future times, \(t+1\) and \(T-1.\) These and other clocks are built into niqlow.

  14. In addition, niqlow stores the transition \(P\left(\theta ^{\,\prime };\alpha ,\theta \right)\) at each state because these are needed for simulation and prediction. The transitions are stored for each \(\theta \) using a sparse method that tracks only feasible new state indices and the vector of probabilities conditional on choices. The transitions of the IID vectors \(\epsilon \) and \(\eta \) are stored once and combined with the \(\theta \) transition to determine the full state-to-state process.

  15. In complex models there are other ways states become unreachable. How the user specifies this is illustrated in Sect. 4.5.

  16. Computing EV naively involves a large matrix calculation that includes mainly zeros. Instead, niqlow uses Ox-specific syntax to select only the relevant matrix elements to process. Using an interpreted language such as Ox involves overhead, but features of the syntax such as this can result in code that is fast as well as simple and general.

  17. Since outcomes and predictions require that solutions be available for the problem, the algorithms must solve each group’s problem and process it before moving to the next one. With heterogeneity the simple VISolve() function can only be used to solve the problems. Use of the solved solutions requires nesting a solution method within the use of the solution, as discussed in Sect. 4.4.

  18. Other important qualifiers include the presence of IID state variables and terminal states at which Bellman iteration is not applied. niqlow allows the user to control these and other details of the model.

  19. DP models coded in FORTRAN and relying on an array of indices to represent the state vector run into a hard limit of 7 subscripts or counts of state variables. niqlow avoids this altogether by mapping a multidimensional space into one dimension. That is, if \(\theta \) is a vector of state variable indices, then the state’s one-dimensional index is \(I\theta \), where I is a row vector of offsets that depend on the number of values each state variable takes on. In addition, as in other interpreted languages, the length of a vector such as \(\theta \) can be set dynamically at runtime in Ox.

  20. One approach for computing likelihood with unobserved states is to use simulation of outcomes based on optimal choices. This is an effective way to calculate the likelihood for a given set of parameters. The complication is ensuring that the simulated value is continuous in estimated parameters.
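The offset calculation in note 19 is simple to sketch. Assuming state variable i takes on \(N_i\) values, a short Python illustration (not niqlow code; the function names are invented for exposition) of the mapping from a state vector to its one-dimensional index:

```python
def offsets(sizes):
    """Row vector I of offsets for a state space with the given dimension sizes.

    The one-dimensional index of a state vector theta is then dot(I, theta),
    with the last state variable varying fastest.
    """
    I, mult = [], 1
    for n in reversed(sizes):
        I.append(mult)
        mult *= n
    return list(reversed(I))

def state_index(I, theta):
    """One-dimensional index of the state vector theta given offsets I."""
    return sum(i * t for i, t in zip(I, theta))

# Three state variables taking 3, 4 and 5 values: 60 states, indices 0..59.
I = offsets([3, 4, 5])
```

With `I = [20, 5, 1]`, the largest state vector `[2, 3, 4]` maps to index 59, the last of the 60 points, and the vector length can be chosen at runtime exactly as the note describes.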

References

  • Aguirregabiria, Victor, & Magesan, A. (2013). Euler equations for the estimation of dynamic discrete choice structural models. Advances in Econometrics, 31, 3–44.


  • Aguirregabiria, Victor, & Mira, Pedro. (2002). Swapping the nested fixed point algorithm: A class of estimators for discrete Markov decision models. Econometrica, 70(4), 1519–43.


  • Aguirregabiria, Victor, & Mira, Pedro. (2010). Dynamic discrete choice structural models: A survey. Journal of Econometrics, 156(1), 38–67.


  • Arcidiacono, Peter, & Miller, Robert A. (2011). Conditional choice probability estimation of dynamic discrete choice models with unobserved heterogeneity. Econometrica, 79(6), 1823–1867.


  • Barber, Michael, & Ferrall, Christopher. (2021). College choice, credit constraints and educational attainment. In progress; current version available from https://ferrall.github.io/OODP/

  • Eckstein, Zvi, & Wolpin, Kenneth. (1989). The specification and estimation of dynamic stochastic discrete choice models: A survey. Journal of Human Resources, 24(4), 562–598.


  • Ferrall, Christopher. (2003). Estimation and inference in social experiments. Manuscript, Queen’s University working paper.

  • Ferrall, Christopher. (2005). Solving finite mixture models: Efficient computation in economics under serial and parallel execution. Computational Economics, 25, 343–379.


  • Ferrall, Christopher. (2021). Was Harold Zurcher myopic after all? Replicating Rust’s engine replacement estimates. Queen’s University working paper 1467. https://www.econ.queensu.ca/research/working-papers/1467.

  • Hotz, V. Joseph., & Miller, Robert A. (1993). Conditional choice probabilities and the estimation of dynamic models. The Review of Economic Studies, 60(3), 497–529.


  • Imai, Susumu, Jain, Neelan, & Ching, Andrew. (2009). Bayesian estimation of dynamic discrete choice models. Econometrica, 77(6), 1865–1899.


  • Kasahara, Hiroyuki, & Shimotsu, Katsumi. (2012). Sequential estimation of structural models with a fixed point constraint. Econometrica, 80(5), 2303–2319.


  • Keane, Michael P., & Wolpin, Kenneth I. (1994). The solution and estimation of discrete choice dynamic programming models by simulation and interpolation: Monte Carlo evidence. The Review of Economics and Statistics, 76(4), 648–672.


  • Keane, Michael P., Petra E. Todd, and Kenneth I. Wolpin 2010. The structural estimation of behavioral models: Discrete choice dynamic programming methods and applications. In: Orley Ashenfelter and David Card (Eds.), Handbook of labor economics, Volume 4a. pp. 331–461.

  • Kirby, R. A. (2017). Toolkit for value function iteration. Computational Economics, 49, 1–15.


  • Kirby, R. A. (2021). Quantitative macro: Lessons learnt from fourteen replications. Manuscript, University of Wellington.

  • MaCurdy, Thomas. (1981). An empirical model of labor supply in a life-cycle setting. Journal of Political Economy, 89(6), 1059–1085.


  • Rust, John. (1987). Optimal replacement of GMC bus engines: An empirical model of Harold Zurcher. Econometrica, 55(5), 999–1033.


  • Rust, John 2000. Nested fixed point algorithm documentation manual, version 6, https://editorialexpress.com/jrust/nfxp.pdf.

  • Wolpin, Kenneth. (1984). An estimable dynamic stochastic model of fertility and child mortality. Journal of Political Economy, 92(5), 852–874.



Author information


Corresponding author

Correspondence to Christopher Ferrall.

Ethics declarations

Conflict of Interest

The author did not receive support from any organization for the submitted work. The author has no relevant financial or non-financial interests to disclose.


Appendices

Appendix A. Example of PP vs. OOP Coding

To illustrate differences between OOP and PP, consider a package written by a programmer to be used by economists (users): first using the PP paradigm only, then using OOP. The package, named Marshall, solves for Marshallian demand for a consumer with utility U(x) defined on a vector x, given a price vector p and income m:

$$\begin{aligned} x^{\star}(p,m;U) \quad \equiv \quad \mathop{\arg \max }_{x:\ px\le m} U(x). \end{aligned}$$

Most readers have probably coded an objective and then called a built-in optimization procedure to optimize that function. Marshall is a specialized version of that general problem.

The PP package documentation explains how users should code U() in order to interact with tools in the package. The user codes u(x), and sends it to a built-in procedure of the form demand(u,p,m). That procedure uses algorithms to compute \(x^{\star} (p,m).\) Suppose the user wants to use the Cobb-Douglas function, \(U(x) = {\sum } \ln x_i.\) Using "pseudo-code" the key parts of the user’s program might look like:

[Code listing u (image in the published article)]
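Since the published listing appears only as an image, here is a runnable Python stand-in for the PP pattern (the grid-search optimizer and all names are invented for illustration; they are not the Marshall pseudo-code):

```python
import math

def demand(u, p, m, grid=1001):
    """PP-style demand(u, p, m): maximize u(x) on the budget line p.x = m.

    A naive grid search over the share of income spent on good 1; the user
    supplies the utility function u, exactly as in the PP version of Marshall.
    """
    best, best_x = -math.inf, None
    for i in range(1, grid):            # skip zero consumption (log utility)
        s = i / grid
        x = [s * m / p[0], (1 - s) * m / p[1]]
        if (val := u(x)) > best:
            best, best_x = val, x
    return best_x

def u(x):
    """User-coded Cobb-Douglas utility, passed to the package procedure."""
    return sum(math.log(xi) for xi in x)

xstar = demand(u, [1.0, 2.0], 10.0)     # analytic optimum here is (5, 2.5)
```

The key PP feature is visible: the user's function `u` is an explicit argument to `demand()`, which is exactly the coupling the OOP version removes.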

Now consider the OOP version of Marshall. The programmer might define a class for a consumer:

[Code listing v (image in the published article)]

The syntax is pseudo code similar to actual OOP languages, including Ox. The method u(x) belongs to the Consumer class and does the work of computing utility. The budget parameters are stored as members of the class. These will be set by passing them to the budget() method. The method demand() is the same as the PP procedure above, but it will get the information it needs from the data members rather than from arguments. It stores the result in the member xstar.

The package comes with the Cobb-Douglas function set as the default to demonstrate the package without any coding. Now u() is coded as a method of the Consumer class:

[Code listing w (image in the published article)]

Unlike an ordinary function, the code for u() has the prefix of the class it belongs to. Code for the other methods would also be part of the package. User code to create a Consumer object, set the budget to already-defined values, and solve for \(x^{\star} \) might look like this:

[Code listing x (image in the published article)]

Here the new operator makes an object from the Consumer template and stores it in the variable agent. The code above would use the built-in utility and compute quantity demanded at the prices and income sent to the budget() method. The syntax object -> method() is a common way to invoke a method for a particular object. That is, instead of sending u to demand() as in the PP approach, the data specific to agent is automatically available to the demand() method belonging to agent.

The code so far uses a built-in utility. To use, say, a CES utility the user creates a class derived from Consumer.

[Code listing y (image in the published article)]

The first line shows that CES is a child of Consumer. The new class does not declare its own demand() method, because CES inherits the version from Consumer. The user also provides a method that is called to create a new object. This constructor has the same name as the class in the pseudo code. The CES parameter is passed to it and stored in CES.a, ready to be used by CES::u().

When demand() is invoked, the user-provided CES::u() is called instead of the default version written by the programmer. The user has not changed, and perhaps cannot even see, the original code for demand(). This is because the programmer marked Consumer::u() as virtual. By doing so the programmer gives the user a controlled ability to change the underlying code through insertion of a replacement function. In the PP package this injection of code was accomplished by passing u to the demand function. When many functions need to be replaced in many algorithms the PP framework can become unwieldy and unreliable. The OOP approach scales more efficiently with problem complexity for both the programmer and the user. With OOP it is easier to ensure the right data and the right functions are being used within the package.
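The OOP pattern described above can be rendered in runnable Python (again a sketch with invented names, mirroring but not reproducing the Marshall pseudo-code; Python methods are overridable by default, playing the role of virtual):

```python
import math

class Consumer:
    """OOP Marshall: budget data are members; u() is overridable ('virtual')."""
    def budget(self, p, m):
        self.p, self.m = p, m                   # store budget as data members
    def u(self, x):
        """Default utility: Cobb-Douglas."""
        return sum(math.log(xi) for xi in x)
    def demand(self, grid=1001):
        """Maximize self.u on the budget line; store the result in self.xstar."""
        best = -math.inf
        for i in range(1, grid):
            s = i / grid
            x = [s * self.m / self.p[0], (1 - s) * self.m / self.p[1]]
            if (val := self.u(x)) > best:
                best, self.xstar = val, x

class CES(Consumer):
    """Child class: overrides u(); inherits demand() unchanged."""
    def __init__(self, a):
        self.a = a                              # CES parameter stored on creation
    def u(self, x):
        return sum(xi ** self.a for xi in x) ** (1 / self.a)

agent = CES(0.5)                                # new object from the CES template
agent.budget([1.0, 2.0], 10.0)
agent.demand()                                  # calls CES.u, not Consumer.u
```

Note that `demand()` was never touched: overriding `u()` in the child class is enough to redirect the inherited algorithm, which is the essence of the virtual-method design the text describes.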

The programmer can create a taxonomy of classes for the user to choose from. In this simple case, Marshall might have not just a single Consumer class but child classes for different families of utility functions. The user can then start with one of those classes to specialize or extend it for their model. An OOP package can provide the user with a menu of options, an important part of the niqlow approach to empirical DP.

Appendix B: Code Segments and Additional Explanation of niqlow Features.

1.1 CV() and AV() in Code Segment E

If the action counter M were an ordinary programming counter taking on values between 0 and 39 then the statement E.1 would not require sending it to CV(). A simple assignment D=M would set D to the current value of M. However, M is an object of a class so it is not the same as its value. Instead, the current or counter value of M is a member of the object. Classes in niqlow that represent DP variables have a member v that holds the counter or current value of the variable. The internal code sets the value of v to correspond to the current state \(\theta \) before code such as Utility() is called. The function CV(M) returns M.v. So either D=M.v; or D=CV(M); is how user code would set D to the value of M at the current state.

Recall that utility is treated as a vector valued function corresponding to the feasible set \(A(\theta ).\) Since m is an action variable its current value is not a scalar at \(\theta .\) In this simple one-choice model the current value is always the same: \(CV(m) = \left(\begin{array}{l}0\\ 1\end{array}\right).\) If other actions were added, or if constraints were imposed on feasible choices, the current value of m would have a different length because the number of distinct actions \(\alpha \) would change.

Since M is a simple count variable, it takes on values like a loop counter or index. The earnings shock e is also like a loop counter, so its counter value ranges from 0 to 14. However, e is a discretized normal random variable, and the counting values are associated with both positive and negative real numbers, i.e. quantiles of the standard normal distribution, such as \(-1.282,\) the 10th percentile of N(0, 1). The user’s code could carry out these transformations of the integer value e.v, but niqlow can track actual values of variables for the user.

The actual values of an object are stored in the vector member actual. Thus the actual value at any point is actual[v]: the current value is an index into the actual vector. The function AV() retrieves this value, so when \(CV(e)=3,\) AV(e) might equal \(-1.282.\) For M the actual vector is (0 1 ... 39) and actual[v]=v. That is, the default is AV(s)=CV(s). Only in a case like a discretized normal will there be a difference. The user can set actual values for their state and action values and can make them dependent on structural parameters.
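The v/actual mechanism can be mocked in a few lines of Python (purely illustrative: the class, the three-point quantile vector, and the function names are stand-ins, not the niqlow API, which uses 15 quantile points for e):

```python
class Discrete:
    """Mock of a niqlow-style discrete variable: index v plus actual values."""
    def __init__(self, N, actual=None):
        self.v = 0                                  # current (counter) value
        # default: actual values equal the index, so AV(s) == CV(s)
        self.actual = list(range(N)) if actual is None else actual

def CV(s):
    return s.v                  # current value: just the member v

def AV(s):
    return s.actual[s.v]        # actual value: v indexes into the actual vector

M = Discrete(40)                                   # action counter, AV == CV
e = Discrete(3, actual=[-1.282, 0.0, 1.282])       # illustrative quantiles only
```

For the counter M the two functions agree, while for the discretized shock e the index 0 retrieves the real-valued quantile \(-1.282\), mirroring the distinction drawn in the text.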

1.2 Semi-Exogenous State Variables

An example of an IID process that could be sorted into \(\eta \) but not \(\epsilon \) is a wage offer h with on-the-job search. If the offer is accepted it determines earnings this period and the existing wage next period. The existing wage is a state variable placed in \(\theta \) because its transition depends on choices. Next period a new IID outside shock h is realized as well, but current h affects the transition beyond its influence on the action \(\alpha ,\) so it must be placed in \(\eta \) not \(\epsilon .\)

[Code listing z (image in the published article)]

1.3 Additional Actions and Restricted Choices

Starting with the basic labor supply model suppose the user wants to add a choice to attend school or not (s) and a state variable to track accumulated schooling: \(S^\prime = S + s.\) The agent cannot attend school and work in the same period, so the choice vector and feasible set are now

$$\begin{aligned} \alpha = \left(\begin{array}{ll}s&a\end{array}\right) \in A(\theta ) \equiv \left\{ \alpha :\ s*a = 0\right\} . \end{aligned}$$

Although initialization and space creation can each occur only once, in between them new variables can be added to the vectors more than once. So the extended build is simply:

[Code listing aa (image in the published article)]

The base version added m to the action vector. This adds s to it. S is an action counter like M but limited to 8 years of additional schooling to reduce the size of the state space.

The user must tell niqlow to impose the condition that the agent can either work or study but not both. This creates an additional trimming of unreachable states: \(M+S\le t.\) In this approach the user has to impose this extra condition on reachable states. The user replaces two built-in virtual methods with their own versions:

[Code listing ab (image in the published article)]

The first returns a vector of ones and zeros that indicates whether an action \(\alpha \) is feasible at the current state \(\theta .\) It says the product of each row of the action vector must be 0. Ox syntax allows the expression to closely match the definition of \(A(\theta ).\) The second returns a scalar 0 or 1 to indicate whether the current state is reachable from initial conditions. It needs to know the current value of t, which up until now was not required. Since the clock is stored internally, niqlow places its current value in the I class, so I::t is always available, as are other indices of the current state.
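The two conditions are simple enough to sketch in Python (an illustrative analogue of the two virtual methods, not the niqlow versions; the action matrix layout is assumed):

```python
def feasible_actions(A):
    """Row-wise mask over the action matrix A (rows are (s, a) pairs).

    An action is feasible when schooling and work are not chosen together,
    i.e. when the product s*a equals 0, matching the definition of A(theta).
    """
    return [1 if s * a == 0 else 0 for (s, a) in A]

def reachable(M, S, t):
    """A state is reachable only if accumulated work experience M and
    schooling S together fit within the t periods elapsed so far."""
    return 1 if M + S <= t else 0

A = [(0, 0), (0, 1), (1, 0), (1, 1)]   # all combinations of (school, work)
```

The mask rules out only the row `(1, 1)`, and the reachability test trims states where \(M+S > t\), the two restrictions described above.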

1.4 Augmented State Variables

The current library of pre-defined state variable types in niqlow includes counters, accumulators, lagged values, durations, and discrete jump processes. A user can also provide a Markov transition matrix for an arbitrary process. Many estimated DP models contain state variables that customize these basic versions. For example, some models may "freeze" a state variable at its current value after some period \(t^{\star} .\) In some models a state variable stops being relevant to the agent’s problem at some point. For example, a model of schooling and work might track credits earned while still in school, but once out in the labor market credits no longer matter and tracking them is inefficient. These are called augmented state variables in niqlow.

Here are 3 augmented state variables:

[Code listing ac (image in the published article)]

The first one starts with the schooling variable added to the labor supply model. Instead of adding it directly, it is augmented to freeze at its value from \(t=15\) onward. The base variable x is created and sent to Freeze, which then wraps the augmented transition rules around the base transition.

State variable y augments a base state variable b (not shown here) so that its value resets to 0 whenever the agent sets the action variable a to 1 (also defined elsewhere). This is a special case of the general Triggered() augmentation. Several triggers besides the simple reset are already coded. Finally, z is a double augmentation of a state variable d and another state variable tvar. First, when tvar=1 z will reset to 5. From \(t=20\) onward the value of z is not tracked because of the ForgetAtT augmentation. Forgetting a state variable means its value is simply \(CV(z)=0\) from then on, which avoids expanding the state space unnecessarily.
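The wrapping idea behind augmentations like Freeze can be sketched in Python (an illustrative decorator over a base transition function; the dict-valued transition format and all names are assumptions, not niqlow internals):

```python
def freeze_at(Tstar, base_transit):
    """Augment a base transition: from t >= Tstar the variable keeps its value.

    base_transit(s, t) returns a dict {next_value: probability}; the augmented
    rule defers to it before Tstar and is degenerate afterwards.
    """
    def transit(s, t):
        if t >= Tstar:
            return {s: 1.0}            # frozen: current value carried forward
        return base_transit(s, t)
    return transit

counter = lambda s, t: {s + 1: 1.0}    # base rule: a deterministic counter
frozen = freeze_at(15, counter)        # schooling frozen from t = 15 onward
```

Before period 15 the wrapped variable behaves exactly like the base counter; from period 15 its transition collapses to staying put, which is the "wraps the augmented transition rules around the base transition" pattern described above.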

1.5 Complete Code and Output for the Labor Supply Example

This Ox code is available in the examples folder in the niqlow download. A few lines differ from the code in the main body to account for the reservation wage extensions (Fig. 5).

[Code listing ad (image in the published article)]
Fig. 5: Predictions for the Labor Supply Model

How can we confirm that these results are correct? First, it is easier to check the CCP and EV values at each state than these averaged predictions. However, in general the best way to confirm results is to set parameters of the problem so that output can be compared to known true values.

In this model we can push the smoothing parameter from 1.0 to near 0 and to very large values:

[Code listing ae (image in the published article)]

Make the agent myopic so that only current utility matters. CCPs are easy to compute at any state:

[Code listing af (image in the published article)]

Make the environment static by eliminating the effect of endogenous states:

[Code listing ag (image in the published article)]

Once the extremes are confirmed we can deform the problems slightly and see that the output moves in the right direction.

Unlike purpose-built code, in niqlow these kinds of tests use the same underlying code for all models. In addition, the behavior of a state variable class, such as ActionCounter, can be confirmed in a small test program. Then it is very likely it will perform correctly in any other problem.

Certainly not all bugs have been discovered, let alone fixed, in the current niqlow code. And varying parameters of a single model will not reveal all errors. However, as changes are made that might break code that used to work, a suite of test programs is run to make sure that output is still correct. In that sense the experiments on the labor supply model are important for a user to run in order to overcome healthy skepticism. However, the kinds of errors they might uncover have been squashed by checking output of test programs that include many features beyond the labor supply model but on smaller spaces. There are also debugging features that can be turned on to trace output when a bug has been discovered.

Appendix C: Solution Algorithms

These algorithms are explained in Sect. 4.2. Aguirregabiria and Mira (2010) review several methods in more detail.

[Code listings ah–ak: solution algorithms (images in the published article)]

Appendix D: Reservation Values

For a binary choice a, there is a single \(z^{\star} \) at each (implicit) state that solves \(v(1,z^{\star} )-v(0,z^{\star} ) =0.\) Let \(EV_i\) denote \(EV\left(\theta ^{\,\prime }\,|\,a=i\right)\). After some rearranging \(z^{\star} \) satisfies

$$\begin{aligned} U(1,z^{\star} )-U(0,z^{\star} ) = \delta \left[ EV_0-EV_1\right]. \end{aligned}$$
(35)

The differences between current and future values balance, and (35) is solved as a non-linear equation. The condition that future expected values cannot depend on current z avoids a more complicated condition. Each additional choice beyond binary adds another equation between adjacent values to be solved simultaneously.

Once \(z^{\star} \) has been found, Bellman iteration requires calculation of the expected value of arriving at the (now explicit) state \(\theta :\)

$$\begin{aligned} EV(\theta )= & {} G(z^{\star} )\left( E\left[U|a=0,z< z^{\star} \right] + \delta EV_0 \right)\nonumber \\&+ \left(1-G(z^{\star} )\right)\left( E\left[U|a=1,z\ge z^{\star} \right] + \delta EV_1 \right). \end{aligned}$$
(36)

These conditions are illustrated in Fig. 6. Conditions (35), (36) include three additional objects that the user’s code must provide. The first condition, for \(z^{\star} ,\) needs \(U\left(A(\theta ),z\right),\) a vector of utilities for candidate values of z while solving for \(z^{\star} .\) The second includes \(G(z^{\star} )\) and the vector of expected conditional utilities, \(E\left[U\left(0\right)\,|\, z< z^{\star} \right]\) and \(E\left[U\left(1\right)\,|\,z\ge z^{\star} \right].\)
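Conditions (35) and (36) can be checked numerically. A Python sketch under a toy specification (the choices \(U(1,z)=z\), \(U(0,z)=0\), standard normal z, and the fixed continuation values are illustrative assumptions, not the paper's model):

```python
import math

Phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))   # standard normal CDF
phi = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# Toy specification: U(1,z) = z, U(0,z) = 0, z ~ N(0,1),
# with given continuation values EV0, EV1 and discount factor delta.
delta, EV0, EV1 = 0.9, 1.0, 0.8

def gap(z):
    """Condition (35): U(1,z) - U(0,z) - delta*(EV0 - EV1); increasing in z."""
    return z - delta * (EV0 - EV1)

lo, hi = -10.0, 10.0                   # bisection on the non-linear equation
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if gap(mid) < 0 else (lo, mid)
zstar = (lo + hi) / 2                  # here zstar = delta*(EV0-EV1) = 0.18

# Condition (36): expected value of arriving at the state, mixing the two
# choice regions with their conditional expected utilities.
EU1 = phi(zstar) / (1 - Phi(zstar))    # E[z | z >= z*] for a standard normal
EV = Phi(zstar) * (0.0 + delta * EV0) + (1 - Phi(zstar)) * (EU1 + delta * EV1)
```

With these numbers the bisection recovers \(z^{\star}=0.18\) exactly, and EV mixes the reject and accept regions with weights \(G(z^{\star})\) and \(1-G(z^{\star})\) as in (36).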

Fig. 6: Conditions for Reservation Values

The model must satisfy several conditions to be eligible for this solution algorithm. These are enforced by allowing only models derived from the OneDimensionalChoice class to use the ReservationValues method:

  1. \(\alpha \) must be one-dimensional (only one variable added to the action vector);

  2. No smoothing shock \(\zeta \) is included (because Z plays that role);

  3. No state variables are placed in the \(\epsilon \) and \(\eta \) vectors (because reservation values must be stored using \(\theta \));

  4. Choice values \(v\left(\alpha ,\theta \right)\) must satisfy the single-crossing property.

The user must parameterize the model to enforce the last condition. Any IID state variables must stay in \(\theta \) so that the value of \(z^{\star} \) can be stored conditional on their values as well.

OneDimensionalChoice adds two virtual functions that other Bellman classes do not: Uz() and EUtility(). At \(z^{\star} \) the difference in current utility equals the discounted difference in future values. The user codes Uz(z) to return the utility of all choices at z. The reservation value solution method uses that to solve for \(z^{\star} .\) Backward induction requires the expected utility of each option conditional on the optimal \(z^{\star} ,\) hence the need to provide EUtility().

[Code listing al (image in the published article)]

1.1 Code for the Reservation Value Labor Supply Model

For the basic labor supply model, the earnings shock e is a discretized normal. Change that assumption so it is a continuous standard normal random variable, re-labeled z. The class created for this version will be named LSz. Earnings are now written

$$\begin{aligned} E = \exp \{\beta _0+\beta _1M +\beta _2M^2 + \beta _3 z\}. \end{aligned}$$

Since E exceeds the value of not working (\(\pi \)) for large enough values of z there is a reservation value for the choice \(m=1.\) Since LSz must be a one-dimensional choice model it cannot be derived from LS as was done earlier with LSext.

However, code from the basic model can be reused because most elements of LS are static; the only non-static member was Utility(). By using the static elements, a change to LS will still be reflected automatically in LSz. In particular, LSz::Uz() can use LS::Earn() to compute utility at z: first set the value of LS::e to z, then call LS::Earn().

In the model, expected utility of not working is simply \(\pi \). For working, expected utility involves \(E[e^{\beta _3z}|z>z^{\star} ]\). The Mills-ratio formula for log-normality gives expected utility conditional on acceptance as

$$\begin{aligned} E[U|m=1,z>z^{\star} ] = \exp \left\{ \beta _0+\beta _1M+\beta _2M^2+{\beta _3^2\over 2}\right\} {\Phi \left(z^{\star} /\beta _3 - \beta _3\right)\over \Phi \left(z^{\star} \right)}. \end{aligned}$$
(37)

The constant factor includes \(\beta _3^2/2\) because it is the coefficient on the normal random variable z (and hence the variance of the model shock). The ratio of \(\Phi ()\) values is new to the continuous specification and can’t be borrowed from the discretized LS.

However, note that only the constant term includes the Mincer equation, and it is the same as the original earnings function if \(e=\beta _3/2.\) So again the base LS::Earn() function can be used by setting the value of e first. Thus, even though LS was coded for a completely different approach, its specification is still synchronized with the reservation wage version. Only elements specific to the new version need to be coded.

This code segment converts the labor supply model to a continuous choice reservation value problem.

[Code listing am (image in the published article)]

Appendix E: Estimation on Simulated Labor Supply Data

Code segment K estimated the labor supply model from external data. To use simulated data the code is modified slightly:

[Code listing an (image in the published article)]

The second line simulates 1000 observations over full 40-year lifetimes. The simulated data could be printed to a file and then read in as segment M does. In this case the simulated data is already contained in the object, so it is ready to be used for estimation.

Abbreviated output is below. The code detects that none of the observations are full CCPs because e is unobserved. Since all actions and endogenous states \(\theta \) are observed, all paths are categorized as IID. This means niqlow will automatically sum over \(\epsilon \) to compute the likelihood, but it is unnecessary to use the backward Algorithm 6. The measurement error on noisy earnings is fixed at its true value. The result is convergence after 5 BHHH iterations with weak convergence. Because by default niqlow scales starting parameters, the reported standard errors would need to be re-scaled, or recomputed with scaling and constraining turned off (details available in the documentation). Having started at the true parameter values, the sample likelihood is within .01% of the initial value, with some differences in parameter estimates, particularly the coefficient on experience (M) (Fig. 7).

Fig. 7: Output of MLE Estimates


Cite this article

Ferrall, C. Object Oriented (Dynamic) Programming: Closing the “Structural” Estimation Coding Gap. Comput Econ 62, 761–816 (2023). https://doi.org/10.1007/s10614-022-10280-4
