1 Introduction

The landscape of statistical computing has undergone dramatic changes over the past 20 years, on account of the advent of massive data sets and cross-fertilization with methods and ideas originating from the machine learning community. In this context, availability of quality software is of paramount importance; not only in terms of algorithmic efficiency, but also with a view to ensuring fully replicable results and broad access to the necessary tools. Free, open source software—of which the econometrics package gretl is an example—has an important role to play in achieving these goals.

This special issue brings together some of the best papers presented at the 8th gretl Conference, held in Gdańsk, Poland, on June 15–16, 2023. Gretl has been available since 2000 from gretl.sourceforge.net in the form of source code, with packages for Microsoft Windows and macOS following in 2005. Compiled packages are also available via many Linux distributions, notably Debian and Fedora. Core gretl is written in the C programming language, but many recent extensions of gretl’s functionality are coded in hansl, gretl’s high-level scripting language. Biennial gretl conferences have been held in odd-numbered years since 2009, mostly in Europe, with the exceptions of 2013 (Oklahoma City) and 2021 (Zoom). These are convivial affairs; gretl users are encouraged to attend, and to submit papers if possible. The conference website is at www.gretlconference.org.

Gretl is somewhat unusual in the domain of econometric software in offering a full-featured graphical interface, but its developers are keen to dispel the idea that the program is “just a pretty face”. The papers selected for this issue should help in that regard, illustrating as they do the usability of hansl for advanced econometric purposes, as well as the concern of the gretl team with fast and efficient computation.

2 This issue

The four papers we have selected cover a range of topics, from optimization of gretl’s numerical efficiency to econometric methods such as time-varying parameter models, Bayesian regression models in general, and Bayesian VAR models in particular.

Marcin Błażejowski is a proponent of Bayesian model averaging, he and his colleague Jacek Kwiatkowski having contributed two gretl function packages in this area. Such work is highly computation-intensive, and in Błażejowski (2024) he addresses the question of how a savvy user can get the best numerical performance out of gretl on modern hardware. (Although the specifics pertain to gretl, users of other software might also pay attention.) He compares both compilers and matrix-arithmetic libraries. Out of the box, gretl is generally compiled using gcc and linked against OpenBLAS, a modern open-source implementation of BLAS (Basic Linear Algebra Subprograms) and LAPACK (Linear Algebra Package). The paper explores the possible benefits of compiling gretl using clang/LLVM, and of linking against the math libraries produced by the CPU manufacturers Intel and AMD. He is able to show that in some contexts, at least, there can be a substantial gain from using Intel’s MKL (Math Kernel Library) if you’re working on Intel-based hardware.

Keynes was famously skeptical of econometrics, in large part because he reckoned the material of macroeconomics was clearly “non-homogeneous through time”. Can modern methods that allow for time-varying parameters render his skepticism obsolete? That’s a tough question, but Lucchetti and Valentini (2024) help us towards an answer by comparing four such methods, including the Kalman filter, via both simulation and a “real world” example (Okun’s Law). The comparison proceeds along two main axes: computational complexity, and the ability to handle abrupt changes in the parameters. If a method’s complexity is quadratic in the number of observations, that can be a problem for long time series. And if we reckon that parameters may on occasion change abruptly, that argues in favour of methods that can accommodate step changes over those that exhibit an unstable response (primarily, kernel-based estimators).

Traditionally, econometricians have leaned strongly towards frequentist methods, but Bayesian methods have clearly been gaining ground for some time now. Until quite recently gretl had nothing to offer in the Bayesian department, but Luca Pedini has remedied that with his BayTool package and (in collaboration with Sven Schreiber) the BVAR package. Pedini (2024a) introduces BayTool, which supports Bayesian estimation and post-estimation of a number of widely used econometric specifications, including the linear model, LASSO and probit. The methods are explained in rigorous detail, along with step-by-step replication exercises using hansl.

Core gretl has long had strong support for frequentist VARs and Vector Error Correction Models (VECMs), complemented by the SVAR add-on for structural VARs. Pedini (2024b) discusses the relatively new BVAR package, which brings Bayesian VAR models, a staple of macroeconometrics today, to gretl. Again, Pedini gives a rigorous and lucid explanation, along with step-by-step demonstrations and replications of three well-known examples from Kilian and Lütkepohl (2017).