# Control Variates for Reversible MCMC Samplers

###### Abstract

A general methodology is introduced for the construction and effective application of control variates to estimation problems involving data from reversible MCMC samplers. We propose the use of a specific class of functions as control variates, and we introduce a new, consistent estimator for the values of the coefficients of the optimal linear combination of these functions. The form and proposed construction of the control variates is derived from our solution of the Poisson equation associated with a specific MCMC scenario. The new estimator, which can be applied to the same MCMC sample, is derived from a novel, finite-dimensional, explicit representation for the optimal coefficients. The resulting variance-reduction methodology is primarily applicable when the simulated data are generated by a conjugate random-scan Gibbs sampler. MCMC examples of Bayesian inference problems demonstrate that the corresponding reduction in the estimation variance is significant, and that in some cases it can be quite dramatic. Extensions of this methodology in several directions are given, including certain families of Metropolis-Hastings samplers and hybrid Metropolis-within-Gibbs algorithms. Corresponding simulation examples are presented illustrating the utility of the proposed methods. All methodological and asymptotic arguments are rigorously justified under easily verifiable and essentially minimal conditions.

Keywords — Bayesian inference, control variates, variance reduction, Poisson equation, Markov chain Monte Carlo, log-linear model, mixtures of normals, hierarchical normal linear model, threshold autoregressive model.

## 1 Introduction

Markov chain Monte Carlo (MCMC) methods provide the facility to draw, in an asymptotic sense, a sequence of dependent samples from a very wide class of probability measures in any dimension. This facility, together with the tremendous increase of computer power in recent years, makes MCMC perhaps the main reason for the widespread use of Bayesian statistical modeling and inference across the spectrum of quantitative scientific disciplines.

This work provides a methodological foundation for the construction and use of control variates in conjunction with reversible MCMC samplers. Although popular in the standard Monte Carlo setting, control variates have received much less attention in the MCMC literature. The proposed methodology will be shown, both via theoretical results and simulation examples, to reduce the variance of the resulting estimators significantly, and sometimes quite dramatically.

In the simplest Monte Carlo setting, when the goal is to compute the expected value of some function evaluated on independent and identically distributed (i.i.d.) samples, the variance of the standard ergodic averages can be reduced by exploiting available zero-mean statistics. If there are one or more functions – the control variates – whose expected values are known to equal zero, then subtracting any linear combination of them from the function values does not change the asymptotic mean of the corresponding ergodic averages. Moreover, if the best constant coefficients are used, then the variance of the estimates is no larger than before, and often it is much smaller. The standard practice in this setting is to estimate the optimal coefficients based on the same sequence of samples; see, e.g., Liu (2001), Robert and Casella (2004), or Givens and Hoeting (2005). Because of the demonstrated effectiveness of this technique in many important areas of application – e.g., in computational finance, where Monte Carlo methods are a basic tool for the approximate computation of expectations; see Glasserman (2004) – a major research effort has been devoted to the construction of effective control variates in specific applied problems.
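As a concrete toy illustration of this classical i.i.d. technique (not taken from the paper), the sketch below estimates E[exp(U)] for U ~ Uniform(0,1), using the zero-mean control variate U − 1/2 and estimating the optimal coefficient from the same sample, as described above:

```python
import math
import random

random.seed(1)

# Estimate E[exp(U)] for U ~ Uniform(0, 1); the exact value is e - 1.
n = 100_000
u = [random.random() for _ in range(n)]
f = [math.exp(x) for x in u]   # function values F(U_i)
v = [x - 0.5 for x in u]       # control variate: known mean zero

mean_f = sum(f) / n
mean_v = sum(v) / n

# Estimate the optimal coefficient theta* = Cov(F, V) / Var(V)
# from the same sample (the standard i.i.d. practice).
cov_fv = sum((a - mean_f) * (b - mean_v) for a, b in zip(f, v)) / n
var_v = sum((b - mean_v) ** 2 for b in v) / n
theta = cov_fv / var_v

plain = mean_f                       # standard Monte Carlo estimate
adjusted = mean_f - theta * mean_v   # control-variate-adjusted estimate
```

Since the correlation between exp(U) and U is roughly 0.99 here, the adjusted estimator's variance is smaller by roughly a factor of 60.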

The main difficulty in extending the above methodology to estimators based on MCMC samples is probably due to the intrinsic complexities presented by the Markovian structure. On the one hand, it is hard to find nontrivial, useful functions with known expectation with respect to the stationary distribution of the chain. (For example, Mengersen et al. (1999) comment that “control variates have been advertised early in the MCMC literature (see, e.g., Green and Han (1992)), but they are difficult to work with because the models are always different and their complexity is such that it is extremely challenging to derive a function with known expectation.”) On the other hand, even in cases where such functions are available, there has been no effective way to obtain consistent estimates of the corresponding optimal coefficients. An important underlying reason for both of these difficulties is the basic fact that the MCMC variance of ergodic averages is intrinsically an infinite-dimensional object: it cannot be expressed in closed form as a function of the transition kernel and the stationary distribution of the chain.

An early reference on variance reduction for Markov chain samplers is Green and Han (1992), who exploit an idea of Barone and Frigessi (1989) and construct antithetic variables that may achieve variance reduction in simple settings but do not appear to be widely applicable. Andradóttir et al. (1993) focus on finite-state-space chains; they observe that optimal variance reduction can be achieved via the solution of the associated Poisson equation (see equation (3) below and Section 2.1 for details), and they propose numerical algorithms for its solution. Rao-Blackwellisation has been suggested by Gelfand and Smith (1990) and by Robert and Casella (2004) as a way to reduce the variance of MCMC estimators. Also, Philippe and Robert (2001) investigated the use of Riemann sums as a variance reduction tool in MCMC algorithms. An interesting and natural control variate, used mainly as a convergence diagnostic by Fan et al. (2006), is the score statistic; although Philippe and Robert (2001) mention that it can be used as a control variate, its practical utility has not been investigated. Atchadé and Perron (2005) restrict attention to independence Metropolis samplers and provide an explicit formula for the construction of control variates. Mira et al. (2003) note that the best control variate is intimately connected with the solution of the Poisson equation, and they attempt to solve it numerically. Hammer and Tjelmeland (2008) construct control variates for general Metropolis-Hastings samplers by expanding the state space.

In most of the works cited above, the method used for the estimation of the optimal coefficients is either based on the same formula as the one obtained for control variates in i.i.d. Monte Carlo sampling, or on the method of batch-means, but such estimators are strictly suboptimal and generally ineffective; see Section 1.3 for a summary and Section 7.2 for details.

For our purposes, a more relevant line of work is that initiated by Henderson (1997), who observed that, for any real-valued function defined on the state space of a Markov chain, the difference between that function and its one-step conditional expectation has zero mean with respect to the stationary distribution of the chain. Henderson (1997), like some of the other authors mentioned above, also notes that the best choice for the underlying function would be the solution of the associated Poisson equation, and proceeds to compute approximations of this solution for specific Markov chains, with particular emphasis on models arising in stochastic network theory.

The gist of our approach is to adapt Henderson’s idea and use the resulting control variates in conjunction with a new, efficiently implementable and provably optimal estimator for the coefficients. The ability to estimate the coefficients effectively makes these control variates practically relevant in the statistical MCMC context, and avoids the need to compute analytical approximations to the solution of the underlying Poisson equation.

### 1.1 Outline of the proposed basic methodology

Section 2.1 introduces the general setting within which all the subsequent results are developed. A sample of size $n$ from an ergodic Markov chain is used to estimate the mean of a function under the unique invariant measure of the chain. The associated Poisson equation is introduced, and it is shown that its solution can be used to quantify, in an essential way, the rate at which the chain converges to equilibrium.

In Section 2.2 we examine the variance of the standard ergodic averages,

$$\mu_n(F) := \frac{1}{n}\sum_{i=1}^{n} F(X_i), \qquad (1)$$

and we compare it with the variance of the modified estimators,

$$\mu_{n,\theta}(F) := \frac{1}{n}\sum_{i=1}^{n}\Big[F(X_i) - \sum_{j=1}^{k}\theta_j U_j(X_i)\Big]. \qquad (2)$$

Here and throughout the subsequent discussion, the control variates $U_j$ are constructed via $U_j := G_j - PG_j$, where $PG(x)$ denotes the one-step expectation $\mathrm{E}[G(X_{n+1}) \mid X_n = x]$, for particular choices of the functions $G_j$, $j = 1, 2, \ldots, k$.

The two central methodological issues addressed in this work are: (i) the problem of estimating the optimal coefficient vector that minimizes the variance of the modified estimates (2); and (ii) the choice of the functions used to build the control variates, so that they will be effective in specific MCMC scenarios that arise from common families of Bayesian inference problems.

For the first issue, in Section 3 we derive new representations for the optimal coefficient vector under the assumption that the chain is reversible; see Proposition 2. These representations lead to our first main result, namely, a new estimator for the optimal coefficients; see equations (25) and (26). This estimator is based on the same MCMC output, and it can be applied after the sample has been obtained, making its computation independent of the MCMC algorithm used.

The second problem, that of selecting an effective collection of functions for the construction of the control variates, is more complex and is dealt with in stages. First, in Section 2.2 we recall that there is always a single choice of a function that makes the estimation variance equal to zero: if $\hat{F}$ satisfies,

$$\hat{F} - P\hat{F} = F - \pi(F), \qquad (3)$$

then, with the control variate $U = \hat{F} - P\hat{F}$ and coefficient equal to one, the modified estimates in (2) are equal to the required expectation $\pi(F)$ for all $n$. A function $\hat{F}$ satisfying (3) is often called a solution to the Poisson equation for $F$ [or a Green’s function]. But solving the Poisson equation even for simple functions $F$ is a highly nontrivial task, and for chains arising in typical applications it is, for all practical purposes, impossible; see, e.g., the relevant comments in Henderson (1997) and Meyn (2007). Therefore, as a first rule of thumb, we propose that a class of functions be chosen such that the solution $\hat{F}$ of the Poisson equation (3) can be accurately approximated by a linear combination of them. For this reason we call them the basis functions.
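For intuition only: on a small finite state space the Poisson equation (3) reduces to a (singular) linear system that can be solved directly, in sharp contrast to the general chains discussed above. A minimal numerical sketch, with an arbitrary illustrative chain and function, assuming numpy:

```python
import numpy as np

# A small reversible chain: a lazy random walk on {0, 1, 2, 3}.
P = np.array([
    [0.50, 0.50, 0.00, 0.00],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.00, 0.00, 0.50, 0.50],
])

# Stationary distribution: normalized left eigenvector for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
pi = pi / pi.sum()

F = np.array([0.0, 1.0, 2.0, 3.0])   # we want pi(F)
Fbar = F - pi @ F                     # centred function F - pi(F)

# Poisson equation (3): (I - P) Fhat = F - pi(F).  The system is singular
# (constants lie in the null space of I - P), so take a least-squares solution.
Fhat, *_ = np.linalg.lstsq(np.eye(4) - P, Fbar, rcond=None)
```

Any solution differs from this one only by an additive constant, which does not affect the resulting control variate.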

Clearly there are many possible choices for the basis functions, and the effectiveness of the resulting control variates depends heavily on this choice. In order to obtain more specific and immediately applicable guidelines for their selection, in Section 4 we note that there are two basic requirements for the immediate applicability of the methodology described so far: the underlying chain needs to be reversible, in order for the estimates of the coefficient vector introduced in Section 3 to be consistent, and the one-step expectations that are necessary for the construction of the control variates need to be explicitly computable.

Since the most commonly used class of MCMC algorithms satisfying both of these requirements is that of conjugate random-scan Gibbs samplers, and since the most accurate general approximation of the target distribution arising in Bayesian inference problems is a general multivariate Gaussian, we examine this MCMC problem in detail and obtain our second main result. (Following standard parlance, we call a Gibbs sampler ‘conjugate’ if the full conditionals of the target distribution are all known and of standard form. This, of course, is unrelated to the notion of a conjugate prior structure in the underlying Bayesian formulation.) Suppose that we wish to estimate the mean of one of the co-ordinates of a $d$-dimensional Gaussian distribution, based on samples generated by the random-scan Gibbs algorithm. In Theorem 1 we show that the solution of the associated Poisson equation can always be expressed as a linear combination of the co-ordinate functions. This is perhaps the single most interesting case of a Markov chain naturally arising in applications for which an explicit solution to the Poisson equation has been obtained.

The above results naturally lead to the following proposal, stated in detail in Section 4. It is the core methodological contribution of this work.

The basic methodology. Suppose $\pi$ is a multivariate posterior distribution for which MCMC samples are obtained by a reversible Markov chain. In order to estimate the posterior mean of one of the co-ordinates, let $F$ be the corresponding co-ordinate function, define the basis functions to be the co-ordinate functions of all components for which the one-step expectation is explicitly computable, and form the associated control variates $U_j = G_j - PG_j$.

Then estimate the optimal coefficient vector $\theta^*$ by the estimator $\hat{\theta}_n$ given in (25), and estimate the posterior mean of interest by the modified estimators given in (27):

$$\mu_{n,\hat{\theta}_n}(F) := \frac{1}{n}\sum_{i=1}^{n}\Big[F(X_i) - \big\langle \hat{\theta}_n, U(X_i) \big\rangle\Big]. \qquad (4)$$
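As a toy illustration of this methodology (not the paper's own code), consider the bivariate Gaussian case of Theorem 1 with correlation 0.9. The sketch below simulates a random-scan Gibbs chain, forms the control variates from the co-ordinate basis functions, and, instead of the estimator (25), plugs in the exact optimal coefficients, which in this Gaussian toy case can be found in closed form by solving the Poisson equation with a linear ansatz. With those coefficients the modified estimator collapses to the true mean, illustrating the zero-variance phenomenon behind Theorem 1. All names and constants are illustrative:

```python
import random

random.seed(3)

rho = 0.9      # correlation of the target N(0, [[1, rho], [rho, 1]])
n = 20_000
x1 = x2 = 0.0
sd = (1.0 - rho * rho) ** 0.5
xs = []
for _ in range(n):
    # Random-scan Gibbs: pick one coordinate uniformly at random,
    # then draw it from its full conditional.
    if random.random() < 0.5:
        x1 = random.gauss(rho * x2, sd)
    else:
        x2 = random.gauss(rho * x1, sd)
    xs.append((x1, x2))

# Basis functions: co-ordinate functions G_j(x) = x_j.  For this sampler,
# P G_j(x) = x_j / 2 + E[X_j | x_{-j}] / 2, so the control variates are
# U_j(x) = (x_j - E[X_j | x_{-j}]) / 2, available in closed form.
u1 = [(a - rho * b) / 2.0 for a, b in xs]
u2 = [(b - rho * a) / 2.0 for a, b in xs]

# Exact optimal coefficients for estimating E[X_1] in this toy case
# (solve the Poisson equation with the linear ansatz a*x1 + b*x2).
t1 = 2.0 / (1.0 - rho * rho)
t2 = rho * t1

plain = sum(a for a, _ in xs) / n                          # ergodic average (1)
adjusted = plain - t1 * sum(u1) / n - t2 * sum(u2) / n     # modified estimator
```

Here `adjusted` equals the true mean 0 up to floating-point roundoff, while the plain ergodic average remains noticeably noisy because the chain mixes slowly at this correlation.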

Section 5 contains three MCMC examples using this methodology. Example 1 is a brief illustration of the result of Theorem 1 in the case of a bivariate Gaussian. As expected, the modified estimators (4) are seen to be much more effective than the standard ergodic averages (1), in that their variance is smaller by a factor ranging approximately between 4 and 1000, depending on the sample size. Example 2 contains an analysis of a realistic Bayesian inference problem via MCMC, for a 66-parameter hierarchical normal linear model. There, we consider the problems of estimating the posterior means of all 66 parameters, and we find that the reduction in variance resulting from the use of control variates as above is typically by a factor ranging between 5 and 30. The third example illustrates the use of the basic methodology in the case of Metropolis-within-Gibbs sampling from a heavy-tailed posterior. Even though the posterior distribution is highly non-Gaussian, and the one-step expectation can be computed for only one of the two model parameters, we still find that the variance is reduced by a factor ranging approximately between 7 and 10.

### 1.2 Extensions and applications

Domain of applicability. The present development not only generalizes the classical method of control variates to the MCMC setting, but also offers an important advantage. In the case of independent sampling, the control variates for each specific application need to be identified from scratch, often in an ad hoc fashion; in fact, for most Monte Carlo estimation problems there are no known functions that can be used as effective control variates. In contrast, the basic methodology described above provides a way of constructing a family of control variates that are immediately applicable to a wide range of MCMC problems, as long as the sampling algorithm produces a reversible chain for which the one-step expectations can be explicitly computed for some simple, linear functions. MCMC algorithms with these properties form a large collection of samplers commonly used in Bayesian inference, including, among others: all conjugate random-scan Gibbs samplers (the main MCMC class considered in this work), certain versions of hybrid Metropolis-within-Gibbs algorithms, and certain types of Metropolis-Hastings samplers on discrete state spaces.

In Section 6 we discuss extensions of the basic methodology along three directions that, in some cases, go beyond the above class of samplers.

General classes of basis functions. Section 6 begins with the observation that, although it is often effective to take the basis functions to be the co-ordinate functions of the chain, in certain applications it is natural to consider different families. Sometimes it is the structure of the MCMC sampler itself that dictates a more suitable choice; at other times, in estimation problems involving more complex models – particularly in cases where the posterior distribution is expected to be highly non-Gaussian – more effective basis functions may be found by considering the form of the associated Poisson equation, as indicated by the first rule of thumb stated earlier. Finally, when the statistic of interest is not the posterior mean of one of the model parameters, there is little reason to expect that the co-ordinate functions would still lead to effective control variates; instead, we argue that it is more appropriate to choose functions leading to control variates that are highly correlated with the statistic of interest.

Particular instances of these three directions are illustrated via the three MCMC examples presented in Section 6 and summarized below.

‘Non-conjugate’ samplers. In Example 4, a non-conjugate Gibbs sampler is used for the analysis of the posterior of a 7-parameter log-linear model. There, the conditional expectations of the co-ordinate functions are not available in closed form, so that the basic methodology cannot be applied. But the nature of the sampling algorithm provides a natural and in a sense obvious alternative choice for the basis functions, for which the computation of the functions needed for the construction of the associated control variates is straightforward. With this choice of control variates, using the modified estimator in (4) to estimate the posterior means of the seven model parameters, we find that the variance is approximately between 3.5 and 180 times smaller than that of the standard ergodic averages (1).

Complex models. Example 5 is based on a two-component Gaussian mixture model, endowed with the vague prior structure of Richardson and Green (1997). To estimate the posterior means of the two parameters corresponding to the Gaussian locations, we use a random-scan Gibbs sampler (in blocks). Here, although the prior specification is conditioned on the two means being ordered, it is possible to generate samples without the ordering restriction, and then to order the samples at the post-processing stage. The resulting Markov chain has an apparently quite complex structure, yet we show that the basic methodology can easily be applied in this case. What makes this example interesting for our present purposes is that, on one hand, the control variates based on the co-ordinate basis functions make little (if any) difference in the estimation accuracy, while choosing a different collection of basis functions gives a very effective set of control variates. Specifically, appealing to the first rule of thumb stated earlier, we define two simple basis functions such that a linear combination of them can be seen, heuristically at least, to provide a potentially accurate approximation to the solution of the associated Poisson equation. With these control variates, the resulting estimation variance is reduced by a factor ranging approximately between 15 and 55.

General statistics of interest. The above general methodology was based on the assumption that the goal is to estimate the posterior mean of one of the parameters. But in many cases a different statistic may be of interest. For example, it may be desirable to estimate the probability that one of the model parameters lies in a particular range; here, the function whose posterior mean is to be estimated would be the indicator function of that event. In such cases, there is little reason to believe that the co-ordinate functions would still lead to effective control variates. Instead, we argue that it is more appropriate, in accordance with the second rule of thumb derived in Section 2.2, to choose the basis functions so that the associated control variates are highly correlated with the function of interest.

A particular such instance is illustrated in Example 6, where we examine a Bayesian model-selection problem arising from a two-threshold autoregressive model after all other parameters have been integrated out. The resulting (discrete) model space is sampled by a discrete random-walk Metropolis-Hastings algorithm. There, the quantity of interest is the posterior probability of a particular model. Defining the function of interest as the indicator function of that model, a natural choice for the basis functions is to take this indicator function itself, together with the indicator functions of two different models that are close to the one whose posterior probability is being estimated. Because of the discrete nature of the sampler, the required one-step conditional expectations can be easily evaluated, and with these basis functions we find that the variance of the modified estimators in (4) is at least 30 times smaller than that of the standard ergodic averages (1).

### 1.3 Further extensions and results

In Section 7 we first briefly discuss two other consistent estimators for the optimal coefficient vector. One is a modified version of our earlier estimator derived in Section 3, and the other was recently developed by Meyn (2007) based on the so-called “temporal difference learning” algorithm. Then in Section 7.2 we examine the most common estimator for the optimal coefficient vector that has been used in the literature, which, as mentioned earlier, is based on the method of batch-means. In Proposition 3 we show that the resulting estimator is typically strictly suboptimal, and that the amount by which its variance exceeds the variance of our modified estimators is potentially unbounded. Moreover, the batch-means estimator is computationally more expensive and generally rather ineffective, often severely so. This is illustrated by revisiting the three MCMC examples of Section 5 and comparing the performance of the batch-means estimator with that of the simple ergodic averages (1) and of our modified estimator in (4).

Section 8 provides complete theoretical justifications of the asymptotic arguments in Sections 2, 3 and 7. Finally, we conclude in Section 9 with a short summary of our results and a brief discussion of possible further extensions, with particular emphasis on implementation issues and on the difficulties of applying the present methodology to general Metropolis-Hastings samplers.

We close this introduction with a few more remarks on previous related work. As mentioned earlier, Henderson (1997) takes a different path toward optimizing the use of control variates for Markov chain samplers. Considering primarily continuous-time processes, an approximation for the solution to the associated Poisson equation is derived from the so-called “heavy traffic” or “fluid model” approximations of the original process. The motivation and applications of this method come mostly from stochastic network theory and queueing theory. Closely related approaches are presented by Henderson and Glynn (2002) and Henderson et al. (2003), where the effectiveness of multiclass network control policies is evaluated via Markovian simulation. Control variates are used for variance reduction, and the optimal coefficients are estimated via an adaptive, stochastic gradient algorithm. General convergence properties of ergodic estimators using control variates are derived by Henderson and Simon (2004), in the case when the solution to the Poisson equation (either for the original chain or for an approximating chain) is known explicitly. Kim and Henderson (2007) introduce two related adaptive methods for tuning non-linear versions of the coefficients, when using families of control variates that naturally admit a non-linear parameterization. They derive asymptotic properties for these estimators and present numerical simulation results.

When the control variate is defined in terms of a function that can be taken as a Lyapunov function for the chain, Meyn (2006) derives precise exponential asymptotics for the associated modified estimators. Also, Meyn (2007, Chapter 11) gives a development of the general control-variates methodology for Markov chain data that parallels certain parts of our presentation in Section 2, and discusses numerous related asymptotic results and implementation issues.

In a different direction, Stein et al. (2004) draw a connection between the use of control variates in MCMC and the “exchangeable pairs” construction used in Stein’s method for distribution approximation. They consider a natural class of functions as their control variates, and they estimate the associated coefficients by a simple version of the batch-means method described in Section 7.2. Finally, the recent work by Delmas and Jourdain (2009) examines a particular case of Henderson’s construction of a control variate in the context of Metropolis-Hastings sampling. Like Hammer and Tjelmeland (2008), the authors expand the state space to include the proposals; they first make fixed choices for the basis function and its coefficient (which, in part, explains why their WR algorithm is sometimes worse than plain Metropolis sampling). They identify the solution of the Poisson equation as the optimal choice for a basis function and they seek analytical approximations. Then a general linear coefficient is introduced, and for a particular version of the Metropolis algorithm the optimal value is identified analytically.

## 2 Control Variates for Markov Chains

### 2.1 The setting

Suppose $\{X_n\}$ is a discrete-time Markov chain with initial state $x$, taking values in a state space $\mathsf{X}$ equipped with a $\sigma$-algebra. In typical applications, the state space will often be a (Borel measurable) subset of $\mathbb{R}^d$, together with the collection of all its (Borel) measurable subsets. [Precise definitions and detailed assumptions are given in Section 8.] The distribution of the chain is described by its transition kernel $P(x, A)$,

$$P(x, A) := \Pr(X_{n+1} \in A \mid X_n = x). \qquad (5)$$

As is well-known, in many applications where it is desirable to compute the expectation $\pi(F)$ of some function $F$ with respect to some probability measure $\pi$, although the direct computation of $\pi(F)$ is impossible and we cannot even produce samples from $\pi$, it is possible to construct an easy-to-simulate Markov chain which has $\pi$ as its unique invariant measure. Under appropriate conditions, the distribution of $X_n$ converges to $\pi$, a fact which can be made precise in several ways. For example, writing $P^nF(x)$ for the function $\mathrm{E}[F(X_n) \mid X_0 = x]$, then, for any initial state $x$,

$$P^n F(x) \to \pi(F), \qquad \text{as } n \to \infty,$$

for an appropriate class of functions $F$. Furthermore, the rate of this convergence can be quantified by the function,

$$\hat{F}(x) := \sum_{n=0}^{\infty} \big[P^n F(x) - \pi(F)\big], \qquad (6)$$

where $\hat{F}$ is easily seen to satisfy the Poisson equation for $F$, namely,

$$\hat{F} - P\hat{F} = F - \pi(F). \qquad (7)$$

[To see this, at least formally, apply $P$ to both sides of (6) and note that the resulting series for $P\hat{F}$ becomes telescoping.]

The above results describe how the distribution of $X_n$ converges to $\pi$. In terms of estimation, the quantities of interest are the ergodic averages,

$$\mu_n(F) := \frac{1}{n}\sum_{i=1}^{n} F(X_i). \qquad (8)$$

Again, under appropriate conditions the ergodic theorem holds,

$$\mu_n(F) \to \pi(F), \qquad \text{a.s., as } n \to \infty, \qquad (9)$$

for an appropriate class of functions $F$. Moreover, the rate of this convergence is quantified by an associated central limit theorem, which states that,

$$\sqrt{n}\,\big[\mu_n(F) - \pi(F)\big] \xrightarrow{\mathcal{D}} N(0, \sigma_F^2),$$

where $\sigma_F^2$, the asymptotic variance of $\mu_n(F)$, is given by $\sigma_F^2 := \lim_{n\to\infty} n \operatorname{Var}\big(\mu_n(F)\big)$. Alternatively, it can be expressed in terms of $\hat{F}$ as,

$$\sigma_F^2 = \pi\big(\hat{F}^2 - (P\hat{F})^2\big). \qquad (10)$$

The results in equations (6) and (10) clearly indicate that it is useful to be able to compute the solution $\hat{F}$ to the Poisson equation for $F$. In general this is a highly nontrivial task and, for chains arising in typical applications, it is impossible for all practical purposes; see, e.g., the relevant comments in Henderson (1997) and Meyn (2007). Nevertheless, the function $\hat{F}$ will play a central role throughout our subsequent development.
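On a finite state space, both (6) and (10) can be checked by direct matrix computation: the truncated series (6) yields the Poisson solution, and the variance formula (10) can be compared with the classical autocovariance expansion of the asymptotic variance. A self-contained sketch with arbitrary illustrative choices of chain and function, assuming numpy:

```python
import numpy as np

# A small reversible chain (lazy random walk on {0, 1, 2, 3}).
P = np.array([
    [0.50, 0.50, 0.00, 0.00],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.00, 0.00, 0.50, 0.50],
])
pi = np.array([1/6, 1/3, 1/3, 1/6])   # stationary: pi @ P == pi
F = np.array([1.0, -1.0, 2.0, 0.5])
Fbar = F - pi @ F

# Equation (6): Fhat(x) = sum_{n>=0} [P^n F(x) - pi(F)], truncated.
Fhat = np.zeros(4)
term = Fbar.copy()
for _ in range(400):
    Fhat += term
    term = P @ term

# Equation (10): sigma^2 = pi(Fhat^2 - (P Fhat)^2).
sigma2_eq10 = pi @ (Fhat**2 - (P @ Fhat)**2)

# Classical form: sigma^2 = Var_pi(F) + 2 * sum_{k>=1} Cov(F(X_0), F(X_k)).
sigma2_acf = pi @ (Fbar * Fbar)
lag = Fbar.copy()
for _ in range(400):
    lag = P @ lag
    sigma2_acf += 2 * pi @ (Fbar * lag)
```

Both computations agree, and the result is unchanged if an arbitrary constant is added to `Fhat`, consistent with the fact that (3) determines its solution only up to an additive constant.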

### 2.2 Control variates

Suppose that, for some Markov chain with transition kernel $P$ and invariant measure $\pi$, the ergodic averages $\mu_n(F)$ as in (8) are used to estimate the mean $\pi(F)$ of some function $F$ under $\pi$. In many applications, although the estimates converge to $\pi(F)$ as $n \to \infty$, the associated asymptotic variance is large and the convergence is very slow.

In order to reduce the variance, we employ the idea of using control variates, as in the case of simple Monte Carlo with independent and identically distributed (i.i.d.) samples; see, for example, the standard texts of Robert and Casella (2004), Liu (2001), Givens and Hoeting (2005), or the paper by Glynn and Szechtman (2002) for extensive discussions. Given one or more functions $U_1, U_2, \ldots, U_k$, the control variates, such that $\pi(U_j) = 0$ and $\pi(U_j^2) < \infty$ for all $j$, let $\theta$ be an arbitrary, constant vector in $\mathbb{R}^k$, and define,

$$F_\theta := F - \langle \theta, U \rangle, \qquad (11)$$

where $U$ denotes the column vector $U := (U_1, U_2, \ldots, U_k)^{\top}$. [Here and throughout the paper all vectors are column vectors unless explicitly stated otherwise, and $\langle \cdot, \cdot \rangle$ denotes the usual Euclidean inner product.]

We consider the modified estimators,

$$\mu_{n,\theta}(F) := \mu_n(F_\theta) = \frac{1}{n}\sum_{i=1}^{n}\big[F(X_i) - \langle \theta, U(X_i) \rangle\big], \qquad (12)$$

for arbitrary $\theta \in \mathbb{R}^k$. The ergodic theorem (9) guarantees that these estimators are consistent with probability one, and it is natural to seek particular choices for $\theta$ and the $U_j$ so that the asymptotic variance of the modified estimators is significantly smaller than that of the standard ergodic averages $\mu_n(F)$.

Throughout this work, we will concentrate exclusively on the following class of control variates, proposed by Henderson (1997). For arbitrary ($\pi$-integrable) functions $G_1, G_2, \ldots, G_k$, define,

$$U_j := G_j - PG_j, \qquad j = 1, 2, \ldots, k.$$

Then the invariance of $\pi$ under $P$ and the integrability of the $G_j$ guarantee that $\pi(U_j) = 0$ for each $j$.
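The zero-mean property of this class of control variates can be checked directly on a finite state space, where $\pi$, $P$ and $G$ are explicit arrays; the chain and function below are arbitrary illustrative choices, assuming numpy:

```python
import numpy as np

# A small reversible chain (lazy random walk on {0, 1, 2, 3}).
P = np.array([
    [0.50, 0.50, 0.00, 0.00],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.00, 0.00, 0.50, 0.50],
])
pi = np.array([1/6, 1/3, 1/3, 1/6])   # invariant: pi @ P == pi

G = np.array([2.0, -1.0, 0.5, 3.0])   # any integrable function works
U = G - P @ G                         # Henderson-type control variate

# pi(U) = pi(G) - pi(PG) = pi(G) - pi(G) = 0, by invariance of pi under P.
```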

In the remainder of this section we derive some simple, general guidelines for choosing functions $G_j$ that produce effective control variates $U_j$. This issue is revisited in more detail in the Bayesian MCMC context in Section 4.

Suppose, at first, that we have complete freedom in the choice of the functions, so that we may take $k = 1$, a single function $G$, and $\theta = 1$ without loss of generality. Then the goal is to make the asymptotic variance of $\mu_{n,1}(F)$ as small as possible. But, in view of the Poisson equation (7), we see that the choice $G = \hat{F}$ yields,

$$F_1 = F - (\hat{F} - P\hat{F}) = F - (F - \pi(F)) \equiv \pi(F),$$

which has zero variance. Therefore, the general principle for selecting a single function $G$ is:

Choose a control variate $U = G - PG$ with $G \approx \hat{F}$.

As mentioned above, it is typically impossible to compute $\hat{F}$ for realistic models used in applications. But it is often possible to come up with a guess $G$ that approximates $\hat{F}$, or at least with a collection of functions $G_j$ such that $\hat{F}$ can be approximated as a linear combination of the $G_j$. Thus, our first concrete rule of thumb for choosing the $G_j$ states:

Choose control variates $U_j = G_j - PG_j$ with respect to a collection of basis functions $\{G_j\}$, such that $\hat{F}$ can be approximately expressed as a linear combination of the $G_j$.

The terminology basis functions for the $G_j$ is meant to emphasize the fact that, although $\hat{F}$ is not known, it is expected that it can be approximately expressed in terms of the $G_j$ via a linear expansion of the form $\hat{F} \approx \sum_{j} \theta_j G_j$.

Once the basis functions $G_j$ are selected, we form the modified estimators $\mu_{n,\theta}(F)$ with respect to the function $F_\theta$ as in (11),

$$F_\theta = F - \langle \theta, G - PG \rangle,$$

where, for a vector of functions $G = (G_1, G_2, \ldots, G_k)^{\top}$, we write $PG$ for the corresponding vector $(PG_1, PG_2, \ldots, PG_k)^{\top}$. The next task is to choose $\theta$ so that the resulting variance,

$$\sigma^2(\theta) := \sigma^2_{F_\theta},$$

is minimized. Note that, from the definitions, the solution of the Poisson equation for $F_\theta$ is $\hat{F}_\theta = \hat{F} - \langle \theta, G \rangle$, since each $G_j$ solves the Poisson equation for $U_j$. Therefore, by (10) and linearity,

$$\sigma^2(\theta) = \pi\Big(\big(\hat{F} - \langle\theta, G\rangle\big)^2 - \big(P\hat{F} - \langle\theta, PG\rangle\big)^2\Big). \qquad (13)$$

To find the optimal $\theta^*$ which minimizes the variance $\sigma^2(\theta)$, differentiating the quadratic (13) with respect to each $\theta_j$ and setting the derivative equal to zero yields, in matrix notation,

$$K\theta^* = v,$$

where the $k \times k$ matrix $K$ has entries $K_{ij} := \pi\big(G_iG_j - (PG_i)(PG_j)\big)$, and the vector $v$ has entries $v_j := \pi\big(G_j\hat{F} - (PG_j)(P\hat{F})\big)$. Therefore,

$$\theta^* = K^{-1}v, \qquad (14)$$

as long as $K$ is invertible. Once again, this expression depends on $\hat{F}$, so it is not immediately clear how to estimate $\theta^*$ directly from the data $X_1, X_2, \ldots, X_n$. The issue of estimating the optimal coefficient vector is addressed in detail in Section 3; but first let us interpret $\theta^*$.

For simplicity, consider again the case of a single control variate $U = G - PG$ based on a single function $G$. Then the value of $\theta^*$ in (14) simplifies to,

$$\theta^* = \frac{\pi\big(G\hat{F} - (PG)(P\hat{F})\big)}{\pi\big(G^2 - (PG)^2\big)} = \frac{\pi\big(G\hat{F} - (PG)(P\hat{F})\big)}{\sigma_U^2}, \qquad (15)$$

where the second equality follows from the earlier observation that $G$ solves the Poisson equation for $U$, so that, by (10), $\sigma_U^2 = \pi\big(G^2 - (PG)^2\big)$. Alternatively, starting from the expression (13), simple calculations lead to,

$$\sigma^2(\theta) = \sigma_F^2 - 2\theta\,\sigma_{F,U} + \theta^2\sigma_U^2, \qquad (16)$$

so that $\theta^*$ can also be expressed as,

$$\theta^* = \frac{\sigma_{F,U}}{\sigma_U^2}, \qquad (17)$$

where $\sigma_{F,U}$ denotes the asymptotic covariance between the ergodic averages of $F$ and $U$ for the stationary version of the chain; i.e., since $\pi(U) = 0$, we have $\sigma_{F,U} = \lim_{n\to\infty} n \operatorname{Cov}\big(\mu_n(F), \mu_n(U)\big)$. Then (17) leads to the optimal asymptotic variance,

$$\sigma^2(\theta^*) = \sigma_F^2\big(1 - \rho_{F,U}^2\big), \qquad \text{where } \rho_{F,U} := \frac{\sigma_{F,U}}{\sigma_F\,\sigma_U}. \qquad (18)$$

Therefore, in order to reduce the variance, it is desirable that the correlation between $F$ and $U$ be as large as possible. This leads to our second rule of thumb for selecting basis functions:

Choose basis functions $G_j$ so that each resulting control variate $U_j = G_j - PG_j$ is highly correlated with $F$.
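On a finite state space, the quadratic variance curve and its minimizer can be computed exactly by matrix algebra, which gives a direct numerical check of the single-control-variate analysis above; the chain, $F$ and $G$ below are arbitrary illustrative choices, assuming numpy:

```python
import numpy as np

# A small reversible chain (lazy random walk on {0, 1, 2, 3}).
P = np.array([
    [0.50, 0.50, 0.00, 0.00],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.00, 0.00, 0.50, 0.50],
])
pi = np.array([1/6, 1/3, 1/3, 1/6])   # stationary distribution

F = np.array([1.0, -1.0, 2.0, 0.5])
G = np.array([0.0, 1.0, 2.0, 3.0])    # single basis function

# Solve the Poisson equation (I - P) Fhat = F - pi(F) (singular system).
Fbar = F - pi @ F
Fhat, *_ = np.linalg.lstsq(np.eye(4) - P, Fbar, rcond=None)

def sigma2(theta):
    # Asymptotic variance of the modified estimator: the Poisson solution
    # for F_theta = F - theta * (G - P G) is Fhat - theta * G, so apply (10).
    hhat = Fhat - theta * G
    return pi @ (hhat**2 - (P @ hhat)**2)

# Optimal coefficient from the ratio formula:
num = pi @ (G * Fhat - (P @ G) * (P @ Fhat))
den = pi @ (G**2 - (P @ G)**2)     # equals the asymptotic variance of U
theta_star = num / den
```

Evaluating `sigma2` on a grid confirms that `theta_star` sits at the bottom of the parabola, and that the reduction relative to `sigma2(0.0)` matches the correlation-based formula.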

## 3 Estimating the Optimal Coefficient Vector

Consider, as before, the problem of estimating the mean $\pi(F)$ of a function $F$ based on samples $X_0, X_1, \ldots, X_{n-1}$ from an ergodic Markov chain with unique invariant measure $\pi$ and transition kernel $P$. Instead of using the ergodic averages $\mu_n(F)$ as in (8), we select a collection of basis functions $G_1, G_2, \ldots, G_k$ and form the control variates $U_i := G_i - PG_i$, $i = 1, 2, \ldots, k$. The mean $\pi(F)$ is then estimated by the modified estimators $\mu_{n,\theta}(F)$ as in (12), for a given coefficient vector $\theta = (\theta_1, \ldots, \theta_k)^{\mathrm T}$.

Under the additional assumption of reversibility, in this section we introduce a consistent procedure for estimating the optimal coefficient vector $\theta^*$ based on the same sample $X_0, X_1, \ldots, X_{n-1}$. Then, in Section 4, we give more detailed guidelines for choosing the $G_i$.

Recall that, once the basis functions $G_i$ have been selected, the optimal coefficient vector was expressed in (14) as $\theta^* = K^{-1}\pi\big(G\hat F - (PG)(P\hat F)\big)$, where $K_{ij} = \pi\big(G_iG_j - (PG_i)(PG_j)\big)$, $1 \le i, j \le k$. But, in view of equation (19) derived above, the entries $K_{ij}$ can also be written,

$$K_{ij} = E\big[G_i(X_1)G_j(X_1)\big] - E\big[PG_i(X_0)\,PG_j(X_0)\big],$$

where $(X_0, X_1)$ are two consecutive samples of the stationary version of the chain. This indicates that $K$ has the structure of a covariance matrix and, in particular, it suggests that $K$ should be positive semidefinite. Indeed:

Proposition 1. Let $\Sigma$ denote the covariance matrix of the random variables

$$W_i := G_i(X_1) - PG_i(X_0), \qquad i = 1, 2, \ldots, k,$$

where $(X_0, X_1)$ are two consecutive samples of the stationary version of the chain, i.e., with $X_0 \sim \pi$; note that $E[W_i] = \pi(G_i) - \pi(PG_i) = 0$. Then $K = \Sigma$; that is, for all $1 \le i, j \le k$,

$$K_{ij} = E\Big[\big(G_i(X_1) - PG_i(X_0)\big)\big(G_j(X_1) - PG_j(X_0)\big)\Big]. \qquad (21)$$

Proof. Expanding the right-hand side of (21) yields,

$$E\big[G_i(X_1)G_j(X_1)\big] - E\big[G_i(X_1)\,PG_j(X_0)\big] - E\big[PG_i(X_0)\,G_j(X_1)\big] + E\big[PG_i(X_0)\,PG_j(X_0)\big],$$

and the result follows upon noting that the second and third terms above are both equal to the fourth. To see this, observe that, conditioning on $X_0$, the second term can be rewritten as,

$$E\big[G_i(X_1)\,PG_j(X_0)\big] = E\big[E[G_i(X_1)\mid X_0]\,PG_j(X_0)\big] = E\big[PG_i(X_0)\,PG_j(X_0)\big],$$

and similarly for the third term. Since, by stationarity, $E[G_i(X_1)G_j(X_1)] = \pi(G_iG_j)$, the right-hand side of (21) reduces to $\pi\big(G_iG_j - (PG_i)(PG_j)\big) = K_{ij}$. ∎

Therefore, using Proposition 1, the optimal coefficient vector can also be expressed as,

$$\theta^* = \Sigma^{-1}\,\pi\big(G\hat F - (PG)(P\hat F)\big). \qquad (22)$$

Now assume that the chain is reversible. Writing $\langle f, g\rangle_\pi := \pi(fg)$ for the inner product on $L^2(\pi)$, reversibility is equivalent to the statement that $P$ is self-adjoint as a linear operator on the space $L^2(\pi)$. In other words,

$$\langle f, Pg\rangle_\pi = \langle Pf, g\rangle_\pi,$$

for any two functions $f, g \in L^2(\pi)$. Our first main theoretical result is that, under reversibility, the optimal coefficient vector $\theta^*$ admits a representation that does not involve the solution $\hat F$ of the associated Poisson equation for $F$:

Proposition 2. If the chain is reversible, then the optimal coefficient vector $\theta^*$ for the control variates $U_i := G_i - PG_i$, $i = 1, 2, \ldots, k$, can be expressed as,

$$\theta^* = K^{-1}\,\pi\big((G + PG)(F - \pi(F))\big), \qquad (23)$$

or, alternatively,

$$\theta^* = \Sigma^{-1}\,\pi\big((G + PG)(F - \pi(F))\big), \qquad (24)$$

where the matrices $K$ and $\Sigma$ are defined in (14) and (21), respectively.

Proof. Let $\bar F := F - \pi(F)$ denote the centered version of $F$, and recall that $\hat F$ solves Poisson's equation for $F$, so $\bar F = \hat F - P\hat F = (I - P)\hat F$. Therefore, using the fact that $P$ is self-adjoint, we have, for each component $G_i$ of $G$,

$$\pi\big(G_i\hat F - (PG_i)(P\hat F)\big) = \langle G_i, \hat F\rangle_\pi - \langle G_i, P^2\hat F\rangle_\pi = \langle G_i, (I + P)(I - P)\hat F\rangle_\pi = \langle (I + P)G_i, \bar F\rangle_\pi = \pi\big((G_i + PG_i)(F - \pi(F))\big).$$

Combining this with (14) and (22), respectively, proves the two claims of the proposition. ∎

The expression (24), together with Proposition 1, suggests estimating $\theta^*$ via,

$$\hat\theta_n := K_n^{-1}\,\frac{1}{n}\sum_{m=0}^{n-1}\big(G(X_m) + PG(X_m)\big)\big(F(X_m) - \mu_n(F)\big), \qquad (25)$$

where the empirical matrix $K_n$ is defined by,

$$K_n(i,j) := \frac{1}{n}\sum_{m=0}^{n-1}\Big[G_i(X_m)G_j(X_m) - PG_i(X_m)\,PG_j(X_m)\Big]. \qquad (26)$$

The resulting estimator for $\pi(F)$, based on the vector of control variates $U = G - PG$ and the estimated coefficients $\hat\theta_n$, is defined as,

$$\mu_{n,\hat\theta_n}(F) := \mu_n(F) - \hat\theta_n^{\mathrm T}\,\mu_n(U) = \frac{1}{n}\sum_{m=0}^{n-1}\Big[F(X_m) - \hat\theta_n^{\mathrm T}U(X_m)\Big]. \qquad (27)$$

This will be the main estimator used in the remainder of the paper.
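The estimator (25)-(27) is straightforward to implement. The following sketch (with illustrative names; `cv_estimate` is not from the paper) assumes the user supplies the sampled values of $F$, of the basis functions $G_i$, and of their one-step expectations $PG_i$. It is then checked on a stationary Gaussian AR(1) chain, which is reversible and has $PG(x) = a\,x$ for $G(x) = x$.

```python
import numpy as np

def cv_estimate(F_vals, G_vals, PG_vals):
    """Sketch of (25)-(27). F_vals: length-n array of F(X_m).
    G_vals, PG_vals: (n, k) arrays holding the basis functions G_i and
    their one-step conditional expectations PG_i along the chain."""
    n, k = G_vals.shape
    mu_F = F_vals.mean()
    K_n = (G_vals.T @ G_vals - PG_vals.T @ PG_vals) / n       # eq. (26)
    b_n = (G_vals + PG_vals).T @ (F_vals - mu_F) / n          # rhs of eq. (25)
    theta_n = np.linalg.solve(K_n, b_n)                       # eq. (25)
    U_vals = G_vals - PG_vals                                 # control variates
    return mu_F - U_vals.mean(axis=0) @ theta_n               # eq. (27)

# Toy check: stationary AR(1), X_{m+1} = a X_m + sqrt(1 - a^2) Z_m,
# reversible with invariant N(0, 1); estimate E[X] = 0.
rng = np.random.default_rng(1)
a, n = 0.9, 10_000
x = np.empty(n)
x[0] = rng.standard_normal()
for m in range(1, n):
    x[m] = a * x[m - 1] + np.sqrt(1 - a**2) * rng.standard_normal()

F_vals = x
est_plain = F_vals.mean()
est_cv = cv_estimate(F_vals, x[:, None], a * x[:, None])
print(est_plain, est_cv)
```

Here the single basis function $G(x) = x$ is (proportional to) the exact Poisson solution, so the modified estimate is far closer to the true mean than the plain ergodic average.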

## 4 The Choice of Basis Functions and the Basic Methodology

Given an ergodic, reversible Markov chain $\{X_n\}$ with invariant measure $\pi$, and a function $F$ whose mean $\pi(F)$ is to be estimated based on samples $X_0, X_1, \ldots, X_{n-1}$ from the chain, we wish to find a vector of control variates $U = (U_1, \ldots, U_k)^{\mathrm T}$ so that, for an appropriately chosen coefficient vector $\theta$, the variance of the modified estimates $\mu_{n,\theta}(F)$ in (11) is significantly lower than that of the simple ergodic averages $\mu_n(F)$ in (8).

As noted in the Introduction, just as with simple Monte Carlo estimation based on i.i.d. samples, there cannot be a single, universal family of control variates that works well in every problem; the choice of effective basis functions is crucial for constructing useful control variates. Nevertheless – and perhaps somewhat surprisingly – as shown next, it is possible to select basis functions that provide very effective variance reduction in a wide range of MCMC estimation scenarios stemming from Bayesian inference problems, primarily those where inference is performed via a conjugate random-scan Gibbs sampler. Examples of this basic methodology are presented in Section 5. Extensions and further examples are discussed in Section 6.

Our starting point is the observation that, in order to apply the results developed so far, the MCMC sampler at hand needs to be reversible, so that the estimator $\hat\theta_n$ in (25) for the optimal coefficient vector can be used; it is also necessary that the one-step expectations $PG_i$ of the basis functions be explicitly computable in closed form. Probably the most natural, general family of MCMC algorithms satisfying these two requirements is the collection of conjugate random-scan Gibbs samplers, with a target distribution $\pi$ arising as the posterior density of the parameters in a Bayesian inference study. Moreover, since for large sample sizes the posterior is approximately normal – see, e.g., Bickel and Yahav (1969), Blackwell (1985), Bunke and Milhaud (1998), Ibragimov and Hasminskiĭ (1981) – we focus on the case where $\pi$ is a multivariate normal distribution.

According to the discussion in Section 2.2, the main goal in choosing the basis functions $G_i$ is that it should be possible to effectively approximate the solution $\hat F$ of the Poisson equation for $F$ as a linear combination of the $G_i$, i.e., $\hat F \approx \sum_i \theta_i G_i$. It is somewhat remarkable that, in the case of a random-scan Gibbs sampler with a Gaussian target density, the Poisson equation can be solved explicitly, and its solution is of a particularly simple form:

Theorem 1. Let $\{X_n\}$ denote the Markov chain constructed from the random-scan Gibbs sampler used to simulate from an arbitrary (nondegenerate) multivariate normal distribution $\pi$ in $\mathbb{R}^k$. If the goal is to estimate the mean of the first component of $X = (X(1), \ldots, X(k)) \sim \pi$, then, letting $F(x) := x_1$, the solution $\hat F$ of the Poisson equation for $F$ can be expressed as a linear combination of the co-ordinate basis functions $G_i(x) := x_i$, $i = 1, 2, \ldots, k$,

$$\hat F(x) = \sum_{i=1}^{k} a_i\,G_i(x) = a^{\mathrm T}x. \qquad (28)$$

Moreover, writing $H = (h_{ij})$ for the precision matrix of $\pi$ (the inverse of its covariance matrix), the coefficient vector $a$ in (28) is given by the first row of the matrix $k(I + B)^{-1}$, where $B$ has entries $b_{ij} := h_{ij}/h_{ii}$ for $i \ne j$, $b_{ii} := 0$ for all $i$, and $I + B$ is always invertible.

Proof. Let $\hat F(x) := a^{\mathrm T}x$ denote the candidate solution to the Poisson equation, $\hat F - P\hat F = F - \pi(F)$, and write $X = (X(1), \ldots, X(k))$ for a random vector with distribution $\pi$ and mean vector $\mu$. Since $\pi$ is nondegenerate, its covariance matrix is nonsingular, so the precision matrix $H$ exists and its diagonal entries are nonzero. Since the conditional expectation of a component $X(j)$, given the values $x_i$ of the remaining components, is $\mu_j - \sum_{i \ne j}(h_{ji}/h_{jj})(x_i - \mu_i)$, we have,

$$(P\hat F)(x) = \frac{1}{k}\sum_{j=1}^{k}\bigg[a^{\mathrm T}x - a_jx_j + a_j\Big(\mu_j - \sum_{i \ne j}\frac{h_{ji}}{h_{jj}}(x_i - \mu_i)\Big)\bigg],$$

so that,

$$\hat F(x) - (P\hat F)(x) = \frac{1}{k}\,a^{\mathrm T}(I + B)(x - \mu),$$

where we have used the fact that $\sum_j a_j \sum_{i \ne j} b_{ji}(x_i - \mu_i) = a^{\mathrm T}B(x - \mu)$, since the diagonal entries of $B$ are all zero. In order for this to be equal to $F(x) - \pi(F) = x_1 - \mu_1$ for all $x$, it suffices to choose $a$ such that $a^{\mathrm T}(I + B) = k\,e_1^{\mathrm T}$, i.e., $a^{\mathrm T}$ equal to the first row of $k(I + B)^{-1}$, as claimed. Finally, to see that $I + B$ is nonsingular (and hence invertible), note that its determinant is equal to $\det(H)/\prod_{i=1}^k h_{ii}$, which is nonzero since $H$ is nonsingular by assumption. ∎

In terms of MCMC estimation, the statement of Theorem 1 can be rephrased as follows: Suppose that samples from a multivariate Gaussian distribution are simulated via a random-scan Gibbs sampler in order to estimate the mean of one of its components. Then, using the linear basis functions $G_i(x) := x_i$ to construct a vector of control variates $U = G - PG$, the modified estimator $\mu_{n,\hat\theta_n}(F)$ as in (27) should have dramatically smaller variance than the standard ergodic averages $\mu_n(F)$; in fact, the asymptotic variance of $\mu_{n,\theta^*}(F)$ in the central limit theorem is equal to zero. This theoretical prediction is verified in a simple simulation example presented in Section 5.
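Theorem 1 is also easy to verify numerically. The sketch below (illustrative code, assuming a zero-mean target for simplicity) builds a random precision matrix, forms the matrix $B$ and the coefficient vector $a$ as in the theorem, and checks that $\hat F - P\hat F$ equals the first co-ordinate pointwise.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # dimension of the target (the theorem's k)

# Random nondegenerate covariance; zero-mean target assumed for simplicity
A = rng.standard_normal((d, d))
Sigma = A @ A.T + d * np.eye(d)
H = np.linalg.inv(Sigma)                      # precision matrix

# B has entries h_ij / h_ii off the diagonal and zeros on it
B = H / np.diag(H)[:, None]
np.fill_diagonal(B, 0.0)
a = d * np.linalg.inv(np.eye(d) + B)[0]       # first row of d (I + B)^{-1}

def PFhat(x):
    """One-step expectation of Fhat(x) = a.x under random-scan Gibbs:
    pick a coordinate j uniformly, replace x_j by its conditional mean."""
    total = 0.0
    for j in range(d):
        m_j = -(H[j] @ x - H[j, j] * x[j]) / H[j, j]   # E[X_j | x_{-j}]
        total += a @ x - a[j] * x[j] + a[j] * m_j
    return total / d

x = rng.standard_normal(d)
residual = a @ x - PFhat(x)      # should equal F(x) = x[0] pointwise
print(residual, x[0])
```

The residual matches `x[0]` up to floating-point error, confirming that $\hat F(x) = a^{\mathrm T}x$ solves the Poisson equation for this chain.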

Thus motivated, we now state the basic version of the variance reduction methodology proposed in this work.

**Outline of the Basic Methodology**

Given:
- a multivariate posterior distribution $\pi$;
- a reversible Markov chain $\{X_n\}$ with stationary distribution $\pi$;
- a sample $X_0, X_1, \ldots, X_{n-1}$ of length $n$ from the chain.

Goal:
- estimate the posterior mean $\pi(F)$ of a co-ordinate function $F$.

Define:
- basis functions $G_i$ as the co-ordinate functions $G_i(x) := x_i$, for all $i$ for which $PG_i$ is computable in closed form;
- the corresponding control variates $U_i := G_i - PG_i$.

Estimate:
- the optimal coefficient vector $\theta^*$ by $\hat\theta_n$ as in (25);
- the quantity of interest $\pi(F)$ by the modified estimators $\mu_{n,\hat\theta_n}(F)$ as in (27).

Before examining the performance of this methodology in practice, we note that the Markov chain of Theorem 1 is perhaps the single most complex example of a Markov chain naturally arising in an applied context for which an explicit solution to the Poisson equation is available.

## 5 MCMC Examples of the Basic Methodology

Here we present three examples of the application of the basic methodology outlined in the previous section. The examples below, as well as those in Section 6, are chosen as representative cases covering a broad class of real applications of Bayesian inference.

In Example 1, a bivariate normal density is simulated by random-scan Gibbs sampling. This setting is considered primarily as an illustration of the result of Theorem 1, and also as a simplified version of many examples in the large class of inference studies with an approximately normal posterior distribution. As expected, the variance of the modified estimators in this case is dramatically smaller.

Example 2 considers a case of Bayesian inference via MCMC, with a large hierarchical normal linear model. In part due to the complexity of the model, it is perhaps natural to expect that the posterior is approximately Gaussian, and, indeed, the basic methodology of the previous section is shown to provide very significant variance reduction.

Finally, in Example 3 the use of the basic methodology is illustrated in the case of Metropolis-within-Gibbs sampling from the posterior of a “difficult” model, where the use of heavy-tailed priors results in heavy-tailed posterior densities. Such densities commonly arise in, for example, spatial statistics; see Dellaportas and Roberts (2003) for an illustrative example. Even in this case, where the posterior is certainly not approximately normal, it is demonstrated that the basic methodology is still very effective in reducing the estimation variance. This example also illustrates the point that, often, not all basis functions can easily be used in the construction of control variates, as indicated in the corresponding step of the description of the basic methodology above.

#### Example 1. The bivariate Gaussian through the random-scan Gibbs sampler.

Let $\pi$ be an arbitrary bivariate normal distribution where, without loss of generality, we take the expected values of both components $X(1)$ and $X(2)$ to be zero and the variance of $X(1)$ to be equal to one. Let $\sigma^2$ denote the variance of $X(2)$ and let $\rho$ denote the covariance of $X(1)$ and $X(2)$, for some $\rho$ with $\rho^2 < \sigma^2$. Given arbitrary initial values, the random-scan Gibbs sampler selects one of the two co-ordinates at random, and either updates $X(1)$ by sampling from the conditional distribution of $X(1)$ given $X(2)$, or $X(2)$ by sampling from the conditional distribution of $X(2)$ given $X(1)$. Continuing this way produces a reversible Markov chain $\{X_n\}$ with distribution converging to $\pi$. To estimate the expected value of $X(1)$ under $\pi$, we set $F(x) := x_1$ and define the basis functions $G_1(x) := x_1$ and $G_2(x) := x_2$. The corresponding functions $PG_1$ and $PG_2$ are easily computed as,

$$PG_1(x) = \frac{1}{2}\Big(x_1 + \frac{\rho}{\sigma^2}\,x_2\Big), \qquad PG_2(x) = \frac{1}{2}\big(x_2 + \rho\,x_1\big).$$

The parameter values are chosen so that the two components are highly correlated and the sampler converges slowly, making the variance of the standard estimates large. Using the $n$ samples produced by the resulting chain, we examine the performance of the modified estimator $\mu_{n,\hat\theta_n}(F)$ and compare it with that of the standard ergodic averages $\mu_n(F)$.
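A compact simulation of this example can be written as follows. This is an illustrative sketch: the parameter values, initial values, and run lengths below are chosen for demonstration and are not necessarily those used to produce Figure 1 and Table 1.

```python
import numpy as np

rng = np.random.default_rng(0)
rho, s2 = 0.99, 1.0          # illustrative: highly correlated components
n, reps = 5_000, 40

def run_chain():
    """Random-scan Gibbs sampler for the zero-mean bivariate normal."""
    x = np.array([1.0, 1.0])                 # arbitrary initial values
    X = np.empty((n, 2))
    for m in range(n):
        if rng.random() < 0.5:               # update first coordinate
            x[0] = rho * x[1] / s2 + np.sqrt(1 - rho**2 / s2) * rng.standard_normal()
        else:                                # update second coordinate
            x[1] = rho * x[0] + np.sqrt(s2 - rho**2) * rng.standard_normal()
        X[m] = x
    return X

plain, modified = [], []
for _ in range(reps):
    X = run_chain()
    F = X[:, 0]                              # estimate E[X(1)] = 0
    G = X                                    # basis functions G_i(x) = x_i
    PG = np.column_stack([(X[:, 0] + rho * X[:, 1] / s2) / 2,
                          (X[:, 1] + rho * X[:, 0]) / 2])
    K_n = (G.T @ G - PG.T @ PG) / n                      # eq. (26)
    b_n = (G + PG).T @ (F - F.mean()) / n                # rhs of eq. (25)
    theta = np.linalg.solve(K_n, b_n)
    U = G - PG
    plain.append(F.mean())
    modified.append(F.mean() - U.mean(axis=0) @ theta)   # eq. (27)

factor = np.var(plain) / np.var(modified)
print(f"variance reduction factor: {factor:.1f}")
```

Since Theorem 1 gives zero asymptotic variance for the modified estimator here, the printed factor grows with the number of simulation steps.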

Figure 1 depicts a typical realization of the sequence of estimates obtained by the two estimators as the number of simulation steps grows; the factors by which the variance of the standard estimator $\mu_n(F)$ exceeds that of the modified estimator $\mu_{n,\hat\theta_n}(F)$ are shown in Table 1. In view of Theorem 1, it is not surprising that the modified estimator is clearly much more effective.

Table 1: Variance reduction factors obtained by the modified estimator, for increasing numbers of simulation steps (left to right).

| Estimator | | | | | | |
|---|---|---|---|---|---|---|
| $\mu_{n,\hat\theta_n}(F)$ | 4.13 | 27.91 | 122.4 | 262.5 | 445.0 | 1196.6 |

In this and in all subsequent examples, the reduction in the variance was computed from $T$ independent repetitions of the same experiment: For each $t = 1, 2, \ldots, T$, estimates $\mu^{(t)}_n$ were obtained from independent runs of the chain, for a range of values of $n$, and the variance of each estimator was estimated by the sample variance,

$$\frac{1}{T-1}\sum_{t=1}^{T}\big(\mu^{(t)}_n - \bar\mu_n\big)^2,$$

where $\bar\mu_n$ is the average of the $T$ estimates $\mu^{(t)}_n$.