Abstract: In the present thesis we are concerned with appropriate variance reduction methods for specific classes of Markov chain Monte Carlo (MCMC) algorithms. The variance reduction method of main interest here is that of control variates. More specifically, we focus on control variates of the form U = G − PG, proposed by Henderson (1997), where G is an arbitrary function and PG denotes its one-step-ahead conditional expectation under the transition kernel. A key issue for the efficient implementation of control variates is the appropriate estimation of the corresponding coefficients. In the case of Markov chains, this involves the solution of the Poisson equation for the function of initial interest, which in most cases is intractable. Dellaportas & Kontoyiannis (2012) have elaborated further on this issue and have proven optimality results for reversible Markov chains that bypass the explicit solution of the Poisson equation. In this context, we concentrate on the implementation of those results for the Metropolis-Hastings (MH) algorithm, a popular MCMC technique. In the MH setting, the main issue of concern is the evaluation of the one-step-ahead conditional expectations, since these are usually not available in closed form.

The main contribution of this thesis is the development and evaluation of suitable techniques for using the above type of control variates within the MH setting. The basic approach suggested is to estimate the one-step-ahead conditional expectations by Monte Carlo, as empirical means. In the MH case this is a straightforward task requiring minimal additional analytical effort; however, it is computationally demanding, and alternative methods are therefore also suggested. These include importance sampling of the data already produced by the algorithm (that is, the initially proposed or finally accepted values), a further application of control variates to the estimation of the PG terms, and the parallel exploitation of values that are generated in the course of an MH algorithm but not included in the resulting Markov chain (a hybrid strategy).

The ultimate purpose is to establish a genuinely efficient strategy, that is, a strategy in which the variance reduction attained outweighs the additional computational cost incurred. The applicability and efficiency of the methods are illustrated through a series of diverse applications.
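To make the basic approach concrete, the following is a minimal illustrative sketch (not code from the thesis): a random-walk Metropolis-Hastings sampler for a one-dimensional target, with the control variate U = G − PG, where PG(x) is the one-step-ahead conditional expectation under the MH kernel, estimated by plain Monte Carlo over fresh proposals. The target, the choice of G, the inner sample size, and the simple regression-type coefficient are assumptions made for illustration; the thesis itself employs the optimal coefficient construction of Dellaportas & Kontoyiannis (2012).

```python
# Illustrative sketch only: MH with a control variate U = G - PG,
# where PG is estimated as an empirical mean over proposals.
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Example target: standard normal (assumption for illustration).
    return -0.5 * x**2

def G(x):
    # Arbitrary function defining the control variate U = G - PG.
    return x

def accept_prob(x, y):
    # MH acceptance probability for a symmetric random-walk proposal.
    return min(1.0, np.exp(log_target(y) - log_target(x)))

def PG_hat(x, sigma, m_inner=50):
    # Monte Carlo estimate of PG(x) = E[G(X_{n+1}) | X_n = x]:
    # average alpha(x,y) G(y) + (1 - alpha(x,y)) G(x) over y ~ q(.|x).
    ys = x + sigma * rng.standard_normal(m_inner)
    alphas = np.array([accept_prob(x, y) for y in ys])
    return np.mean(alphas * G(ys) + (1.0 - alphas) * G(x))

def mh_with_cv(n=20000, sigma=1.0, x0=0.0):
    xs, us = np.empty(n), np.empty(n)
    x = x0
    for i in range(n):
        y = x + sigma * rng.standard_normal()
        if rng.random() < accept_prob(x, y):
            x = y
        xs[i] = x
        us[i] = G(x) - PG_hat(x, sigma)   # control variate U = G - PG
    return xs, us

xs, us = mh_with_cv()
F = xs                                    # function whose mean is estimated
c = np.cov(F, us)
theta = c[0, 1] / c[1, 1]                 # simple regression coefficient (placeholder)
print(F.mean(), (F - theta * us).mean())  # plain vs. control-variate estimate
```

The inner Monte Carlo loop is what makes this estimator computationally demanding, which is precisely the motivation for the cheaper alternatives (importance sampling of the proposed or accepted values, control variates for the PG terms themselves, and the hybrid strategy) summarised above.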