# Under the hood

What is the code doing while it’s running? Let’s take a peek under the hood.

## Parameterization

Models in **Bayesian1DEM** are parameterized as a stack of layers, each of which is assigned a constant value of
bulk resistivity. The layers themselves are defined by a sequence of layer interface depths. Because most EM methods
are diffusive and the bulk resistivities of Earth materials span many orders of magnitude, it is often best to
invert for the base-10 logarithm of both the layer resistivities and the layer interface depths. **Bayesian1DEM**
omits the interface at z = 0 and treats the layer beneath the deepest interface as a halfspace.
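For concreteness, here is a minimal sketch of this parameterization (the array names and values are ours, not Bayesian1DEM's):

```python
import numpy as np

# Hypothetical layered model in log10 units (illustrative values only).
log10_interface_depths = np.array([1.0, 2.0, 3.0])    # interfaces at 10, 100, 1000 m
log10_resistivities = np.array([0.5, 1.5, 2.0, 3.0])  # one value per layer

# Recover linear units for forward modeling.
depths = 10.0 ** log10_interface_depths
resistivities = 10.0 ** log10_resistivities

# The interface at z = 0 is implicit, so N interfaces define N + 1 layers;
# the layer below the deepest interface is a halfspace.
assert resistivities.size == depths.size + 1
```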

Following [Blatter19], **Bayesian1DEM** samples the subsurface using two model regions, each with its own inversion
parameters. This design targets the inversion of marine CSEM data, where the properties of the water column differ
vastly from those of the subsurface (yet getting the water-column resistivity right is crucial). By default,
model region 1 lies above model region 2 in depth and represents the water column, while model region 2 represents
the subsurface. **Bayesian1DEM** samples these two regions separately (with relative frequency set by an input
parameter), but the forward modeling of course uses the full, composite model.
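A sketch, with hypothetical numbers and names of our own, of how the two regions compose into the single model the forward code sees (the fixed region boundary here stands in for the seafloor):

```python
import numpy as np

# Region 1: water column (log10 units; values are illustrative, not defaults).
region1_depths = np.array([0.5, 1.5])        # interfaces within the water column
region1_res = np.array([-0.5, -0.5, -0.4])   # roughly 0.3 ohm-m seawater layers

# Region 2: subsurface.
region2_depths = np.array([3.5, 4.0])
region2_res = np.array([1.0, 2.0, 3.0])

seafloor = 3.0  # assumed fixed log10 depth of the boundary between regions

# The sampler updates each region separately, but forward modeling always
# sees the full composite model:
composite_depths = np.concatenate([region1_depths, [seafloor], region2_depths])
composite_res = np.concatenate([region1_res, region2_res])
assert composite_res.size == composite_depths.size + 1
```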

## Markov chains

**Bayesian1DEM** uses Markov chains: sequences of models that grow iteratively as new models are appended to the end of the
chain. At each step of the algorithm, **Bayesian1DEM** generates a new model (called the “proposal”) by modifying
the current model in one of four ways:

- Adding a new layer interface at a random location
- Deleting a randomly chosen existing layer interface
- Moving a randomly chosen existing layer interface to a new location drawn from a Gaussian distribution centered on the interface’s current location
- Perturbing the resistivity of every layer with a Gaussian distribution whose mean is that layer’s current resistivity
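The four proposal moves could be sketched like this (a toy implementation under our own naming, not Bayesian1DEM's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

def propose(depths, res, step_depth=0.1, step_res=0.1, z_min=0.0, z_max=4.0):
    """Pick one of the four perturbations at random and apply it.
    `depths` holds sorted interface depths, `res` one value per layer."""
    depths, res = depths.copy(), res.copy()
    move = rng.integers(4)
    if move == 0:                            # birth: add an interface
        z_new = rng.uniform(z_min, z_max)
        i = np.searchsorted(depths, z_new)
        depths = np.insert(depths, i, z_new)
        res = np.insert(res, i, res[i])      # new layer copies its host's value
    elif move == 1 and depths.size > 0:      # death: delete a random interface
        i = rng.integers(depths.size)
        depths = np.delete(depths, i)
        res = np.delete(res, i)
    elif move == 2 and depths.size > 0:      # move one interface (Gaussian step)
        i = rng.integers(depths.size)
        depths[i] += step_depth * rng.standard_normal()
        depths.sort()
    else:                                    # perturb every layer's resistivity
        res += step_res * rng.standard_normal(res.size)
    return depths, res
```

Note that every move preserves the invariant that there is exactly one more layer than there are interfaces.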

**Bayesian1DEM** then accepts or rejects the proposal model on the basis of how well it fits the data and
our prior assumptions (its posterior probability) compared to the current model, using the standard Metropolis-Hastings
accept-reject criterion. If the proposal is accepted, it becomes the next model in the chain. If it is rejected,
the current model is copied and becomes the next model in the chain as well. The algorithm then repeats, generating
a new proposal model from the new current model.
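The accept/reject step can be written, in log space and assuming symmetric fixed-dimension proposals (trans-dimensional moves additionally carry a proposal-ratio term, omitted here for brevity), as:

```python
import numpy as np

rng = np.random.default_rng(1)

def metropolis_accept(log_post_proposal, log_post_current):
    """Accept with probability min(1, posterior ratio), computed in log
    space for numerical stability (illustrative helper, our naming)."""
    log_alpha = log_post_proposal - log_post_current
    return np.log(rng.random()) < log_alpha

# A proposal with higher posterior probability is always accepted:
assert metropolis_accept(-10.0, -20.0)
```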

## Output: the model ensemble

Once the chain is long enough that the statistics of the model parameters become stationary (they no longer change
with chain step) and the initial stage during which the chain finds models that fit the data (the “burn-in”) is removed,
the models in the Markov chain represent samples drawn from the Bayesian posterior distribution. In other words, the
statistics of the model parameters across all the models in the chain (the “model ensemble”) represent the Bayesian
posterior uncertainty. Thus, once **Bayesian1DEM** has finished running, the output is a collection of models (saved
in the output .mat files under the variable *s_ll*) that can be mined for all sorts of statistical information: mean,
median, variance, interquartile range, 90% credible interval, etc.
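With the ensemble loaded (e.g. via `scipy.io.loadmat` on the output .mat files), these statistics reduce to standard array operations. A sketch using a fabricated stand-in ensemble of log10 resistivities at a single depth:

```python
import numpy as np

# Stand-in ensemble: in practice these samples would come from the chain.
rng = np.random.default_rng(2)
ensemble = rng.normal(loc=1.5, scale=0.3, size=10_000)

mean = ensemble.mean()
median = np.median(ensemble)
iqr = np.subtract(*np.percentile(ensemble, [75, 25]))  # interquartile range
ci90 = np.percentile(ensemble, [5, 95])                # 90% credible interval
```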

## Flexibility

**Bayesian1DEM** is agnostic to both the physical property and the data type. The core of the algorithm knows only that the
“model” is a stack of layers defined by a series of layer interfaces, with one or more constant properties ascribed
to each layer. As long as you have a forward modeling code that you can call to generate predicted data from the current
model as defined by **Bayesian1DEM**, you are good to go. This means you can rather easily modify **Bayesian1DEM** to
invert whatever data you want, so long as a trans-dimensional, piecewise-constant stack of layers is an appropriate
model parameterization.
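In other words, the only contract is a callable that maps a model to predicted data. A toy sketch of that plug-in interface (the function names and the placeholder “physics” are ours, purely for illustration):

```python
import numpy as np

def chi_squared_misfit(model, forward, data, noise):
    """All the sampler needs from the physics: forward(model) -> predicted
    data, compared against observations (illustrative helper)."""
    predicted = forward(model)
    return np.sum(((data - predicted) / noise) ** 2)

def toy_forward(model):
    """Placeholder forward model: repeats the mean log10 resistivity for
    each datum. NOT real physics; it only demonstrates the interface."""
    depths, res = model
    return np.full(3, res.mean())

model = (np.array([1.0, 2.0]), np.array([0.5, 1.5, 2.5]))
data = np.array([1.4, 1.5, 1.6])
misfit = chi_squared_misfit(model, toy_forward, data, noise=np.full(3, 0.1))
```

Swapping `toy_forward` for a real CSEM, MT, or TEM forward solver is the only change needed to invert a different data type.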