Dynamic causal modeling (DCM) is a framework for specifying models, fitting them to data and comparing their evidence using Bayesian model comparison. It uses nonlinear state-space models in continuous time, specified using stochastic or ordinary differential equations. DCM was initially developed for testing hypotheses about neural dynamics.[1] In this setting, differential equations describe the interaction of neural populations, which directly or indirectly give rise to functional neuroimaging data, e.g. functional magnetic resonance imaging (fMRI), magnetoencephalography (MEG) or electroencephalography (EEG). Parameters in these models quantify the directed influences or effective connectivity among neuronal populations, which are estimated from the data using Bayesian statistical methods.
DCM is typically used to estimate the coupling among brain regions and the changes in coupling due to experimental manipulations (e.g., time or context). A model of interacting neural populations is specified, with a level of biological detail dependent on the hypotheses and available data. This is coupled with a forward model describing how neural activity gives rise to the measured responses. Estimating the generative model identifies the parameters (e.g. connection strengths) from the observed data. Bayesian model comparison is used to compare models based on their evidence, and the best model(s) can then be characterised in terms of their parameters.
DCM studies typically proceed through a series of stages, from experimental design and model specification through to model estimation and comparison.[2] The key stages are briefly reviewed below.
Functional neuroimaging experiments are typically either task-based or examine brain activity at rest (resting state). In task-based experiments, brain responses are evoked by known deterministic inputs (experimentally controlled stimuli). These experimental variables can change neural activity through direct influences on specific brain regions, such as evoked potentials in the early visual cortex, or via a modulation of coupling among neural populations; for example, the influence of attention. These two types of input - driving and modulatory - are parameterized separately in DCM.[1] To enable efficient estimation of driving and modulatory effects, a 2x2 factorial experimental design is often used - with one factor serving as the driving input and the other as the modulatory input.[2]
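As a toy illustration, the mapping from a 2x2 factorial design to separate driving and modulatory input timeseries can be sketched as follows (the block structure and factor labels are hypothetical):

```python
# Hypothetical 2x2 factorial design: factor 1 (stimulus on/off) serves as the
# driving input and factor 2 (attention on/off) as the modulatory input.
# Timeline: four blocks of 10 time bins, one per cell of the design.

blocks = [
    (0, 0),  # no stimulus, no attention
    (1, 0),  # stimulus only          -> driving input alone
    (0, 1),  # attention only
    (1, 1),  # stimulus + attention   -> driving input under modulation
]

u_driving, u_modulatory = [], []
for stim, attn in blocks:
    u_driving += [stim] * 10
    u_modulatory += [attn] * 10
```

Crossing the two factors in this way means the driving and modulatory effects are decorrelated over the session, which is what makes their parameters separately estimable.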
Resting state experiments have no experimental manipulations within the period of the neuroimaging recording. Instead, hypotheses are tested about the coupling of endogenous fluctuations in neuronal activity, or about differences in connectivity between sessions or subjects. The DCM framework includes models and procedures for analysing resting state data, described in the next section.
All models in DCM have the following basic form:

    dx/dt = f(x, u, θ)
    y = g(x, φ) + ε

The first equality describes the change in neural activity x with respect to time (i.e. dx/dt), which cannot be directly observed using non-invasive functional imaging modalities. The evolution of neural activity over time is controlled by a neural function f with parameters θ and experimental inputs u. The neural activity in turn causes the timeseries y (second equality), which are generated via an observation function g with parameters φ. Additive observation noise ε completes the observation model. Usually, the neural parameters θ are of key interest, which for example represent connection strengths that may change under different experimental conditions.
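A minimal sketch of this generative model, with hypothetical choices of the neural function (a leaky decay driven by the input) and observation function (a simple gain), integrated with a forward Euler scheme, might look like:

```python
# Minimal sketch of the DCM generative model with hypothetical example
# functions f and g. Neural states x evolve as dx/dt = f(x, u, theta); data
# are generated as y = g(x, phi) (observation noise omitted for clarity).

def f(x, u, theta):
    # Hypothetical neural model: leaky self-decay plus a driving input.
    return -theta["decay"] * x + theta["drive"] * u

def g(x, phi):
    # Hypothetical observation model: a simple gain on neural activity.
    return phi["gain"] * x

theta = {"decay": 0.5, "drive": 1.0}
phi = {"gain": 2.0}

dt, x, y = 0.1, 0.0, []
for t in range(100):
    u = 1.0 if t < 50 else 0.0     # a box-car experimental input
    x = x + dt * f(x, u, theta)    # Euler integration of the neural state
    y.append(g(x, phi))            # noiseless observation
```

The simulated response rises towards a steady state while the input is on and decays back once it is switched off, mirroring the separation between unobserved neural dynamics and the observed timeseries.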
Specifying a DCM requires selecting a neural model and observation model and setting appropriate priors over the parameters; e.g. selecting which connections should be switched on or off.
The neural model in DCM for fMRI is a Taylor approximation that captures the gross causal influences between brain regions and their change due to experimental inputs. This is coupled with a detailed biophysical model of the generation of the blood oxygen level dependent (BOLD) response and the MRI signal,[1] based on the Balloon model of Buxton et al.,[3] which was supplemented with a model of neurovascular coupling.[4][5] Additions to the neural model have included interactions between excitatory and inhibitory neural populations[6] and non-linear influences of neural populations on the coupling between other populations.[7]
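The Taylor (bilinear) approximation can be sketched for two regions as dx/dt = (A + u_mod·B)x + C·u_drive, where A holds the intrinsic coupling, B the modulatory effects and C the driving inputs. The matrix values below are illustrative, not from any fitted model, and the haemodynamic (Balloon) stage is omitted:

```python
# Sketch of the bilinear neural model used in DCM for fMRI, for two regions:
# dx/dt = (A + u_mod * B) @ x + C * u_drive. All values are illustrative.
import numpy as np

A = np.array([[-0.5, 0.0],    # intrinsic coupling: self-decay and a
              [ 0.4, -0.5]])  # forward connection from region 1 to region 2
B = np.array([[0.0, 0.0],     # modulatory effect: the second input strengthens
              [0.3, 0.0]])    # the region-1 -> region-2 connection
C = np.array([1.0, 0.0])      # driving input enters region 1 only

def simulate(u_drive, u_mod, dt=0.1, n=200):
    x = np.zeros(2)
    trace = []
    for _ in range(n):
        dx = (A + u_mod * B) @ x + C * u_drive
        x = x + dt * dx       # Euler integration
        trace.append(x.copy())
    return np.array(trace)

without = simulate(u_drive=1.0, u_mod=0.0)
with_mod = simulate(u_drive=1.0, u_mod=1.0)
# Modulation increases the steady-state response of region 2.
```

In a full DCM for fMRI, each region's simulated neural activity would then be passed through the Balloon model to generate a predicted BOLD signal.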
DCM for resting state studies was first introduced in Stochastic DCM,[8] which estimates both neural fluctuations and connectivity parameters in the time domain, using Generalized Filtering. A more efficient scheme for resting state data was subsequently introduced which operates in the frequency domain, called DCM for Cross-Spectral Density (CSD).[9][10] Both of these can be applied to large-scale brain networks by constraining the connectivity parameters based on the functional connectivity.[11][12] Another recent development for resting state analysis is Regression DCM,[13] implemented in the Tapas software collection (see Software implementations). Regression DCM operates in the frequency domain, but linearizes the model under certain simplifications, such as having a fixed (canonical) haemodynamic response function. This enables rapid estimation of large-scale brain networks.
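The data feature modelled by DCM for CSD can be illustrated by estimating the cross-spectral density of two simulated signals with Welch's method; the signals below are toy noise processes, not fMRI data:

```python
# Illustrative data feature for DCM for CSD: the cross-spectral density of two
# simulated "resting-state" signals sharing a delayed common fluctuation.
import numpy as np
from scipy.signal import csd

rng = np.random.default_rng(0)
shared = rng.standard_normal(4096)                         # common fluctuation
x1 = shared + 0.5 * rng.standard_normal(4096)
x2 = np.roll(shared, 5) + 0.5 * rng.standard_normal(4096)  # delayed copy

freqs, Sxy = csd(x1, x2, fs=1.0, nperseg=256)
# Sxy is complex: its magnitude reflects shared power at each frequency,
# and its phase reflects the delay between the two signals.
```

In DCM for CSD, summary spectra of this kind (rather than the raw timeseries) are the data the generative model is fitted to.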
DCM for EEG and MEG data uses more biologically detailed neural models than DCM for fMRI, owing to the higher temporal resolution of these measurement techniques. These can be classed into physiological models, which recapitulate neural circuitry, and phenomenological models, which focus on reproducing particular data features. The physiological models can be further subdivided into two classes. Conductance-based models derive from the equivalent circuit representation of the cell membrane developed by Hodgkin and Huxley in the 1950s.[14] Convolution models were introduced by Wilson & Cowan[15] and Freeman[16] in the 1970s and involve a convolution of pre-synaptic input by a synaptic kernel function. A range of specific models from both classes has been implemented in DCM.
Model inversion or estimation is implemented in DCM using variational Bayes under the Laplace assumption.[29] This provides two useful quantities: the log marginal likelihood or model evidence is the probability of observing the data under a given model. Generally, this cannot be calculated explicitly and is approximated by a quantity called the negative variational free energy F, referred to in machine learning as the Evidence Lower Bound (ELBO). Hypotheses are tested by comparing the evidence for different models based on their free energy, a procedure called Bayesian model comparison.
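Bayesian model comparison from (negative) free energies can be sketched as follows; the free-energy values below are made up for illustration:

```python
# Sketch of Bayesian model comparison. The free energy F approximates the log
# model evidence, so differences in F are (approximate) log Bayes factors.
import math

F = {"model_1": -3240.0, "model_2": -3233.5, "model_3": -3241.2}

# Posterior model probabilities under equal priors: a softmax of the
# (approximate) log evidences, computed stably by subtracting the maximum.
m = max(F.values())
w = {k: math.exp(v - m) for k, v in F.items()}
total = sum(w.values())
posterior = {k: v / total for k, v in w.items()}

# Log Bayes factor of model 2 over model 1; values above about 3 are
# conventionally taken as strong evidence.
log_bf_21 = F["model_2"] - F["model_1"]
```

Here model 2 wins decisively: a log Bayes factor of 6.5 corresponds to a posterior probability above 0.99 under equal model priors.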
Model estimation also provides estimates of the parameters, for example connection strengths, which maximise the free energy. Where models differ only in their priors, Bayesian Model Reduction can be used to derive the evidence and parameters of nested or reduced models analytically and efficiently.
Neuroimaging studies typically investigate effects that are conserved at the group level, or which differ between subjects. There are two predominant approaches for group-level analysis: random effects Bayesian Model Selection (BMS)[30] and Parametric Empirical Bayes (PEB).[31] Random Effects BMS posits that subjects differ in terms of which model generated their data - e.g. drawing a random subject from the population, there might be a 25% chance that their brain is structured like model 1 and a 75% chance that it is structured like model 2. The BMS analysis pipeline proceeds in a series of steps, in which candidate models are specified and estimated for each subject, and the probability of each model is then assessed at the group level.
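The random-effects BMS scheme can be sketched as a simple fixed-point estimation of expected model frequencies from per-subject log evidences (following the variational scheme of Stephan et al.); the log-evidence matrix below is fabricated for illustration:

```python
# Sketch of variational random-effects BMS: estimate the Dirichlet posterior
# over model frequencies in the population from per-subject log evidences.
import numpy as np
from scipy.special import digamma

lme = np.array([  # rows: subjects, columns: models (approximate log evidence)
    [-120.0, -115.0],
    [-130.0, -126.0],
    [-110.0, -112.0],   # this subject favours model 1
    [-125.0, -119.0],
])
n_subj, n_models = lme.shape

alpha0 = np.ones(n_models)          # uniform Dirichlet prior over frequencies
alpha = alpha0.copy()
for _ in range(50):                 # fixed-point iterations
    log_u = lme + digamma(alpha) - digamma(alpha.sum())
    u = np.exp(log_u - log_u.max(axis=1, keepdims=True))
    u /= u.sum(axis=1, keepdims=True)   # per-subject posterior over models
    alpha = alpha0 + u.sum(axis=0)      # update Dirichlet counts

expected_freq = alpha / alpha.sum()  # expected model frequencies
```

With three of the four subjects favouring model 2, its expected frequency in the population exceeds that of model 1, while the dissenting subject keeps it well below 1.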
Alternatively, Parametric Empirical Bayes (PEB)[31] can be used, which specifies a hierarchical model over parameters (e.g., connection strengths). It eschews the notion of different models at the level of individual subjects, and instead assumes that people differ in the (parametric) strength of connections. The PEB approach models distinct sources of variability in connection strengths across subjects using fixed effects and between-subject variability (random effects): individual connection strengths are estimated per subject and then entered into a hierarchical (general linear) model at the group level.
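The core of the second-level model in PEB, a general linear model over subject-specific connection strengths, can be sketched as follows. Data and design are simulated, and least squares stands in for PEB's Bayesian estimation (which additionally applies precision-weighted shrinkage):

```python
# Sketch of the group-level GLM underlying PEB: each subject's connection
# strength = group mean + covariate effect + between-subject variability.
import numpy as np

rng = np.random.default_rng(1)
n = 20
covariate = rng.standard_normal(n)             # e.g. a clinical score
X = np.column_stack([np.ones(n), covariate])   # design: group mean + covariate

beta_true = np.array([0.4, 0.2])               # group-level effects
theta = X @ beta_true + 0.05 * rng.standard_normal(n)  # subject connections

# Least-squares estimate of the group effects; PEB proper would instead
# estimate these with variational Bayes, weighting each subject's estimate
# by its posterior precision.
beta_hat, *_ = np.linalg.lstsq(X, theta, rcond=None)
```

The recovered group mean and covariate effect approximate the values used to simulate the data, illustrating how between-subject questions become second-level regression coefficients.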
Developments in DCM have been validated using a number of different approaches.
DCM is a hypothesis-driven approach for investigating the interactions among pre-defined regions of interest, and it is not ideally suited for exploratory analyses.[2] Although methods have been implemented for automatically searching over reduced models (Bayesian Model Reduction) and for modelling large-scale brain networks,[12] these methods require an explicit specification of model space. In neuroimaging, approaches such as psychophysiological interaction (PPI) analysis may be more appropriate for exploratory use, especially for discovering key nodes for subsequent DCM analysis.
The variational Bayesian methods used for model estimation in DCM are based on the Laplace assumption, which treats the posterior over parameters as Gaussian. This approximation can fail in the context of highly non-linear models, where local minima may preclude the free energy from serving as a tight bound on log model evidence. Sampling approaches provide the gold standard; however, they are time-consuming and have typically been used to validate the variational approximations in DCM.[40]
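The way a single Gaussian can fail to characterise the posterior of a non-linear model can be illustrated with a toy Metropolis sampler; the model y = θ² + noise and all numbers below are made up:

```python
# Toy illustration of the Laplace assumption's limits: for y = theta**2 the
# posterior over theta is bimodal (theta near +2 or -2 both explain y = 4),
# but a single Gaussian fitted to one mode, or a sampler started there,
# only captures that mode.
import math, random

random.seed(0)
y_obs, sigma = 4.0, 0.5

def log_post(theta):
    # Flat prior; Gaussian likelihood around the non-linear prediction.
    return -0.5 * ((y_obs - theta ** 2) / sigma) ** 2

# Metropolis sampling, initialised near the positive mode.
theta, samples = 2.0, []
for _ in range(20000):
    prop = theta + random.gauss(0.0, 0.3)
    a = log_post(prop) - log_post(theta)
    if a >= 0 or random.random() < math.exp(a):
        theta = prop
    samples.append(theta)

# The chain never crosses the low-probability barrier at theta = 0, so the
# equally probable negative mode is missed entirely - as a unimodal Gaussian
# approximation would be too.
```

Longer or multi-chain sampling schemes can in principle recover the full bimodal posterior, which is why sampling serves as the gold standard against which the variational estimates are checked.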
DCM is implemented in the Statistical Parametric Mapping (SPM) software package, which serves as the canonical or reference implementation (http://www.fil.ion.ucl.ac.uk/spm/software/spm12/). It has been re-implemented and developed in the Tapas software collection (https://www.tnu.ethz.ch/en/software/tapas.html) and the VBA toolbox (https://mbb-team.github.io/VBA-toolbox/).