
We are fortunate in DBS surgery that we can acquire large volumes of clinical data to assess our surgical outcomes robustly. Below are some analyses to show what can be done quickly and efficiently (with a little bit of coding). By doing this, I hope it will be easier to make use of these data, either for audit or research purposes. It's also a really nice way to get into Python and see all the potential it has for data analysis (and more). I've used Jupyter Notebook, Plotly, RainCloudPlots, and seaborn for these examples, all of which are awesome.

To get started, just download the data from my GitHub repository. It uses the terrific Jupyter Notebook, so you can literally just click through the examples below. I've set it up to run using the randomised data provided, so the actual results of the analysis shouldn't be interpreted. I've also included a nice example Excel file formatted in a way that makes data entry straightforward. To analyse your own data, just add it to the spreadsheet. Have fun :)

To start, we perform some exploratory data analysis. This first example is a plotmatrix of all outcome variables. It can be used to gauge whether there is any covariance / redundancy between the outcome variables, in which case one may consider removing them from subsequent modelling. You also get a histogram of each variable's spread along the diagonal.

The next analysis plots the electrode contacts. This uses a violin plot, which is really helpful because it shows the boxplot, the data distribution, and the raw data all at once. Note that the closest contact to the target nucleus is used. I've compared targeting between two different nuclei, as well as accuracy relative to the overall nucleus and its specific motor component.
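
A quick sketch of the idea with seaborn, overlaying raw points on the violins. The target names and distances below are simulated placeholders, not real accuracy data:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import numpy as np
import pandas as pd
import seaborn as sns

rng = np.random.default_rng(1)
# Simulated "closest contact to target" distances for two nuclei
accuracy = pd.DataFrame({
    "target": ["STN"] * 40 + ["GPi"] * 40,
    "distance_mm": np.concatenate([
        np.abs(rng.normal(1.5, 0.6, 40)),
        np.abs(rng.normal(2.0, 0.8, 40)),
    ]),
})

# Violin shows the distribution with an inner boxplot; stripplot
# overlays the raw data points
ax = sns.violinplot(data=accuracy, x="target", y="distance_mm", inner="box")
sns.stripplot(data=accuracy, x="target", y="distance_mm",
              color="black", size=3, ax=ax)
ax.figure.savefig("contact_accuracy.png")
```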

Now we move on to assessing change in outcomes before and after surgery. Similar to the violin plots above, RainCloudPlots are tremendously useful in showing not only a boxplot but also the data distribution and raw data. With this we can assess the change in summary statistics and also inspect how the raw data change.
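
The notebook uses the RainCloudPlots package for this, but the idea can be hand-rolled in plain matplotlib: a half-violin ("cloud"), a boxplot, and jittered raw points ("rain"). The pre/post scores here are simulated:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(2)
pre = rng.normal(40, 8, 50)    # hypothetical pre-op motor scores
post = rng.normal(25, 7, 50)   # hypothetical post-op motor scores

fig, ax = plt.subplots()
for i, (label, scores) in enumerate([("pre-op", pre), ("post-op", post)]):
    # "cloud": half violin showing the distribution
    parts = ax.violinplot(scores, positions=[i], showextrema=False)
    for body in parts["bodies"]:
        verts = body.get_paths()[0].vertices
        verts[:, 0] = np.clip(verts[:, 0], i, None)  # keep right half only
    # boxplot for the summary statistics
    ax.boxplot(scores, positions=[i], widths=0.1)
    # "rain": jittered raw data to the left of the cloud
    ax.scatter(i - 0.2 + rng.uniform(-0.05, 0.05, scores.size),
               scores, s=8, alpha=0.6)
ax.set_xticks([0, 1])
ax.set_xticklabels(["pre-op", "post-op"])
fig.savefig("raincloud.png")
```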

So far we have assessed the raw data for our outcomes of interest, checked if they are independent, and looked at our electrode accuracy. Next, we may wish to inspect how our outcomes relate to electrode placement. Here is a simple linear regression analysis. The distributions of the dependent and independent variables are displayed, as are the regression line, fit, and raw data. That's a lot of information on one simple plot, but it's necessary for accurate analysis visualisation.

Finally, we may wish to assess how multiple variables affect our outcome of interest. Here we use multiple regression to assess how motor outcome is affected by accuracy for different targets.
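
A minimal multiple-regression sketch with statsmodels; the two predictor names (accuracy at two hypothetical targets) and all values are made up for illustration:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 80
# Simulated distance-to-target for two nuclei (placeholder names)
df = pd.DataFrame({
    "stn_distance": np.abs(rng.normal(1.5, 0.6, n)),
    "gpi_distance": np.abs(rng.normal(2.0, 0.8, n)),
})
# Simulated motor outcome depending on both accuracies plus noise
df["motor_outcome"] = (50 - 6 * df["stn_distance"]
                       - 2 * df["gpi_distance"] + rng.normal(0, 4, n))

# Fit motor outcome on both accuracy measures at once
model = smf.ols("motor_outcome ~ stn_distance + gpi_distance", data=df).fit()
print(model.summary())
```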

To say I'm a big Matlab fan would be an understatement. Fortunately, functional neurosurgery affords many opportunities for using Matlab, and in return Matlab can offer some exciting rewards for functional neurosurgery. Here are some examples based on the terrific Lead-DBS toolbox (also written in Matlab).

First, making "Fusion Reports" (post-op assessments of DBS accuracy) is a routine part of DBS clinical practice, both for audit and patient management. Here is a simple way to use some of the interesting data from Lead-DBS - electrode contact distances and 2D locations - to efficiently make a PDF report. It's also a great way to get into using LaTeX. Why not just use another word processor? LaTeX makes organising all the images really simple, the whole pipeline runs automatically (only a few inputs, e.g. patient name, are required), and it produces a professional PDF that is also nice and small. To run this, open your Lead-DBS analysis in Matlab, then run the script lead_distances_latex.m (select your target inside this script). This generates the text report, which you then copy into the relevant part of the fusionreport_latex.tex file. Here's one I made earlier:

    %LEAD_DISTANCES_LATEX Script to extract data on lead accuracy
    %   Provides data for latex reports.
    %   Usage: define target,      STN, GPI, VIM
    %   Outputs: .mat & .csv file of distances + plots & coords
    %   NB: needs ea.stats open
    %   NNB: set for atlas Distal (medium)
    % Michael Hart, University of British Columbia, August 2020
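
To show the same idea outside Matlab, here's a hedged Python sketch of the report-generation step: formatting contact distances as a LaTeX table fragment ready to paste into a report template. The sides and distances are made up, and this is not the actual lead_distances_latex.m logic:

```python
# Hypothetical distance-to-target (mm) per side
contacts = {"right": 1.2, "left": 1.8}

# Build one LaTeX table row per side
rows = "\n".join(
    f"{side.capitalize()} & {dist:.1f} \\\\" for side, dist in contacts.items()
)
fragment = (
    "\\begin{tabular}{lr}\n"
    "Side & Distance (mm) \\\\\n\\hline\n"
    f"{rows}\n"
    "\\end{tabular}\n"
)

# Write the fragment for inclusion in the report template
with open("distances_fragment.tex", "w") as f:
    f.write(fragment)
```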


In a similar vein, it's also good to include location accuracy data in a surgical logbook or database. Try this simple script to do the job:

    %LEAD_LOGBOOK Report on electrode accuracy (e.g. for a surgical logbook)
    %   Usage:  define target,          STN, GPI, VIM
    %           define "in" distance,   distance considered inside the target
    %   Outputs: .mat & .csv file of distances + plots & coords
    %   NB: run from patient directory - needs ea.stats open
    %   NNB: all atlases are Distal (medium)
    % Michael Hart, University of British Columbia, November 2020


Moving on, if you have group data, you may wish to look for trends in accuracy (e.g. is the second side of implantation more variable in terms of accuracy due to brain shift? Or is there a systematic error with the frame?). This makes a type of heatmap to inspect these data more easily.

    % Script for analysing electrode targetting errors
    % Just set group directory & target below
    % NB: set for distal medium atlas
    % NNB: saves & returns to group directory
    % Michael Hart, University of British Columbia, December 2020
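
For a flavour of the heatmap idea in Python, here's a sketch with seaborn: per-patient targeting error by side, which makes side-to-side trends easy to eyeball. All values are simulated:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import numpy as np
import pandas as pd
import seaborn as sns

rng = np.random.default_rng(5)
n_patients = 20
# Simulated errors; the second side is hypothetically more variable
errors = pd.DataFrame({
    "first side": np.abs(rng.normal(1.2, 0.4, n_patients)),
    "second side": np.abs(rng.normal(1.6, 0.6, n_patients)),
}, index=[f"patient {i + 1}" for i in range(n_patients)])

# One row per patient, one column per side, coloured by error magnitude
ax = sns.heatmap(errors, annot=True, fmt=".1f", cmap="viridis",
                 cbar_kws={"label": "error (mm)"})
ax.figure.savefig("error_heatmap.png")
```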


Some centres have a significant volume of single lead implantations. Here is a way to incorporate these into your Lead-DBS workflow.

    %LEAD_FLIPPER Duplicates leads for viewing single side results as a group
    %   Usage: subject to duplicate,      absolute path
    %   Outputs: ea_reconstruction.mat in new lead_flipped folder within working directory
    %   NB: set for Medtronic 3389
    %   NNB: code based on discussion here
    %   []
    % Michael Hart, University of British Columbia, November 2020
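
The core geometric idea of lead_flipper can be shown in a few lines of numpy: mirror an electrode's contact coordinates across the midline (negate x in MNI-style space) so single-sided implants can be pooled as a group. The real script rewrites Lead-DBS's ea_reconstruction.mat; the coordinates here are made up:

```python
import numpy as np

# Hypothetical contact coordinates (x, y, z) in mm for one lead
lead = np.array([
    [12.0, -13.0, -7.0],
    [12.5, -12.0, -5.0],
    [13.0, -11.0, -3.0],
    [13.5, -10.0, -1.0],
])

# Mirror left/right across the x = 0 midline; y and z are unchanged
flipped = lead.copy()
flipped[:, 0] *= -1
```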


Finally, the group analysis I showed above (in Python) can also be done in Matlab. I'd emphasise that it's not the programming language that makes the difference here, but rather the data and the thought that goes into the analysis (although each language comes with some pretty neat features).

We can start by analysing the outcomes with plotmatrix & rainclouds.

Then we can move on to analysing electrode accuracy. Here are two styles of plots comparing sides, targets, distances, and volumes of activated tissue (VATs).

Finally, we can compare outcomes with accuracy. Don't forget to correct for multiple comparisons!
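
One way to do that correction in Python is statsmodels' multipletests, e.g. with Benjamini-Hochberg FDR; the p-values below are made up:

```python
from statsmodels.stats.multitest import multipletests

# Hypothetical uncorrected p-values from several outcome-vs-accuracy tests
pvals = [0.01, 0.04, 0.20, 0.003]

# Benjamini-Hochberg false discovery rate correction at alpha = 0.05;
# returns which hypotheses survive and the corrected p-values
reject, p_corrected, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
```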

Diffusion imaging and tractography have made invaluable contributions to our understanding of the brain in a relatively short time. Fortunately, it's not that difficult to get some basic diffusion imaging on clinical MRI scanners. I wanted to help inspire people to look at these data by showing what can be done and providing the means to do it. Hopefully, this will be a stepping stone to some collaboration between clinical centres, both in terms of code and sequence development. Note that this isn't meant to be a definitive guide to doing DBS with tractography (if there's such a thing), and the sequences are deliberately designed to be simple (and hence mirror what is achievable clinically). Nevertheless, I hope it's fun :)

All the code is available via GitHub.


    (c) Michael Hart, University of British Columbia, August 2020

    Function to run tractography on clinical DBS data (e.g. from UBC Functional Neurosurgery Programme)

    Based on the following data: GE scanner, 3 Tesla, 32 Direction DTI protocol

    Example: --data=diffusion.nii.gz --T1=mprage.nii.gz --bvecs=bvecs.txt --bvals=bvals.txt


    --data          diffusion data (e.g. standard = single B0 as first volume)
    --T1            structural (T1) image
    --bvecs         bvecs file
    --bvals         bvals file

    --acqparams     acquisition parameters (custom values, for Eddy/TopUp)
    --index         diffusion PE directions (custom values, for Eddy/TopUp)
    --segmentation  additional segmentation template (for segmentation: standard is HarvardOxford)
    --parcellation  additional parcellation template (for connectomics: standard is HarvardOxford)
    -d              runs topup & eddy (see code for default acqparams/index parameters or enter custom as above)
    -p              parallel processing (slurm)*
    -o              overwrite
    -h              show this help
    -v              verbose

    1.  Baseline quality control
    2.  FSL_anat*
    3.  Freesurfer*
    4.  De-noising with topup & eddy - optional (see code)
    5.  FDT pipeline
    6.  BedPostX
    7.  Registration
    8.  XTRACT (including custom DBS tracts)
    9.  Segmentation (probtrackx2)
    10. Connectomics (probtrackx2)

    Version:    1.0

    History:    original

    NB: requires Matlab, Freesurfer, FSL, ANTs, and set path to codedir
    NNB: SGE / GPU acceleration - change eddy, bedpostx, probtrackx2, and XTRACT calls