Bryce Owen and Faculty Mentor: Brock Kirwan, Psychology
Calculating hippocampal volume from MR images is an essential task in many studies of neurocognition in healthy and diseased populations. The “gold standard” method involves hand tracing, which is accurate but laborious, requiring expertly trained researchers and significant amounts of time. As such, segmenting large datasets with the standard method is impractical, and current automated pipelines are inaccurate at hippocampal demarcation and volumetry. We developed a semi-automated hippocampal segmentation pipeline based on the Advanced Normalization Tools (ANTs) suite of programs. We applied the semi-automated segmentation pipeline to 172 scans (59 female) from groups that included participants diagnosed with autism spectrum disorder, healthy older adults (mean age 67), and healthy younger controls. We found that the pipeline performed best when it included manually placed landmarks and when it used a template generated from a heterogeneous sample (one that included the full variability of group assignments) rather than a template generated from more homogeneous samples (using only individuals within a given age or neuropsychiatric diagnosis group). Additionally, the semi-automated pipeline required much less time (5 minutes per brain) than manual segmentation (30-60 minutes per brain).
MRI data were obtained from the BYU MRI research facility using a 3T Siemens TIM Trio scanner. Manual segmentation of scans was done by four researchers, three novice and one expert. All researchers followed the same protocol as outlined in Insausti et al. (1998). Inter-rater reliability was calculated using the Dice similarity coefficient (DSC), which has a theoretical range of 0 to 1, with a score of 1 indicating complete similarity. A new pipeline for semi-automated segmentation was developed from an existing pipeline used in rhesus macaque research (Hunsaker & Amaral, 2014). Landmarks were manually placed within both hippocampi following the protocol established by Hunsaker & Amaral. The Advanced Normalization Tools (ANTs) software was then used to render a study-specific template from participants in all groups, in contrast to the homogeneous templates used by Hunsaker & Amaral; we expected this heterogeneous template to be superior to templates based on homogeneous samples. The template was then manually landmarked, and the landmarks were warped onto individual scans to calculate total hippocampal volume.
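The final volumetry step above reduces to a simple computation once a binary hippocampal mask exists in each scan's native space: count the labeled voxels and multiply by the volume of one voxel (taken from the image header). The sketch below is not part of the published pipeline; the function name and voxel dimensions are illustrative assumptions.

```python
import numpy as np

def hippocampal_volume_mm3(mask, voxel_dims_mm=(1.0, 1.0, 1.0)):
    """Total volume of a binary segmentation mask in cubic millimetres.

    Volume = (number of labeled voxels) * (volume of one voxel).
    voxel_dims_mm would come from the scan's image header in practice.
    """
    voxel_volume = float(np.prod(voxel_dims_mm))  # mm^3 per voxel
    return float(np.count_nonzero(mask)) * voxel_volume

# Toy example: 10x10x10 labeled block at 1 mm isotropic resolution
mask = np.zeros((20, 20, 20), dtype=bool)
mask[:10, :10, :10] = True
print(hippocampal_volume_mm3(mask))                          # 1000 voxels -> 1000.0 mm^3
print(hippocampal_volume_mm3(mask, (1.0, 1.0, 2.0)))         # thicker slices -> 2000.0 mm^3
```

Because the warped template landmarks yield such a mask per hemisphere, total hippocampal volume is just this computation summed over the left and right masks.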
To assess the performance of the semi-automated pipeline (SAP), we compared hippocampal volumes obtained with the SAP to those obtained with manual segmentation by both novice and expert researchers, using the Dice similarity coefficient. Results showed that the expert tracer's (R4) segmentations had high similarity with those produced by the SAP. This correspondence between the SAP and R4 is especially notable given that the landmarking step of the SAP was performed by a novice researcher (R1) with only an hour of instruction, compared with the several hundred hours of segmentation experience that R4 has.
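The overlap metric used in these comparisons, the Dice similarity coefficient, is defined as DSC = 2|A ∩ B| / (|A| + |B|) for two binary masks A and B. The snippet below is a minimal illustration of that formula on toy masks, not code from the study; the function name and grid sizes are assumptions.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks.

    DSC = 2|A intersect B| / (|A| + |B|); 0 = no overlap, 1 = identical.
    """
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as complete agreement
    return 2.0 * np.logical_and(a, b).sum() / total

# Toy example: two partially overlapping 2x2x2 blocks on a 4x4x4 grid
a = np.zeros((4, 4, 4), dtype=bool)
b = np.zeros((4, 4, 4), dtype=bool)
a[1:3, 1:3, 1:3] = True   # 8 voxels
b[1:3, 1:3, 2:4] = True   # 8 voxels, 4 shared with a
print(dice_coefficient(a, b))  # 2*4 / (8+8) = 0.5
print(dice_coefficient(a, a))  # identical masks -> 1.0
```

Comparing each rater's mask (and the SAP output) pairwise with this metric yields the similarity scores reported above.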
The purpose of this study was to develop a protocol for hippocampal segmentation that was as accurate as the standard method but faster, easier to use, required minimal training, and remained robust in atypical populations. The SAP accomplished this, performing as well as our expert segmenter and outperforming novice segmenters, while minimizing the researcher bias inherent in manual segmentation.