Supplementary Information Page
Updated Sept. 20, 2000
This page contains supplementary information for sites participating in the ICBM project on analysis of non-linear registration algorithms. The goal is to provide brief answers to questions about the validation data. This information is not vital for the project, but may clarify things if questions arise.
1. Project Summary
2. Test Datasets (30 Brains and a Small Subset of Anatomical Models on CD-ROM)
3. Canonical Images used as Registration Targets
4. FAQ (Questions and Answers)
Many algorithms for non-linear registration have been developed by ICBM participants, and several of these are in wide use today. This project aims to compare these algorithms on a sample of 30 datasets, to better understand the criteria for successfully registering MRI data from large human populations.
Each site's algorithms will be tested on the same 30 MRI volumes, 10 from each of 3 ICBM sites. The datasets required for the project are available by CD-ROM (mailed to each participating ICBM site: Sept. 21, 2000), and will shortly be available by FTP as well.
Each subject's data will be aligned with a 3D MRI-based image template (see below for information on templates), using a 3D non-linear deformation field. The deformation fields will be retained and sent to UCLA, where they will be applied to large numbers of anatomical models. These models have already been derived from the 30 image datasets at UCLA. Following the practice of linear registration studies (e.g. West et al., 1998, JCAT), these models are not provided on the CD-ROM. However, a very small number of structure models (for some cortical sulci) are included on the CD-ROM. These are provided to help ensure that the image data are reconstructed correctly at each site. This should not be a problem, as the necessary information is in the image headers, but in the past there has been a danger that (1) accidental left/right flips could occur in the data, or (2) incorrect coordinate offsets along an axis could go unnoticed. Some anatomical data are therefore provided, in case they are helpful, for checking that the coordinate system of the images has been reconstructed correctly. Anatomical models for each subject are supplied as files containing lists of 3D coordinates in that subject's image data. (For more information on the data coordinate system and the datasets, see below.)
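One simple way to use the supplied sulcal models for this sanity check is to map their world coordinates into voxel indices and verify that the points land inside the brain. The sketch below assumes the model file has already been parsed into an (N, 3) array of world coordinates (in mm), and that the origin and voxel size have been read from the image header; the function names and the simple origin/voxel-size mapping are illustrative, not part of the project's distributed tools. A left/right flip or an axis offset will typically drive the hit fraction well below 1.

```python
import numpy as np

def world_to_voxel(points, origin, voxel_size):
    """Convert world-space (mm) coordinates to voxel indices, given
    the origin and voxel size taken from the image header.
    (Assumes axis-aligned axes; a real header may also encode rotations.)"""
    return np.round((points - origin) / voxel_size).astype(int)

def fraction_inside_mask(points, mask, origin, voxel_size):
    """Fraction of model points that land inside the binary brain mask.
    A left/right flip or a wrong coordinate offset along an axis
    typically pushes many points outside the brain (or the volume)."""
    ijk = world_to_voxel(points, origin, voxel_size)
    inside = np.all((ijk >= 0) & (ijk < mask.shape), axis=1)
    hits = mask[tuple(ijk[inside].T)].astype(bool)
    return hits.sum() / len(points)
```

If the returned fraction is close to 1.0 the reconstruction is plausible; a value near 0 (or points mirrored to the wrong hemisphere) suggests a flip or offset error.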
Each site will calculate their 3D deformation fields, transforming the locations in the native image datasets into the target coordinate system. The 3D deformation fields will be sent by each site to UCLA for analysis and mapping of residual variance (see below for information on analysis of residual variance).
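Applying a deformation field to the anatomical models amounts to sampling the field at each model point and adding the resulting displacement. The sketch below is a minimal illustration under an assumed convention: the field is a dense (X, Y, Z, 3) array of displacements in voxel units, and points are already in voxel space. The actual file format and convention each site uses for its transforms will differ; nearest-neighbour sampling is used here for brevity, where a production implementation would interpolate trilinearly.

```python
import numpy as np

def apply_deformation(points_vox, field):
    """Map voxel-space points through a dense deformation field.

    `field` has shape (X, Y, Z, 3) and stores, at each voxel, the
    displacement (in voxels) carrying that location into the target
    space.  Nearest-neighbour sampling keeps the sketch short.
    """
    # Clamp rounded indices to the field's bounds before look-up.
    idx = np.clip(np.round(points_vox).astype(int), 0,
                  np.array(field.shape[:3]) - 1)
    disp = field[idx[:, 0], idx[:, 1], idx[:, 2]]
    return points_vox + disp
```

This is the operation that would be repeated at UCLA for every model point, every subject, and every template.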
As discussed at the ICBM meeting, the ability of a registration algorithm to reduce anatomical variance depends on the dataset to which the images are being aligned. This image will be referred to as the registration target; an example would be the commonly-used ICBM305 dataset. To ensure a fair analysis of each algorithm, targets were chosen (1) that are used in practice, and (2) that allow each site to use a target on which its algorithm performs optimally. Four targets were selected, requiring four separate experiments, aligning all 30 individual datasets to the:
(1) ICBM305 standard dataset. To ensure that we are all using the same ICBM305 dataset (versions exist with and without the scalp), the 'skull-stripped' version of this dataset will be obtained from the MNI BIC site and placed with the other templates at the UCLA FTP site for downloading. Details will follow shortly.
(2) MNI Non-Linear Average Dataset, created by Louis Collins and colleagues using ANIMAL (the Montreal registration algorithm). This will be obtained from Louis Collins and put on the UCLA FTP site. This dataset is so-called because non-linear transformations have been applied to a large number of individual datasets before voxel-wise averaging of the data in a common space. This template has crisper features, especially at the cortex, than the ICBM305. Details will follow shortly.
(3) UCLA Non-Linear Average Dataset, created by Roger Woods from ICBM normals, but using high-order polynomial AIR (the UCLA algorithm), rather than ANIMAL (the Montreal Algorithm). Its construction is described in Woods et al., Human Brain Mapping 8(4). Matrix averaging and transform reconciliation are used to create a template with the mean affine shape for the group, as well as better-resolved internal features. It is not identical to the one in the HBM paper, as it was decided that some brains in the ICBM database, used to make the template, might also occur in the test set for this project. A nonlinear average of 12 brains distinct from those in the test set is therefore provided. This dataset has been obtained from Roger Woods at UCLA, and is available on the CD-ROM.
(4) Dusseldorf Template. The single canonical brain dataset used at the Dusseldorf site. This MR dataset represents the brain of a single individual, rather than a population average. This template was recommended by Thorsten Schormann, and is used as a canonical standard at Dusseldorf, in much the same way as a single individual brain is used for the Karolinska Brain Atlas program. This dataset will be obtained from Thorsten Schormann, and made available via the UCLA FTP site. Details will follow shortly.
In summary the steps are: (1) For each template, 30 non-linear registrations are run, and deformation fields are saved. (2) Transformation files containing the deformation fields are returned to UCLA by FTP. (3) Large numbers of anatomical models for each of the datasets are deformed into the target space defined by each of the 4 templates, and residual variance is assessed.
A Note on Algorithm Settings. Each site runs only its own non-linear algorithm. If the algorithm has several modes, authors are encouraged to use the optimal or default settings. If several default behaviors are in common use (e.g. AIR can be run in any mode from 2nd order through 12th order), each mode may be run separately, and more than one set of results submitted.
A Note on Brain Masks. A set of brain masks is provided. These are binary images in the same coordinate space as the individual image data files, and can be used to remove scalp tissue if desired. It was noted that several algorithms require scalp editing, or perform best when the scalp is removed before processing. If scalp editing were done differently at each site, it could be a confound. It was therefore decided to provide a carefully made mask for each brain, so that masks can be applied consistently at every site.
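Applying the supplied mask is a voxelwise operation: every voxel outside the brain is set to zero. A minimal sketch, assuming the image and mask volumes have already been loaded into arrays on the same voxel grid (as they are on the CD-ROM):

```python
import numpy as np

def strip_scalp(image, mask):
    """Zero out non-brain voxels using the supplied binary brain mask.
    `image` and `mask` must share the same voxel grid."""
    if image.shape != mask.shape:
        raise ValueError("image and mask grids differ")
    return image * (mask > 0)
```

Using the same mask at every site removes the scalp-editing confound noted above.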
This results in a total of 30 (datasets) x 4 (registration targets) = 120 registrations at each of the 4 test sites. Each set of registration transforms will be sent, by the site doing the registrations, to UCLA, so that they can be applied to the sets of models from the corresponding brain.
Evaluation. Results will be judged by comparing each algorithm's performance with regard to accuracy and speed. Accuracy will be computed using a variety of standard statistical metrics and maps, e.g., minimal residual variance computed from large numbers of anatomical models mapped into the target datasets. (More information on variability metrics is provided below.)
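One plausible form of such a residual-variance metric, sketched here for illustration (the project's actual metrics are described elsewhere), is the mean squared distance of each subject's mapped model points from the group mean position at each point:

```python
import numpy as np

def residual_variance(mapped_points):
    """Mean squared distance of mapped model points from the group mean.

    `mapped_points` has shape (subjects, points, 3): the same anatomical
    landmarks after each subject has been deformed into the target space.
    Better registration drives this toward zero, up to the irreducible
    anatomical correspondence error.
    """
    mean_pos = mapped_points.mean(axis=0)                    # (points, 3)
    sq_dist = ((mapped_points - mean_pos) ** 2).sum(axis=2)  # (subjects, points)
    return sq_dist.mean()
</n>```

Computed per template, this gives a single figure of merit for comparing how well each algorithm collapses anatomical variability in each target space.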
Information on these datasets has been sent by mail.
Coordinate System of Images and Anatomical Models. All 30 images are supplied in the native 'world' coordinate system, as defined by each site's MR scanner. This was preferred over sending the data in a coordinate space already based on the original Talairach coordinate system, to avoid resampling the images.
Paul Thompson, Ph.D.