
How long does it take FEAT to analyse a data-set?

There are many factors that affect how long FEAT takes to analyse a data-set. These include the speed of the machine, the amount of RAM and swap space available, the number of time points, the amount of activation present and the number of voxels in the data. In addition, higher-level analyses take longer, since first-level analyses are carried out in native EPI space whilst higher-level analyses are carried out in standard space and use FLAME - a sophisticated Bayesian mixed-effects estimation technique. Hence it is very difficult to give an accurate estimate of how long FEAT should take to run. As a very rough guide, FEEDS includes (among other tests) a first-level analysis with 180 time points and 64 x 64 x 21 voxels which takes less than 30 minutes on a modern PC (Athlon 2GHz) - for other machines see the FEEDS Timing Results. Higher-level analyses are slower than this, and can often take 1-3 hours on a modern PC with a standard group size of 6-12 subjects.

Can I use FEAT to analyse FMRI data from an animal study?

Analysing animal data in FEAT is, in theory, straightforward, although there are some practical difficulties. The basic GLM and statistics are no different for animal data; however, difficulties can occur during preprocessing - particularly with motion correction and brain extraction. One reason that problems occur is the set of scales used in motion correction, which start at 8mm and are often too large for animal brains (e.g. rats). A work-around for this is to modify the voxel size recorded in the Analyze header (using fslchpixdim) so that the total brain size is similar to that of a human (150-200mm in each dimension). Once this is done, note that all values entered into FEAT in mm will refer to this expanded image, and hence the spatial smoothing should be set with this in mind.
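For NIfTI data, an equivalent voxel-size change can also be sketched in Python with nibabel, as an alternative to fslchpixdim. The filenames and the scale factor of 20 below are purely illustrative; the factor should be chosen so that the rescaled brain ends up roughly 150-200mm across in each dimension.

    import numpy as np
    import nibabel as nib

    # Hypothetical filenames; the factor is an example value.
    img = nib.load("rat_func.nii.gz")
    factor = 20.0

    zooms = img.header.get_zooms()                                      # e.g. (0.5, 0.5, 0.5, 2.0) for 4D data
    new_zooms = tuple(z * factor for z in zooms[:3]) + tuple(zooms[3:]) # leave the TR alone

    new_affine = img.affine.copy()
    new_affine[:3, :3] *= factor                                        # keep the affine consistent with the new voxel sizes

    scaled = nib.Nifti1Image(np.asanyarray(img.dataobj), new_affine, header=img.header)
    scaled.header.set_zooms(new_zooms)
    nib.save(scaled, "rat_func_scaled.nii.gz")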

Problems with brain extraction are more serious: for animals whose brains are considerably different from human brains (e.g. rats), BET will normally not work. In such cases it is necessary to turn off brain extraction in Pre-stats and, if necessary, perform it manually or with other, more specialised software.
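If a brain mask has been created by hand or with other software, one way to apply it to the 4D data yourself (with brain extraction turned off in FEAT) is sketched below with nibabel; the filenames are hypothetical and the data is assumed to be 4D.

    import numpy as np
    import nibabel as nib

    # Hypothetical filenames: a 4D EPI series and a manually created brain mask.
    func = nib.load("rat_func.nii.gz")
    mask = nib.load("rat_brain_mask.nii.gz")

    data  = np.asanyarray(func.dataobj).astype(np.float32)   # shape (x, y, z, t)
    brain = np.asanyarray(mask.dataobj) > 0                   # binarise the mask

    masked = data * brain[..., np.newaxis]                    # zero out non-brain voxels in every volume
    nib.save(nib.Nifti1Image(masked, func.affine, header=func.header),
             "rat_func_brain.nii.gz")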

Note that we do not supply any standard or template images for animals, and although "Talairach" coordinates will still be reported, they are meaningless. If an animal-specific template image is available it can easily be used as the Standard Space image and all registrations should work correctly once the voxel size change (see above) has been made.

Can I use FEAT to analyse PET data?

It is possible to use FEAT for analysing PET data, although the default settings are designed for FMRI data and need modification. In particular:

 * turn on intensity normalisation;
 * turn off slice timing correction;
 * use a higher value for spatial smoothing (8mm or more);
 * turn convolution (with either basis functions or the HRF) off for each EV;
 * do not include temporal derivatives or high-pass filtering (PET does not suffer from aliasing or temporal drift);
 * turn pre-whitening off (no FILM), because PET also does not have the problem of temporal autocorrelation.

Note that TR is meaningless for PET, so the default value (or any other value) is fine.

How do I analyse a "sparse-sampling" dataset?

SIMPLE MODELLING

[figure: sparse2]

"FULL" MODELLING

[figure: sparse2]

What is FLAME and when do I use it?

FLAME (FMRIB's Local Analysis of Mixed Effects) is a sophisticated Bayesian estimation method used for higher-level mixed-effects analysis. It is recommended that FLAME always be used for higher-level analysis, as it provides the most accurate statistics available in FEAT.

FLAME uses MH MCMC (Metropolis-Hastings Markov Chain Monte Carlo) sampling to generate the distribution of the higher-level contrasts of parameter estimates (copes), and then fits a general t-distribution to this. In addition, it incorporates knowledge of the first-level results - particularly the first-level variances - in order to avoid the "negative variance problem" (where the estimated mixed-effects variance is less than the first-level variance, implying negative random-effects variance). See the FEAT Manual or relevant publications for more details about higher-level modelling, analysis and the use of FLAME.
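To illustrate the "negative variance problem" with some made-up numbers: a naive random-effects variance estimate is the between-subject variance of the copes minus the average first-level (within-subject) variance, and nothing stops that difference from going negative, which is what FLAME's use of the lower-level variances guards against. The sketch below uses entirely toy values and is not FLAME's actual estimation procedure.

    import numpy as np

    # Toy illustration of the "negative variance problem" (made-up numbers).
    # Six subjects' first-level copes and their first-level variances (varcopes).
    copes    = np.array([2.1, 2.3, 1.9, 2.2, 2.0, 2.1])
    varcopes = np.array([0.5, 0.6, 0.4, 0.5, 0.7, 0.5])

    between = copes.var(ddof=1)        # observed between-subject variability
    within  = varcopes.mean()          # average first-level (fixed-effects) variance

    naive_random_effects_var = between - within
    print(f"between-subject variance:      {between:.3f}")
    print(f"mean first-level variance:     {within:.3f}")
    print(f"naive random-effects variance: {naive_random_effects_var:.3f}")  # negative in this toy example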

My FEAT run doesn't finish - what do I do?

It can take FEAT a long time to analyse large data sets, as discussed above. Progress can be monitored with FeatWatcher or by inspecting the report.log file; in addition, running top will show the process that FEAT is currently executing. After Pre-stats most of the time is spent running film or flame, and these should show as running if things are working correctly. If the run does not seem to be active, or does not finish after an appropriate length of time, then there may be a problem - refer to the "file not found" question for more details on debugging FEAT sessions.

Also note that the FEAT GUI does not disappear once FEAT is finished but remains so that other FEAT runs can be started. To see if FEAT is finished, the best method is to load the report.html web page inside the output .feat or .gfeat directory.

A file not found error occurs during my FEAT run - what do I do?

Errors of this type usually arise as a result of previous stages (such as certain steps in the pre-stats) failing. This can occur due to lack of disk space to write output files, or insufficient swap space to run the necessary programs. The first thing to try once problems with disk space and swap space are ruled out is to re-run the FEAT analysis and see if the same problem occurs. If it does, check the report.log file in the .feat or .gfeat output directory to see if any programs reported error messages. If an error is found then check whether that command can be run at the command line. If not, refer to debugging for that command. If the individual command can be run but not within FEAT, try setting up the entire design again from the beginning and re-running the analysis. If all else fails, email the FSL email list with details of the analysis, and attach both the report.log and design.fsf files.
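As a quick way of spotting which command failed, the report.log can be scanned for lines that look like error messages. The snippet below is only a rough sketch, and "myanalysis.feat" is a hypothetical output directory name.

    from pathlib import Path

    # Crude scan of report.log for error-like messages, with a little context
    # so the failing command line is visible ("myanalysis.feat" is an example name).
    log = Path("myanalysis.feat/report.log").read_text().splitlines()
    for i, line in enumerate(log):
        if "error" in line.lower() or "cannot" in line.lower():
            print("\n".join(log[max(0, i - 2): i + 1]))
            print("-" * 60)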

Can I ignore time points or use a dummy EV to model missing values in FEAT?

Yes - time points can be effectively "ignored" by creating a dummy EV (a confound) which has a value of one for the time point to be ignored and zero everywhere else. This can be created either as a custom input file (1 or 3 column) or using the Square wave input option and correctly setting the Skip and Stop after fields. Filtering should be applied as normal, but no convolution or temporal derivative should be set for this dummy EV.
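For example, to ignore volume 37 of a 180-volume run using a 1-column custom file, the file is simply 180 lines of zeros with a one at the 37th line. A short sketch for generating such a file is below; the run length, volume number and output filename are of course just examples, and one such EV is needed per time point to be ignored.

    import numpy as np

    # Build a 1-column confound EV that "ignores" a single time point:
    # zeros everywhere, with a one at the volume to be removed.
    n_vols = 180          # example run length
    bad_vol = 37          # example volume to ignore (counted from 1)

    ev = np.zeros(n_vols)
    ev[bad_vol - 1] = 1.0
    np.savetxt("ignore_vol37.txt", ev, fmt="%.1f")

This file is then added as an extra EV (a confound) with convolution and temporal derivatives turned off, as described above.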

FEAT says my design is rank deficient - why?

Rank deficiency refers to the case where some combination of the EVs is equal to (or close to) zero. This often occurs in very large design matrices with temporal derivatives, as certain EVs are effectively the same as a combination of other EVs, meaning that their parameter estimates (strengths) cannot be uniquely determined. The default threshold for the rank-deficiency test in FEAT is quite conservative, and often the analysis can be performed successfully even when the warning occurs (especially for ratios greater than 10e-4). However, whenever the warning occurs the design matrix should be examined, together with the correlation and eigenvalue matrices shown alongside it (see the FSL Course Slides or the FEAT Manual for more information). High correlation between semantically distinct EVs (shown as light values off the diagonal in the correlation matrix) is an indication that a real problem exists in estimating the parameters of the specified design, and such cases need to be assessed individually.

Note that in a first-level analysis in FEAT all EVs are demeaned, so combinations of EVs which add up to a constant level (through time) before demeaning will end up as zero and hence be rank deficient. At higher levels the EVs are not demeaned, so it is possible to have EVs that add up to a constant, non-zero regressor without problems of rank deficiency.
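A quick way to see where a design is (close to) rank deficient is to look at the correlations between the EVs and at the ratio of smallest to largest singular value of the design matrix. The sketch below uses a deliberately degenerate toy design; the exact quantity and threshold FEAT uses for its test may differ.

    import numpy as np

    # Toy design: EV3 is the sum of EV1 and EV2, so the matrix is rank deficient.
    rng = np.random.default_rng(0)
    ev1 = rng.standard_normal(100)
    ev2 = rng.standard_normal(100)
    ev3 = ev1 + ev2
    X = np.column_stack([ev1, ev2, ev3])

    # Pairwise correlations between EVs (high off-diagonal values are the warning sign).
    print(np.corrcoef(X, rowvar=False).round(3))

    # Ratio of smallest to largest singular value: (near) zero means (near) rank deficiency.
    s = np.linalg.svd(X, compute_uv=False)
    print("singular value ratio:", s[-1] / s[0])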

What contrast should I use to get ...?

Contrasts are used to formulate statistical questions related to the particular EVs used in an experiment. Consequently the construction of contrasts varies greatly depending on the particular experiment and question to be asked. Some standard t-contrasts exist, such as [1 0 ... 0] which asks the question "when is the first EV's parameter estimate (PE) significantly greater than zero?", and similarly for [0 1 0 ... 0] for the second PE and so forth. Another common contrast is [1 -1 0 0 ... 0] which asks: "when is the first PE significantly greater than the second PE?". As all t-contrasts are thresholded looking for positive t values, the previous questions refer to "greater than" and not "less than". In order to ask "less than" questions, all that needs to be done is to reverse the signs in the previous contrasts. For more information on t-contrasts and on f-contrasts, refer to the FEAT Manual or to any standard reference on statistics and the General Linear Model (GLM).
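In GLM terms, a t-contrast c simply forms the weighted combination of the parameter estimates, c'beta, and divides it by its estimated standard error, so [1 -1 0] tests whether PE1 minus PE2 is greater than zero. The toy sketch below illustrates that calculation with made-up data and ordinary least squares; it ignores temporal autocorrelation and pre-whitening, which FEAT handles via FILM.

    import numpy as np

    # Toy GLM: two EVs plus a constant column, simulated data with PE1 > PE2.
    rng = np.random.default_rng(1)
    n = 100
    X = np.column_stack([rng.standard_normal(n), rng.standard_normal(n), np.ones(n)])
    beta_true = np.array([2.0, 0.5, 1.0])
    y = X @ beta_true + rng.standard_normal(n)

    # Ordinary least squares estimates and residual variance.
    beta, res, rank, _ = np.linalg.lstsq(X, y, rcond=None)
    dof = n - rank
    sigma2 = res[0] / dof

    # t-statistic for the contrast [1 -1 0]: "is PE1 significantly greater than PE2?"
    c = np.array([1.0, -1.0, 0.0])
    cope = c @ beta
    varcope = sigma2 * c @ np.linalg.inv(X.T @ X) @ c
    print("t =", cope / np.sqrt(varcope))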

What's the right terminology for "sessions", "runs" etc?

There is no "right" answer, of course, but for consistency we suggest the following terminology:

A subject goes into the scanner for a study, during which several sessions take place. Each FMRI session is made up of a continuous run of images. In the case of a block-design paradigm, the images are grouped into blocks of constant stimulus state.

What does orthogonalisation of EVs mean, and when do I use it?

Orthogonalisation is a process of modifying an EV so that it does not share any common signal with the other EVs present. Technically, the vectors are altered so that they have zero dot product (i.e. are orthogonal). When orthogonalisation is applied in FEAT, the current EV is altered to be orthogonal to the specified EVs. This means that any signal which this EV shared with the other EVs is, after orthogonalisation, attributed solely to those other EVs. Orthogonalisation should therefore be used very sparingly, and only in situations where it is known a priori that this enforced attribution of signal is scientifically justified. In general it is better to avoid orthogonalisation and let the GLM take the conservative approach when there is shared signal (which is to produce significant results based only on the unique components of the signal, not the shared ones). For more information on orthogonality see the FEAT Manual or the FSL Course Slides.
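In matrix terms, orthogonalising one EV against another simply removes its projection onto that EV, so that the resulting dot product is zero. The sketch below illustrates the effect with made-up regressors; it is not FEAT's internal code.

    import numpy as np

    # Orthogonalise ev2 with respect to ev1 by removing ev2's projection onto ev1.
    rng = np.random.default_rng(2)
    ev1 = rng.standard_normal(100)
    ev2 = 0.6 * ev1 + rng.standard_normal(100)       # ev2 shares signal with ev1

    ev2_orth = ev2 - ev1 * (ev1 @ ev2) / (ev1 @ ev1)

    print("dot product before:", ev1 @ ev2)          # non-zero: shared signal
    print("dot product after: ", ev1 @ ev2_orth)     # ~0: shared part now attributed to ev1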

When do you add temporal derivatives and what are they for?

Temporal derivatives are used to allow the model to fit even when the timing is not exactly correct (e.g. the response is slightly before or after the specified timing). This is useful for compensating for differences between the actual and modelled HRF (Haemodynamic Response Function); because the fit is done on a per-voxel basis, it can also account for regional differences in the HRF. Another common way to account for HRF differences is to use basis function sets, which serve the same purpose although they usually also allow for substantial changes in the HRF shape as well as its timing. Technically, the use of temporal derivatives is an instance of basis functions, and hence the theory for their estimation is identical.
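A temporal derivative regressor is just the (numerical) derivative of the convolved EV, added as an extra column so that the model can absorb small timing shifts. The sketch below uses a crude gamma HRF, an arbitrary block timing and a plain numerical gradient; FEAT's actual HRF convolution and derivative construction differ in detail.

    import numpy as np
    from scipy.stats import gamma

    # Build a boxcar EV, convolve with a simple gamma HRF, and add its temporal
    # derivative as an extra regressor (all shapes and timings are illustrative).
    tr, n_vols = 3.0, 100
    t = np.arange(n_vols) * tr

    boxcar = ((t % 60) < 30).astype(float)           # 30s on / 30s off blocks
    hrf = gamma.pdf(np.arange(0, 30, tr), a=6, scale=1.0)
    ev = np.convolve(boxcar, hrf)[:n_vols]

    ev_deriv = np.gradient(ev, tr)                   # temporal derivative regressor

    X = np.column_stack([ev, ev_deriv, np.ones(n_vols)])
    print(X.shape)                                   # design: EV, its derivative, constant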

Are the results obtained using temporal derivatives biased?

There is no bias in the null distribution of the results obtained using temporal derivatives, and therefore all the statistics are valid. However, the estimation of the effect size does suffer from some bias due to the sampling of the HRF, which is not overcome by temporal derivatives. This results in the higher-level analysis having different sensitivity in different regions. Such varying sensitivity is normal in FMRI and is also caused by varying SNR with coil arrays, physiological noise effects, and susceptibility-induced distortions and signal loss, to name only a few causes.

It is possible to form non-standard statistics (e.g. peak values, RMS of EV combination, etc.) that can reduce the estimation bias, although they require specialised inference. This is the same methodology used for general HRF basis functions (since temporal derivatives are just a special case of basis functions). See the section on Group Level Basis Functions for more information.

What are the red lines in the registration results?

The red lines are edges from one image overlaid on top of the usual grey-scale view of the other image. This is used to assess the registration quality - a good registration should align the red lines with the structural boundaries (major changes in grey-level) of the other image. If there is substantial visible mis-alignment (e.g. in the ventricle boundaries) then alternative settings of the registration should be tried to improve the registration.

Can I use my own (study-specific) template image instead of the avg152?

Yes - simply replace the standard image (avg152T1_brain) in the Registration tab with your own image. NB: make sure that the template and input images have the same left/right convention.

Are the mm coordinates reported by FEAT in MNI space or Talairach space?

Technically they are in MNI space, although most of the documentation, as well as many publications, still refer to it as "Talairach" space.

How can I insert a custom registration into a FEAT analysis?

If you want to run any custom registrations outside of FEAT then you should do the following in order to re-generate the FEAT registration images and co-ordinate tables:

How do I run FEAT on single-slice data?

The current version of FEAT works with single-slice data in the same way as normal multi-slice data. It does not require any special settings.

How do I run higher-level FEAT when some inputs are first-level and some are higher-level?

For example, if you have some subjects where you have just a single session, and some where you have a couple, and you want to do a multi-subject higher-level analysis:

How are motion parameters and other confound EV files processed with respect to filtering?

All EVs (including motion parameters and other confound EVs) are filtered to match the processing applied to the input data.

Can I use an old design.fsf - from a previous version of FSL?

We try to make things backwards compatible, so hopefully this should be possible - let us know if there are problems. You should always pass an old design.fsf through the GUI (i.e., load it into the GUI before running or saving), rather than trying to run it (e.g., with the feat script) directly from the command line.

If you load an old setup into FEAT, you should have a look through all the GUI sections to check that things seem to be set up correctly.

