
Getting Started with Nobrainer

Nobrainer is a deep learning framework for neuroimaging built on PyTorch. This tutorial covers installation verification, checking hardware availability, listing built-in models, and importing key processing modules.

# Set to True to install pre-release builds of nobrainer.
PRE_RELEASE = False

import subprocess
import sys

try:
    import google.colab  # noqa: F401  # present only when running on Colab
    cmd = [sys.executable, "-m", "pip", "install", "-q",
           "nobrainer", "nilearn", "matplotlib"]
    if PRE_RELEASE:
        cmd.insert(4, "--pre")  # add the --pre flag to the pip command
    subprocess.check_call(cmd)
except ImportError:
    pass  # not on Colab; assume dependencies are already installed locally

1. Import nobrainer and check the version

import nobrainer

print("nobrainer version:", nobrainer.__version__)

2. Check PyTorch and CUDA availability

Nobrainer uses PyTorch as its backend. Training is faster on GPU, but all tutorials work on CPU as well.

import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("CUDA device:", torch.cuda.get_device_name(0))
    print("GPU count:", torch.cuda.device_count())
else:
    print("Running on CPU (this is fine for tutorials)")

3. List available models

Nobrainer ships with several model architectures for segmentation, generation, and self-supervised learning. Some models require optional dependencies (pyro-ppl for Bayesian models, pytorch-lightning for generative models).

from nobrainer.models import list_available_models

print("Available models:")
list_available_models()

You can retrieve a model factory by name:

from nobrainer.models import get as get_model

unet_factory = get_model("unet")
print("UNet factory:", unet_factory)

4. Explore processing imports

The nobrainer.processing module provides the high-level sklearn-style API.

from nobrainer.processing.segmentation import Segmentation
from nobrainer.processing.generation import Generation
from nobrainer.processing.dataset import Dataset, extract_patches

print("Segmentation:", Segmentation)
print("Generation:", Generation)
print("Dataset:", Dataset)
print("extract_patches:", extract_patches)
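Patch extraction splits large volumes into smaller blocks that fit in GPU memory during training. As a rough sketch of the idea in one dimension (pure Python, not nobrainer's `extract_patches`), non-overlapping patch extraction looks like this:

```python
def extract_patches_1d(volume, patch_size):
    """Split a sequence into non-overlapping patches of patch_size.

    Trailing elements that do not fill a whole patch are dropped.
    Illustrative only; real 3-D patch extraction applies the same
    slicing along each spatial axis.
    """
    n_patches = len(volume) // patch_size
    return [
        volume[i * patch_size:(i + 1) * patch_size]
        for i in range(n_patches)
    ]

data = list(range(10))
print(extract_patches_1d(data, 4))  # -> [[0, 1, 2, 3], [4, 5, 6, 7]]
```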

5. Check optional dependencies

Some features require optional packages. This cell reports what is available.

optional_deps = {
    "nibabel": "NIfTI I/O",
    "nilearn": "Neuroimaging visualization",
    "scipy": "Image processing utilities",
    "pyro": "Bayesian models (pyro-ppl)",
    "pytorch_lightning": "Generative model training",
    "zarr": "Zarr v3 data pipeline",
}

for mod, description in optional_deps.items():
    try:
        __import__(mod)
        print(f"  [OK] {mod} -- {description}")
    except ImportError:
        print(f"  [--] {mod} -- {description} (not installed)")
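Importing each module just to test for it also runs its import-time code, which can be slow for heavy packages. A lighter-weight alternative from the standard library checks whether a top-level module can be found without importing it:

```python
import importlib.util

def is_installed(module_name):
    """Return True if a top-level module can be found, without importing it."""
    return importlib.util.find_spec(module_name) is not None

print(is_installed("json"))  # stdlib module, always present -> True
print(is_installed("definitely_not_a_real_module"))  # -> False
```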

Summary

You now have a working nobrainer installation. In the next tutorial we will download sample brain data and begin exploring it.