visbrain.objects.BrainObj

class visbrain.objects.BrainObj(name, vertices=None, faces=None, normals=None, lr_index=None, hemisphere='both', translucent=True, sulcus=False, invert_normals=False, transform=None, parent=None, verbose=None, _scale=1.0, **kw)[source]

Create a brain object.

Parameters:
name : string

Name of the brain object. If name is ‘B1’, ‘B2’ or ‘B3’, a default brain template is used. If name is ‘white’, ‘inflated’ or ‘sphere’, the corresponding template is downloaded (if needed). Otherwise, at least vertices and faces must be defined.

vertices : array_like | None

Mesh vertices to use for the brain. Must be an array of shape (n_vertices, 3).

faces : array_like | None

Mesh faces of shape (n_faces, 3).

normals : array_like | None

Normals to each vertex. If None, the program will try to compute them. Must be an array with the same shape as vertices.

lr_index : array_like | None

Left / Right index for hemispheres. Must be a boolean vector of length n_vertices, where True refers to vertices belonging to the left hemisphere and False to the right hemisphere.

hemisphere : {‘left’, ‘both’, ‘right’}

The hemisphere to plot.

translucent : bool | True

Use translucent (True) or opaque (False) brain.

transform : VisPy.visuals.transforms | None

VisPy transformation to set to the parent node.

parent : VisPy.parent | None

Brain object parent.

verbose : string

Verbosity level.

kw : dict | {}

Optional arguments are used to control the colorbar (See ColorbarObj).

Notes

List of supported shortcuts :

  • s : save the figure
  • <delete> : reset camera

Examples

>>> from visbrain.objects import BrainObj
>>> b = BrainObj('white', hemisphere='right', translucent=False)
>>> b.preview(axis=True)
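
A brain object can also be built from a custom mesh by passing vertices and faces explicitly. The arrays below are purely illustrative (a four-vertex tetrahedron), not an actual brain surface :

>>> import numpy as np
>>> from visbrain.objects import BrainObj
>>> # Illustrative mesh only : four vertices and four triangular faces
>>> vertices = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
>>> faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
>>> b_custom = BrainObj('my_mesh', vertices=vertices, faces=faces)
>>> b_custom.preview()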

Methods

__init__(name[, vertices, faces, normals, …]) Init.
add_activation([data, vertices, …]) Add activation to the brain template.
clean() Clean brain object.
describe_tree() Tree description.
get_parcellates(file) Get the list of supported parcellates names and index.
list([file]) Get the list of all installed templates.
parcellize(file[, select, hemisphere, data, …]) Parcellize the brain surface using a .annot file.
preview([bgcolor, axis, xyz, show, obj, size]) Previsualize the result.
project_sources(s_obj[, project, radius, …]) Project source’s activity or repartition onto the brain object.
remove() Remove a brain template.
reset_camera() Reset the camera.
rotate([fixed, scale_factor, custom, margin]) Rotate the brain using predefined rotations or a custom one.
save([tmpfile]) Save the brain template (if not already saved).
screenshot(saveas[, print_size, dpi, unit, …]) Take a screenshot of the scene.
set_data([name, vertices, faces, normals, …]) Load a brain template.
set_shortcuts_to_canvas(canvas) Set shortcuts to a VisbrainCanvas.
to_dict() Return a dictionary of all colorbar args.
to_kwargs([addisminmax]) Return a dictionary for input arguments.
update() Function to run when an update is needed.
update_from_dict(kwargs) Update attributes from a dictionary.
add_activation(data=None, vertices=None, smoothing_steps=20, file=None, hemisphere=None, hide_under=None, n_contours=None, cmap='viridis', clim=None, vmin=None, vmax=None, under='gray', over='red')[source]

Add activation to the brain template.

This method can be used for :

  • Add activations to specific vertices (data and vertices)
  • Add an overlay (file input)
Parameters:
data : array_like | None

Vector array of data of shape (n_data,).

vertices : array_like | None

Vector array of vertices of shape (n_vtx,). Must be an array of integers.

smoothing_steps : int | 20

Number of smoothing steps (smoothing is used if n_data < n_vtx).

file : string | None

Full path to the overlay file.

hemisphere : {None, ‘both’, ‘left’, ‘right’}

The hemisphere to use to add the overlay. If None, the method tries to infer the hemisphere from the file name.

hide_under : float | None

Hide activations under a certain threshold.

n_contours : int | None

Display activations as contours.

cmap : string | ‘viridis’

The colormap to use.

clim : tuple | None

The colorbar limits. If None, (data.min(), data.max()) will be used instead.

vmin : float | None

Minimum threshold.

vmax : float | None

Maximum threshold.

under : string/tuple/array_like | ‘gray’

The color to use for values under vmin.

over : string/tuple/array_like | ‘red’

The color to use for values over vmax.
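
As a minimal sketch (the data values and vertex indices below are random, purely for illustration), activations can be added to a subset of vertices of the ‘white’ template :

>>> import numpy as np
>>> from visbrain.objects import BrainObj
>>> b = BrainObj('white', translucent=False)
>>> # Random values on 1000 randomly picked vertices (illustrative only)
>>> n_vtx = b.vertices.shape[0]
>>> idx = np.random.choice(n_vtx, 1000, replace=False)
>>> data = np.random.uniform(0., 1., 1000)
>>> b.add_activation(data=data, vertices=idx, smoothing_steps=5,
...                  cmap='viridis', hide_under=.1, clim=(.1, 1.))
>>> b.preview()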

alpha

Get the alpha value.

camera

Get the camera value.

clean()[source]

Clean brain object.

cmap

Get the cmap value.

faces

Get the faces value.

get_parcellates(file)[source]

Get the list of supported parcellates names and index.

This method requires the pandas and nibabel packages to be installed.

Parameters:
file : string

Path to the .annot file.
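
A hedged usage sketch (the .annot path below is hypothetical, and pandas and nibabel must be installed) :

>>> from visbrain.objects import BrainObj
>>> b = BrainObj('inflated')
>>> # 'lh.aparc.annot' is a hypothetical FreeSurfer annotation file
>>> df = b.get_parcellates('lh.aparc.annot')
>>> print(df)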

hemisphere

Get the hemisphere value.

list(file=None)[source]

Get the list of all installed templates.

name

Get the name value.

normals

Get the normals value.

parcellize(file, select=None, hemisphere=None, data=None, cmap='viridis', clim=None, vmin=None, under='gray', vmax=None, over='red')[source]

Parcellize the brain surface using a .annot file.

This method requires the nibabel package to be installed.

Parameters:
file : string

Path to the .annot file.

select : array_like | None

Select the structures to display. Use either a list of indices or a list of structure names. If None, all structures are displayed.

hemisphere : string | None

The hemisphere for the parcellation. If None, the hemisphere will be inferred from the file name.

cmap : string | ‘viridis’

The colormap to use.

clim : tuple | None

The colorbar limits. If None, (data.min(), data.max()) will be used instead.

vmin : float | None

Minimum threshold.

vmax : float | None

Maximum threshold.

under : string/tuple/array_like | ‘gray’

The color to use for values under vmin.

over : string/tuple/array_like | ‘red’

The color to use for values over vmax.
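
A hedged usage sketch (the annotation file path and structure names below are hypothetical) :

>>> from visbrain.objects import BrainObj
>>> b = BrainObj('inflated', hemisphere='left')
>>> # Hypothetical FreeSurfer .annot file ; select a few structures by name
>>> b.parcellize('lh.aparc.annot', select=['insula', 'precentral'],
...              hemisphere='left', cmap='plasma')
>>> b.preview()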

parent

Get the parent value.

preview(bgcolor='black', axis=False, xyz=False, show=True, obj=None, size=(1200, 800), **kwargs)[source]

Previsualize the result.

Parameters:
bgcolor : array_like/string/tuple | ‘black’

Background color for the preview.

axis : bool | False

Add x and y axis with ticks.

xyz : bool | False

Add an (x, y, z) axis to the scene.

obj : VisbrainObj | None

Pass a Visbrain object if you want to use the camera of another object.

size : tuple | (1200, 800)

Default size of the window.

kwargs : dict | {}

Optional arguments are passed to the VisbrainCanvas class.

project_sources(s_obj, project='modulation', radius=10.0, contribute=False, cmap='viridis', clim=None, vmin=None, under='black', vmax=None, over='red', mask_color=None)[source]

Project source’s activity or repartition onto the brain object.

Parameters:
s_obj : SourceObj

The source object to project.

project : {‘modulation’, ‘repartition’}

Project either the source’s data (‘modulation’) or get the number of contributing sources per vertex (‘repartition’).

radius : float

The radius under which activity is projected on vertices.

contribute : bool | False

Specify if sources contribute to both hemispheres.

cmap : string | ‘viridis’

The colormap to use.

clim : tuple | None

The colorbar limits. If None, (data.min(), data.max()) will be used instead.

vmin : float | None

Minimum threshold.

vmax : float | None

Maximum threshold.

under : string/tuple/array_like | ‘gray’

The color to use for values under vmin.

over : string/tuple/array_like | ‘red’

The color to use for values over vmax.

mask_color : string/tuple/array_like | None

The color to use for the projection of masked sources. If None, the color of the masked sources will be used.
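
A minimal sketch, assuming random source coordinates and data (purely illustrative values) :

>>> import numpy as np
>>> from visbrain.objects import BrainObj, SourceObj
>>> # 30 random sources with random activity (illustrative only)
>>> xyz = np.random.uniform(-50., 50., (30, 3))
>>> data = np.random.rand(30)
>>> s_obj = SourceObj('sources', xyz, data=data)
>>> b = BrainObj('B1')
>>> b.project_sources(s_obj, project='modulation', radius=15., cmap='inferno')
>>> b.preview()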

remove()[source]

Remove a brain template.

reset_camera()[source]

Reset the camera.

rotate(fixed=None, scale_factor=None, custom=None, margin=1.08)[source]

Rotate the brain using predefined rotations or a custom one.

Parameters:
fixed : str | ‘top’

Use a fixed rotation :

  • Top view : ‘axial_0’, ‘top’
  • Bottom view : ‘axial_1’, ‘bottom’
  • Left : ‘sagittal_0’, ‘left’
  • Right : ‘sagittal_1’, ‘right’
  • Front : ‘coronal_0’, ‘front’
  • Back : ‘coronal_1’, ‘back’
custom : tuple | None

Custom rotation. This parameter must be a tuple of two floats respectively describing the (azimuth, elevation).
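
For illustration (the custom angles below are arbitrary and assumed to be in degrees) :

>>> from visbrain.objects import BrainObj
>>> b = BrainObj('B1')
>>> b.rotate('left')             # predefined sagittal view
>>> b.rotate(custom=(45., 20.))  # custom (azimuth, elevation), assumed degrees
>>> b.preview()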

save(tmpfile=False)[source]

Save the brain template (if not already saved).

scale

Get the scale value.

screenshot(saveas, print_size=None, dpi=300.0, unit='centimeter', factor=None, region=None, autocrop=False, bgcolor=None, transparent=False, obj=None, line_width=1.0, **kwargs)[source]

Take a screenshot of the scene.

By default, the rendered canvas will have the size of your screen. The screenshot() method provides two ways to increase the exported image resolution :

  • Using print_size, unit and dpi inputs : specify the size of the image at a specific dpi level. For example, you might want a (10cm, 15cm) image at 300 dpi.
  • Using the factor input : multiply the default image size by this factor. For example, if you have a (1920, 1080) monitor and if factor is 2, the exported image should have a shape of (3840, 2160) pixels.
Parameters:
saveas : str

The name of the file to be saved. This file must contain an extension like .png, .tiff, .jpg…

print_size : tuple | None

The desired print size. This argument should be used in association with the dpi and unit inputs. print_size should be a tuple of two floats describing the (width, height) of the exported image for a specific dpi level. The final image might not have the exact desired size but will instead find a compromise that respects the width/height proportions of the original image.

dpi : float | 300.

Dots per inch for printing the image.

unit : {‘centimeter’, ‘millimeter’, ‘pixel’, ‘inch’}

Unit of the printed size.

factor : float | None

If you don’t want to use the print_size input, factor simply multiplies the resolution of your screen.

region : tuple | None

Select a specific region. Must be a tuple of four integers each one describing (x_start, y_start, width, height).

autocrop : bool | False

Automatically crop the figure in order to have the smallest space between the brain and the border of the picture.

bgcolor : array_like/string | None

The background color of the image.

transparent : bool | False

Specify whether the exported figure should have a transparent background.

obj : VisbrainObj | None

Pass a Visbrain object if you want to use the camera of another object for the scene rendering.

kwargs : dict | {}

Optional arguments are passed to the VisbrainCanvas class.
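
A hedged sketch of both resolution strategies (the file names are illustrative) :

>>> from visbrain.objects import BrainObj
>>> b = BrainObj('white', translucent=False)
>>> # A 10cm x 15cm image at 300 dpi
>>> b.screenshot('brain_print.png', print_size=(10., 15.), dpi=300.,
...              unit='centimeter', autocrop=True)
>>> # Or simply double the on-screen resolution
>>> b.screenshot('brain_x2.png', factor=2., transparent=True)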

set_data(name=None, vertices=None, faces=None, normals=None, lr_index=None, hemisphere='both', invert_normals=False, sulcus=False)[source]

Load a brain template.

transform

Get the transform value.

translucent

Get the translucent value.

vertices

Get the vertices value.

visible_obj

Get the visible_obj value.