class visbrain.objects.GridSignalsObj(name, data, axis=-1, plt_as='grid', n_signals=10, lw=2.0, color='white', title=None, title_size=10, title_bold=True, title_visible=True, decimate='auto', transform=None, parent=None, verbose=None)[source]

Take a VisPy visual and turn it into a compatible Visbrain object.

name : string

The name of the GridSignals object.

data : array_like

The data to plot. The following types are supported :

  • NumPy array : a 1D, 2D or 3D array
  • mne.Epochs
axis : int | -1

Location of the time axis.

plt_as : {‘grid’, ‘row’, ‘col’}

Plotting type. By default data is presented as a grid. Use :

  • ‘grid’ : plot data as a grid of signals.
  • ‘row’ : plot data as a single row. Only horizontal camera movements are permitted.
  • ‘col’ : plot data as a single column. Only vertical camera movements are permitted.
n_signals : int | 10

Number of signals to display if plt_as is row or col.

lw : float | 2.

Line width.

color : string, list, tuple | ‘white’

Line color.

title : list | None

List of strings describing the title of each element. The length of this list depends on the shape of the provided data.

  • 1d = (n_times,) : len(title) = 1
  • 2d = (n_rows, n_times) : len(title) = n_rows
  • 3d = (n_rows, n_cols, n_times) : len(title) = n_rows * n_cols

If an MNE-Python object is passed, titles are automatically inferred.

title_size : float | 10.

Size of the title text.

title_bold : bool | True

Specify if titles should be bold or not.

title_visible : bool | True

Specify if titles should be displayed.

decimate : string, bool, int | ‘auto’

Depending on your system, plotting too large a number of signals can fail. To avoid this, there is a limit of 20 million points; if your data exceeds this number of points, the data is decimated along the time axis. Use :

  • ‘auto’ : automatically find the most appropriate decimation factor
  • int : use a specific decimation ratio (e.g. 2, 3, etc.)
  • False : if you don’t want to decimate
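As a minimal sketch of what an ‘auto’ decimation could look like under a 20-million-point budget (the function name `auto_decimate_factor` and the exact strategy are illustrative assumptions, not visbrain’s actual implementation):

```python
import math
import numpy as np

MAX_PTS = 20_000_000  # the 20-million-point limit mentioned above

def auto_decimate_factor(data, max_pts=MAX_PTS):
    """Smallest integer step so that keeping every n-th time sample
    brings the total number of points under the budget."""
    return max(1, math.ceil(data.size / max_pts))

# 50 million points in total -> a decimation factor of 3 is required.
data = np.zeros((50, 1_000_000), dtype=np.float32)
factor = auto_decimate_factor(data)
decimated = data[..., ::factor]
```

Passing an explicit int as decimate would correspond to skipping the size computation and slicing with that step directly.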
transform : VisPy.visuals.transforms | None

VisPy transformation to set to the parent node.

parent : VisPy.parent | None

The parent node of the object.

verbose : string

Verbosity level.
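As a small sketch of the data/title relationship described above (the array sizes and the `titles` naming scheme are only for illustration):

```python
import numpy as np

# Hypothetical 3D dataset shaped (n_rows, n_cols, n_times),
# plotted as an n_rows x n_cols grid of signals.
n_rows, n_cols, n_times = 3, 4, 1000
data = np.random.rand(n_rows, n_cols, n_times)

# For 3D data, one title per grid cell: len(title) = n_rows * n_cols.
titles = ['sig %i-%i' % (r, c) for r in range(n_rows)
          for c in range(n_cols)]
```

With visbrain installed, the object would then be created as `GridSignalsObj('grid', data, title=titles)` and displayed with `preview()`.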


__init__(name, data[, axis, plt_as, …]) Init.
animate([step, interval, iterations]) Animate the object.
copy() Get a copy of the object.
describe_tree() Tree description.
preview([bgcolor, axis, xyz, show, obj, …]) Previsualize the result.
record_animation(name[, n_pic, bgcolor]) Record an animated object and save as a *.gif file.
render() Render the canvas.
screenshot(saveas[, print_size, dpi, unit, …]) Take a screenshot of the scene.
set_shortcuts_to_canvas(canvas) Set shortcuts to a VisbrainCanvas.
to_dict() Return a dictionary of all colorbar args.
to_kwargs([addisminmax]) Return a dictionary for input arguments.
update() Function to run when an update is needed.
update_from_dict(kwargs) Update attributes from a dictionary.
animate(step=1.0, interval='auto', iterations=-1)[source]

Animate the object.

Note that this method can only be used with 3D objects.

step : float | 1.

Rotation step.

interval : float | ‘auto’

Time between events in seconds. The default is ‘auto’, which attempts to find the interval that matches the refresh rate of the current monitor. Currently this is simply 1/60.

iterations : int | -1

Number of iterations. Can be -1 for infinite.


cmap : Get the cmap value.

copy : Get a copy of the object.

data_folder : Get the data_folder value.

name : Get the name value.

parent : Get the parent value.

preview(bgcolor='black', axis=False, xyz=False, show=True, obj=None, size=(1200, 800), mpl=False, **kwargs)[source]

Previsualize the result.

bgcolor : array_like/string/tuple | ‘black’

Background color for the preview.

axis : bool | False

Add x and y axis with ticks.

xyz : bool | False

Add an (x, y, z) axis to the scene.

obj : VisbrainObj | None

Pass a Visbrain object if you want to use the camera of another object.

size : tuple | (1200, 800)

Default size of the window.

mpl : bool | False

Use Matplotlib to display the object. This results in a non-interactive figure.

kwargs : dict | {}

Optional arguments are passed to the VisbrainCanvas class.

record_animation(name, n_pic=10, bgcolor=None)[source]

Record an animated object and save as a *.gif file.

Note that this method :

  • Can only be used with 3D objects.
  • Requires the Python package imageio.
name : string

Name of the gif file (e.g. ‘myfile.gif’).

n_pic : int | 10

Number of pictures to use to render the gif.

bgcolor : string, tuple, list | None

Background color.


render()[source]

Render the canvas.

img : array_like

Array of shape (n_rows, n_columns, 4) where 4 describes the RGBA components.

screenshot(saveas, print_size=None, dpi=300.0, unit='centimeter', factor=None, region=None, autocrop=False, bgcolor=None, transparent=False, obj=None, line_width=1.0, **kwargs)[source]

Take a screenshot of the scene.

By default, the rendered canvas has the size of your screen. The screenshot() method provides two ways to increase the exported image resolution :

  • Using the print_size, unit and dpi inputs : specify the size of the image at a specific dpi level. For example, you might want a (10cm, 15cm) image at 300 dpi.
  • Using the factor input : multiply the default image size by this factor. For example, if you have a (1920, 1080) monitor and if factor is 2, the exported image should have a shape of (3840, 2160) pixels.
saveas : str

The name of the file to be saved. This file must contain an extension like .png, .tiff, .jpg…

print_size : tuple | None

The desired print size. This argument should be used in association with the dpi and unit inputs. print_size should be a tuple of two floats describing the (width, height) of the exported image at a specific dpi level. The final image might not have the exact desired size; instead, a compromise is found that respects the width/height proportions of the original image.

dpi : float | 300.

Dots per inch for printing the image.

unit : {‘centimeter’, ‘millimeter’, ‘pixel’, ‘inch’}

Unit of the printed size.

factor : float | None

If you don’t want to use the print_size input, factor simply multiplies the resolution of your screen.

region : tuple | None

Select a specific region. Must be a tuple of four integers each one describing (x_start, y_start, width, height).

autocrop : bool | False

Automatically crop the figure in order to have the smallest space between the brain and the border of the picture.

bgcolor : array_like/string | None

The background color of the image.

transparent : bool | False

Specify whether the exported figure should have a transparent background.

obj : VisbrainObj | None

Pass a Visbrain object if you want to use the camera of another object for the scene rendering.

kwargs : dict | {}

Optional arguments are passed to the VisbrainCanvas class.
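The relationship between print_size, dpi and unit can be sketched as a pixel-size computation (an illustrative helper, assuming standard unit conversions; this is not the actual visbrain code):

```python
def print_size_to_pixels(print_size, dpi=300., unit='centimeter'):
    """Pixel dimensions needed to print `print_size` at `dpi`."""
    inches_per_unit = {'centimeter': 1 / 2.54, 'millimeter': 1 / 25.4,
                       'inch': 1.}
    w, h = print_size
    scale = inches_per_unit[unit] * dpi
    return round(w * scale), round(h * scale)

# A (10 cm, 15 cm) image at 300 dpi needs about 1181 x 1772 pixels.
pixels = print_size_to_pixels((10., 15.), dpi=300., unit='centimeter')
```

By contrast, the factor input skips this computation entirely and just scales the on-screen canvas size.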


transform : Get the transform value.

visible_obj : Get the visible_obj value.

Examples using visbrain.objects.GridSignalsObj