7. Objects

Visbrain’s objects are small pieces that can be used to accomplish basic visualizations or can be passed to other Visbrain modules (like Brain).

Here’s the list of currently supported modules:

Each object inherits the following methods:

describe_tree() Tree description.
preview([bgcolor, axis, xyz, show, obj]) Preview the result.
screenshot(saveas[, print_size, dpi, unit, …]) Take a screenshot of the scene.
VisbrainObject.describe_tree()[source]

Tree description.

VisbrainObject.preview(bgcolor='white', axis=False, xyz=False, show=True, obj=None, **kwargs)[source]

Preview the result.

Parameters:

bgcolor : array_like/string/tuple | ‘white’

Background color for the preview.

axis : bool | False

Add x and y axis with ticks.

xyz : bool | False

Add an (x, y, z) axis to the scene.

obj : VisbrainObj | None

Pass a Visbrain object if you want to use the camera of another object.

kwargs : dict | {}

Optional arguments are passed to the VisbrainCanvas class.

VisbrainObject.screenshot(saveas, print_size=None, dpi=300.0, unit='centimeter', factor=None, region=None, autocrop=False, bgcolor=None, transparent=False, obj=None, line_width=1.0, **kwargs)[source]

Take a screenshot of the scene.

By default, the rendered canvas will have the size of your screen. The screenshot() method provides two ways to increase the exported image resolution :

  • Using print_size, unit and dpi inputs : specify the size of the image at a specific dpi level. For example, you might want to have a (10cm, 15cm) image at 300 dpi.
  • Using the factor input : multiply the default image size by this factor. For example, if you have a (1920, 1080) monitor and if factor is 2, the exported image should have a shape of (3840, 2160) pixels.
Parameters:

saveas : str

The name of the file to be saved. This file must contain an extension like .png, .tiff, .jpg…

print_size : tuple | None

The desired print size. This argument should be used in association with the dpi and unit inputs. print_size should be a tuple of two floats describing the (width, height) of the exported image for a specific dpi level. The final image might not have the exact desired size, but a compromise is found that respects the width/height proportion of the original image.

dpi : float | 300.

Dots per inch for printing the image.

unit : {‘centimeter’, ‘millimeter’, ‘pixel’, ‘inch’}

Unit of the printed size.

factor : float | None

If you don’t want to use the print_size input, factor simply multiplies the resolution of your screen.

region : tuple | None

Select a specific region. Must be a tuple of four integers each one describing (x_start, y_start, width, height).

autocrop : bool | False

Automatically crop the figure to minimize the space between the brain and the border of the picture.

bgcolor : array_like/string | None

The background color of the image.

transparent : bool | False

Specify if the exported figure should have a transparent background.

obj : VisbrainObj | None

Pass a Visbrain object if you want to use the camera of another object for the scene rendering.

kwargs : dict | {}

Optional arguments are passed to the VisbrainCanvas class.

7.1. Scene object

class visbrain.objects.SceneObj(bgcolor='black', camera_state={}, verbose=None, **kwargs)[source]

Create a scene and add objects to it.

Parameters:

bgcolor : string | ‘black’

Background color of the scene.

show : bool | True

Display the canvas.

camera_state : dict | {}

The default camera state to use.

verbose : string

Verbosity level.

Methods

add_to_subplot(obj[, row, col, row_span, …]) Add object to subplot.
link(*args) Link the camera of several objects of the scene.
add_to_subplot(obj, row=0, col=0, row_span=1, col_span=1, title=None, title_size=12.0, title_color='white', title_bold=True, use_this_cam=False, rotate=None, camera_state={}, width_max=None, height_max=None)[source]

Add object to subplot.

Parameters:

obj : visbrain.object

The visbrain object to add.

row : int | 0

Row location for the object.

col : int | 0

Column location for the object.

row_span : int | 1

Number of rows to use.

col_span : int | 1

Number of columns to use.

title : string | None

Subplot title.

title_size : float | 12.

Title font size.

title_color : string/tuple/array_like | ‘white’

Color of the title.

title_bold : bool | True

Use bold title.

use_this_cam : bool | False

If you add multiple objects to the same scene and want to use the camera of one object as the reference, set this parameter to True.

rotate : string | None

Rotate the scene. Use ‘top’, ‘bottom’, ‘left’, ‘right’, ‘front’ or ‘back’. Only available for 3-D objects.

camera_state : dict | {}

Arguments to pass to the camera.

width_max : float | None

Maximum width of the subplot.

height_max : float | None

Maximum height of the subplot.

link(*args)[source]

Link the camera of several objects of the scene.

Parameters:

args : list

List of tuple describing subplot locations. Alternatively, use -1 to link all cameras.

Examples

>>> # Link cameras of subplots (0, 0), (0, 1) and (1, 0)
>>> sc.link((0, 0), (0, 1), (1, 0))

7.2. Brain object

_images/pic_brain_obj.png

Brain object example

class visbrain.objects.BrainObj(name, vertices=None, faces=None, normals=None, lr_index=None, hemisphere='both', translucent=True, sulcus=False, invert_normals=False, transform=None, parent=None, verbose=None, _scale=1.0, **kw)[source]

Create a brain object.

Parameters:

name : string

Name of the brain object. If name is ‘B1’, ‘B2’ or ‘B3’, a default brain template is used. If name is ‘white’, ‘inflated’ or ‘sphere’, the template is downloaded (if needed). Otherwise, at least vertices and faces must be defined.

vertices : array_like | None

Mesh vertices to use for the brain. Must be an array of shape (n_vertices, 3).

faces : array_like | None

Mesh faces of shape (n_faces, 3).

normals : array_like | None

Normals to each vertex. If None, the program will try to compute them. Must be an array with the same shape as vertices.

lr_index : array_like | None

Left / Right index for hemispheres. Must be a vector of length n_vertices. This vector must be a boolean array where True refers to vertices that belong to the left hemisphere and False to the right hemisphere.

hemisphere : {‘left’, ‘both’, ‘right’}

The hemisphere to plot.

translucent : bool | True

Use translucent (True) or opaque (False) brain.

transform : VisPy.visuals.transforms | None

VisPy transformation to set to the parent node.

parent : VisPy.parent | None

Brain object parent.

verbose : string

Verbosity level.

kw : dict | {}

Optional arguments are used to control the colorbar (See ColorbarObj).

Examples

>>> from visbrain.objects import BrainObj
>>> b = BrainObj('white', hemisphere='right', translucent=False)
>>> b.preview(axis=True)

Methods

set_data([name, vertices, faces, normals, …]) Load a brain template.
rotate([fixed, scale_factor, custom, margin]) Rotate the brain using predefined rotations or a custom one.
project_sources(s_obj[, project, radius, …]) Project source’s activity or repartition onto the brain object.
add_activation([data, vertices, …]) Add activation to the brain template.
get_parcellates(file) Get the list of supported parcellates names and index.
parcellize(file[, select, hemisphere, data, …]) Parcellize the brain surface using a .annot file.
add_activation(data=None, vertices=None, smoothing_steps=20, file=None, hemisphere=None, hide_under=None, n_contours=None, cmap='viridis', clim=None, vmin=None, vmax=None, under='gray', over='red')[source]

Add activation to the brain template.

This method can be used for :

  • Add activations to specific vertices (data and vertices)
  • Add an overlay (file input)
Parameters:

data : array_like | None

Vector array of data of shape (n_data,).

vertices : array_like | None

Vector array of vertices of shape (n_vtx,). Must be an array of integers.

smoothing_steps : int | 20

Number of smoothing steps (smoothing is used if n_data < n_vtx).

file : string | None

Full path to the overlay file.

hemisphere : {None, ‘both’, ‘left’, ‘right’}

The hemisphere to use to add the overlay. If None, the method tries to infer the hemisphere from the file name.

hide_under : float | None

Hide activations under a certain threshold.

n_contours : int | None

Display activations as contours.

cmap : string | ‘viridis’

The colormap to use.

clim : tuple | None

The colorbar limits. If None, (data.min(), data.max()) will be used instead.

vmin : float | None

Minimum threshold.

vmax : float | None

Maximum threshold.

under : string/tuple/array_like | ‘gray’

The color to use for values under vmin.

over : string/tuple/array_like | ‘red’

The color to use for values over vmax.

get_parcellates(file)[source]

Get the list of supported parcellates names and index.

This method requires the pandas and nibabel packages to be installed.

Parameters:

file : string

Path to the .annot file.

parcellize(file, select=None, hemisphere=None, data=None, cmap='viridis', clim=None, vmin=None, under='gray', vmax=None, over='red')[source]

Parcellize the brain surface using a .annot file.

This method requires the nibabel package to be installed.

Parameters:

file : string

Path to the .annot file.

select : array_like | None

Select the structures to display. Use either a list of indices or a list of structure names. If None, all structures are displayed.

hemisphere : string | None

The hemisphere for the parcellation. If None, the hemisphere will be inferred from file name.

cmap : string | ‘viridis’

The colormap to use.

clim : tuple | None

The colorbar limits. If None, (data.min(), data.max()) will be used instead.

vmin : float | None

Minimum threshold.

vmax : float | None

Maximum threshold.

under : string/tuple/array_like | ‘gray’

The color to use for values under vmin.

over : string/tuple/array_like | ‘red’

The color to use for values over vmax.

project_sources(s_obj, project='modulation', radius=10.0, contribute=False, cmap='viridis', clim=None, vmin=None, under='black', vmax=None, over='red', mask_color=None)[source]

Project source’s activity or repartition onto the brain object.

Parameters:

s_obj : SourceObj

The source object to project.

project : {‘modulation’, ‘repartition’}

Project either the source’s data (‘modulation’) or get the number of contributing sources per vertex (‘repartition’).

radius : float

The radius under which activity is projected on vertices.

contribute : bool | False

Specify if sources contribute to both hemispheres.

cmap : string | ‘viridis’

The colormap to use.

clim : tuple | None

The colorbar limits. If None, (data.min(), data.max()) will be used instead.

vmin : float | None

Minimum threshold.

vmax : float | None

Maximum threshold.

under : string/tuple/array_like | ‘gray’

The color to use for values under vmin.

over : string/tuple/array_like | ‘red’

The color to use for values over vmax.

mask_color : string/tuple/array_like | None

The color to use for the projection of masked sources. If None, the color of the masked sources is going to be used.

rotate(fixed=None, scale_factor=None, custom=None, margin=1.08)[source]

Rotate the brain using predefined rotations or a custom one.

Parameters:

fixed : str | ‘top’

Use a fixed rotation :

  • Top view : ‘axial_0’, ‘top’
  • Bottom view : ‘axial_1’, ‘bottom’
  • Left : ‘sagittal_0’, ‘left’
  • Right : ‘sagittal_1’, ‘right’
  • Front : ‘coronal_0’, ‘front’
  • Back : ‘coronal_1’, ‘back’

custom : tuple | None

Custom rotation. This parameter must be a tuple of two floats respectively describing the (azimuth, elevation).

set_data(name=None, vertices=None, faces=None, normals=None, lr_index=None, hemisphere='both', invert_normals=False, sulcus=False)[source]

Load a brain template.

7.3. Colorbar object

_images/pic_cbar_obj.png

Colorbar object example

class visbrain.objects.ColorbarObj(name, rect=(-0.7, -2, 1.5, 4), transform=None, parent=None, verbose=None, **kwargs)[source]

Create a colorbar object.

Parameters:

name : str

Name of the colorbar object. Alternatively, you can pass another object (like BrainObj or SourceObj) to get its colorbar.

rect : tuple | (-.7, -2, 1.5, 4)

Camera rectangle. The rect input must be a tuple of four floats describing the camera rectangle as (start_x, start_y, length_x, length_y).

cmap : string | None

Matplotlib colormap (like ‘viridis’, ‘inferno’…).

clim : tuple/list | None

Colorbar limits. All values under / over clim will be clipped.

isvmin : bool | False

Activate/deactivate vmin.

vmin : float | None

All values under vmin will have the color defined using the under parameter.

vmax : float | None

All values over vmax will have the color defined using the over parameter.

under : tuple/string | None

Matplotlib color under vmin.

over : tuple/string | None

Matplotlib color over vmax.

cblabel : string | ‘’

Colorbar label.

cbtxtsz : float | 5.

Text size of the colorbar label.

cbtxtsh : float | 2.3

Shift for the colorbar label.

txtcolor : string | ‘white’

Text color.

txtsz : float | 3.

Text size for clim/vmin/vmax text.

txtsh : float | 1.2

Shift for clim/vmin/vmax text.

border : bool | True

Display colorbar borders.

bw : float | 2.

Border width.

limtxt : bool | True

Display vmin/vmax text.

bgcolor : tuple/string | (0., 0., 0.)

Background color of the colorbar canvas.

ndigits : int | 2

Number of digits for the text.

width : float | 0.17

Colorbar width.

transform : VisPy.visuals.transforms | None

VisPy transformation to set to the parent node.

parent : VisPy.parent | None

Markers object parent.

verbose : string

Verbosity level.

Examples

>>> from visbrain.objects import ColorbarObj
>>> cb = ColorbarObj('cbar', cmap='viridis', clim=(4., 78.2), vmin=10.,
...                  vmax=72., cblabel='Colorbar title', under='gray',
...                  over='red', txtcolor='black', cbtxtsz=40, cbtxtsh=2.,
...                  txtsz=20., width=.04)
>>> cb.preview()

7.4. Source object

_images/pic_source_obj.png

Source object example

class visbrain.objects.SourceObj(name, xyz, data=None, color='red', alpha=1.0, symbol='disc', radius_min=5.0, radius_max=10.0, edge_width=0.0, edge_color='black', system='mni', mask=None, mask_color='gray', text=None, text_size=3.0, text_color='black', text_bold=False, text_translate=(0.0, 2.0, 0.0), visible=True, transform=None, parent=None, verbose=None, _z=-10.0, **kw)[source]

Create a source object.

Parameters:

name : string

Name of the source object.

xyz : array_like

Array of positions of shape (n_sources, 2) or (n_sources, 3).

data : array_like | None

Array of weights of shape (n_sources,).

color : array_like/string/tuple | ‘red’

Marker’s color. Use a string (e.g. ‘green’) to use the same color across markers or a list of colors of length n_sources to use different colors for markers.

alpha : float | 1.

Transparency level.

symbol : string | ‘disc’

Symbol to use for sources. Allowed style strings are: disc, arrow, ring, clobber, square, diamond, vbar, hbar, cross, tailed_arrow, x, triangle_up, triangle_down, and star.

radius_min / radius_max : float | 5.0/10.0

Define the minimum and maximum possible source radius. If all sources have the same data value, the radius defaults to radius_min.

edge_color : string/list/array_like | ‘black’

Edge color of source’s markers.

edge_width : float | 0.

Edge width of source’s markers.

system : {‘mni’, ‘tal’}

Specify if the coordinates are in the MNI space (‘mni’) or Talairach (‘tal’).

mask : array_like | None

Array of boolean values to specify masked sources. For example, if data are p-values, mask could be non-significant sources.

mask_color : array_like/tuple/string | ‘gray’

Color to use for masked sources.

text : list | None

Text to attach to each source. For example, text could be the name of each source.

text_size : float | 3.

Text size attached to sources.

text_color : array_like/string/tuple | ‘black’

Text color attached to sources.

text_bold : bool | False

Specify if the text attached to sources should be bold.

text_translate : tuple | (0., 2., 0.)

Translate the text along the (x, y, z) axis.

visible : bool/array_like | True

Specify which sources have to be displayed. If visible is True, all sources are displayed; if False, all sources are hidden. Alternatively, use an array of shape (n_sources,) to select which sources to display.

transform : VisPy.visuals.transforms | None

VisPy transformation to set to the parent node.

parent : VisPy.parent | None

Markers object parent.

verbose : string

Verbosity level.

_z : float | -10.

In case of (n_sources, 2) use _z to specify the elevation.

kw : dict | {}

Optional arguments are used to control the colorbar (See ColorbarObj).

Examples

>>> import numpy as np
>>> from visbrain.objects import SourceObj
>>> n_sources = 100
>>> pos = np.random.uniform(-10, 10, (n_sources, 3))
>>> color = ['orange'] * 50 + ['red'] * 50
>>> data = np.random.rand(n_sources)
>>> text = ['s' + str(k) for k in range(n_sources)]
>>> s = SourceObj('test', pos, color=color, data=data, radius_min=10.,
...               radius_max=20., edge_color='black', edge_width=1.,
...               text=text, text_size=10.)
>>> s.preview(axis=True)

Methods

analyse_sources([roi_obj, replace_bad, …]) Analyse sources using Region of interest (ROI).
color_sources([analysis, color_by, data, …]) Color sources using custom methods.
set_visible_sources([select, v, distance]) Select sources that are either inside or outside the mesh.
fit_to_vertices(v) Move sources to the closest vertex.
project_sources(b_obj[, project, radius, …]) Project source’s activity or repartition onto the brain object.
analyse_sources(roi_obj='talairach', replace_bad=True, bad_patterns=[-1, 'undefined', 'None'], distance=None, replace_with='Not found', keep_only=None)[source]

Analyse sources using Region of interest (ROI).

This method can be used to identify the structure in which each source is located.

Parameters:

roi_obj : string/list | ‘talairach’

The ROI object to use. Use either ‘talairach’, ‘brodmann’ or ‘aal’ to use a predefined ROI template. Otherwise, use a RoiObj object or a list of RoiObj.

replace_bad : bool | True

Replace bad values (True) or not (False).

bad_patterns : list | [-1, ‘undefined’, ‘None’]

Bad patterns to replace if replace_bad is True.

replace_with : string | ‘Not found’

Replace bad patterns with this string.

keep_only : list | None

List of string patterns to keep only sources that match.

Returns:

df : pandas.DataFrame

A Pandas DataFrame or a list of DataFrames if roi_obj is a list.

color_sources(analysis=None, color_by=None, data=None, roi_to_color=None, color_others='black', hide_others=False, cmap='viridis', clim=None, vmin=None, vmax=None, under='gray', over='red')[source]

Color sources using custom methods.

This method can be used to color sources :

  • According to a data vector. In that case, source’s colors are inferred using colormap inputs (i.e. cmap, vmin, vmax, clim, under and over)
  • According to ROI analysis (using the analysis and color_by input parameters)
Parameters:

data : array_like | None

A vector of data with the same length as the number of sources. The color is inferred from this data vector and can be controlled using the cmap, clim, vmin, vmax, under and over parameters.

analysis : pandas.DataFrames | None

ROI analysis run using the analyse_sources method.

color_by : string | None

A column name of the analysis DataFrame. This column is then used to identify the color to set for each source inside an ROI.

roi_to_color : dict | None

Define custom colors to ROI. For example use {‘BA4’: ‘red’, ‘BA32’: ‘blue’} to define custom colors. If roi_to_color is None, random colors will be used instead.

color_others : array_like/tuple/string | ‘black’

Specify how to color sources that are not found using the roi_to_color dictionary.

hide_others : bool | False

Show or hide sources that are not found using the roi_to_color dictionary.

fit_to_vertices(v)[source]

Move sources to the closest vertex.

Parameters:

v : array_like

The vertices of shape (nv, 3) or (nv, 3, 3) if index faced.

project_sources(b_obj, project='modulation', radius=10.0, contribute=False, cmap='viridis', clim=None, vmin=None, under='black', vmax=None, over='red', mask_color=None)[source]

Project source’s activity or repartition onto the brain object.

Parameters:

b_obj : {BrainObj, RoiObj}

The object on which to project sources.

project : {‘modulation’, ‘repartition’}

Project either the source’s data (‘modulation’) or get the number of contributing sources per vertex (‘repartition’).

radius : float

The radius under which activity is projected on vertices.

contribute : bool | False

Specify if sources contribute to both hemispheres.

cmap : string | ‘viridis’

The colormap to use.

clim : tuple | None

The colorbar limits. If None, (data.min(), data.max()) will be used instead.

vmin : float | None

Minimum threshold.

vmax : float | None

Maximum threshold.

under : string/tuple/array_like | ‘gray’

The color to use for values under vmin.

over : string/tuple/array_like | ‘red’

The color to use for values over vmax.

mask_color : string/tuple/array_like | None

The color to use for the projection of masked sources. If None, the color of the masked sources is going to be used.

set_visible_sources(select='all', v=None, distance=5.0)[source]

Select sources that are either inside or outside the mesh.

Parameters:

select : {‘inside’, ‘outside’, ‘close’, ‘all’, ‘none’, ‘left’, ‘right’}

Custom source selection. Use ‘inside’ or ‘outside’ to select sources respectively inside or outside the volume. Use ‘close’ to select sources that are close to the surface (see the distance parameter below). Finally, use ‘all’ (or True) to show, or ‘none’ (or None, False) to hide, all of the sources.

v : array_like | None

The vertices of shape (nv, 3) or (nv, 3, 3) if index faced.

distance : float | 5.

Distance between the source and the surface.

7.5. Connectivity object

_images/pic_connect_obj.png

Connectivity object example

class visbrain.objects.ConnectObj(name, nodes, edges, select=None, line_width=3.0, color_by='strength', custom_colors=None, alpha=1.0, antialias=False, dynamic=None, cmap='viridis', clim=None, vmin=None, vmax=None, under='gray', over='red', transform=None, parent=None, verbose=None, _z=-10.0, **kw)[source]

Create a connectivity object.

Parameters:

name : string

The name of the connectivity object.

nodes : array_like

Array of nodes coordinates of shape (n_nodes, 3).

edges : array_like | None

Array of edge weights of shape (n_nodes, n_nodes).

select : array_like | None

Array to select edges to display. This should be an array of boolean values of shape (n_nodes, n_nodes).

line_width : float | 3.

Connectivity line width.

color_by : {‘strength’, ‘count’}

Coloring method. Use ‘strength’ to color edges according to their connection strength defined by the edges input. Use ‘count’ to color edges according to the number of connections per node.

custom_colors : dict | None

Use a dictionary to colorize edges. For example, {1.2: ‘red’, 2.8: ‘green’, None: ‘black’} turns connections that have a strength of 1.2 and 2.8 into red and green respectively. All other connections are set to black.

alpha : float | 1.

Transparency level (if dynamic is None).

antialias : bool | False

Use smoothed lines.

dynamic : tuple | None

Control the dynamic opacity. For example, if dynamic=(0, 1), strong connections will be more opaque than weaker connections.

cmap : string | ‘viridis’

Colormap to use if custom_colors is None.

clim : tuple | None

Colorbar limits if custom_colors is None.

vmin : float | None

Lower threshold of the colormap if custom_colors is None.

under : string | None

Color to use for values under vmin if custom_colors is None.

vmax : float | None

Higher threshold of the colormap if custom_colors is None.

over : string | None

Color to use for values over vmax if custom_colors is None.

transform : VisPy.visuals.transforms | None

VisPy transformation to set to the parent node.

parent : VisPy.parent | None

Line object parent.

verbose : string

Verbosity level.

_z : float | -10.

In case of (n_nodes, 2) nodes, use _z to specify the elevation.

kw : dict | {}

Optional arguments are used to control the colorbar (See ColorbarObj).

Examples

>>> import numpy as np
>>> from visbrain.objects import ConnectObj
>>> n_nodes = 100
>>> nodes = np.random.rand(n_nodes, 3)
>>> edges = np.random.uniform(low=-10., high=10., size=(n_nodes, n_nodes))
>>> select = np.logical_and(edges >= 0, edges <= 1.)
>>> c = ConnectObj('Connect', nodes, edges, select=select, cmap='inferno',
...                antialias=True)
>>> c.preview(axis=True)

7.5.1. Examples using visbrain.objects.ConnectObj

7.6. Vector object

_images/pic_vector_obj.png

Vector object example

class visbrain.objects.VectorObj(name, arrows, data=None, inferred_data=False, select=None, color='black', dynamic=None, line_width=5.0, arrow_size=10.0, arrow_type='stealth', arrow_coef=1.0, antialias=False, cmap='viridis', clim=None, vmin=None, under='gray', vmax=None, over='red', transform=None, parent=None, verbose=None, _z=-10.0, **kw)[source]

Create a vector object.

Parameters:

name : string

Name of the vector object.

arrows : array_like, tuple, list

The position of arrows. Use either :

  • A list (or tuple) of two arrays with identical shapes (N, 3). The first array specify the (x, y, z) position where arrows start and the second the (x, y, z) position of the end of each arrow.
  • Alternatively, an array of dtype [(‘start’, float, 3), (‘end’, float, 3)] can also be used.
  • An array of dtype [(‘vertices’, float, 3), (‘normals’, float, 3)]. This method uses the vertex normals to infer the arrow locations. In addition, if data is not None, data is used to infer the arrow length.

data : array_like | None

Attach some data to each vector. This data can be used to infer the color.

inferred_data : bool | False

If the arrows input uses the (start, end) method and if inferred_data is set to True, the magnitude of each vector is used as data.

select : array_like | None

An array of boolean values to select specific arrows.

color : array_like/tuple/string | ‘black’

If no data are provided, use this parameter to set a unique color for all vectors.

dynamic : tuple | None

Use a dynamic transparency method. The dynamic input must be a tuple of two floats between [0, 1]. Vectors with stronger associated data are rendered more opaque.

line_width : float | 5.

Line width of each vector.

arrow_size : float | 10.

Size of the arrow-head.

arrow_type : string | ‘stealth’

The arrow-head type. Use either ‘stealth’, ‘curved’, ‘angle_30’, ‘angle_60’, ‘angle_90’, ‘triangle_30’, ‘triangle_60’, ‘triangle_90’ or ‘inhibitor_round’.

arrow_coef : float | 1.

Use this coefficient to define longer arrows. Must be a float greater than or equal to 1.

antialias : bool | False

Use smoothed lines.

cmap : string | ‘viridis’

The colormap to use (if data is not None).

clim : tuple | None

Colorbar limits. If None, the (min, max) of data is used (if data is not None).

vmin : float | None

Minimum threshold (if data is not None).

under : string | ‘gray’

Color for values under vmin (if data is not None).

vmax : float | None

Maximum threshold (if data is not None).

over : string | ‘red’

Color for values over vmax (if data is not None).

transform : VisPy.visuals.transforms | None

VisPy transformation to set to the parent node.

parent : VisPy.parent | None

Markers object parent.

verbose : string

Verbosity level.

_z : float | -10.

In case of (n_sources, 2) use _z to specify the elevation.

kw : dict | {}

Optional arguments are used to control the colorbar (See ColorbarObj).

Examples

>>> import numpy as np
>>> from visbrain.objects import VectorObj
>>> n_vector = 10
>>> arrows = [np.random.rand(n_vector, 3), np.random.rand(n_vector, 3)]
>>> data = np.random.uniform(-10, 10, (n_vector))
>>> v = VectorObj('Vector', arrows, data=data, antialias=True)
>>> v.preview(axis=True)

7.6.1. Examples using visbrain.objects.VectorObj

7.7. Time-series 3D object

_images/pic_ts_obj.png

3-D time-series object example

class visbrain.objects.TimeSeries3DObj(name, data, xyz, select=None, line_width=1.5, color='white', ts_amp=6.0, ts_width=20.0, alpha=1.0, antialias=False, translate=(0.0, 0.0, 1.0), transform=None, parent=None, verbose=None, _z=-10.0, **kw)[source]

Create a 3-D time-series object.

Parameters:

name : string

Name of the time-series object.

data : array_like

Array of time-series of shape (n_sources, n_time_points).

xyz : array_like

The 3-D center location of each time-series of shape (n_sources, 3).

select : array_like | None

Select the time-series to display. Should be a vector of boolean values of shape (n_sources,).

line_width : float | 1.5

Time-series’ line width.

color : array_like/tuple/string | ‘white’

Time-series’ color.

ts_amp : float | 6.

Graphical amplitude of the time-series.

ts_width : float | 20.

Graphical width of the time-series.

alpha : float | 1.

Time-series transparency.

antialias : bool | False

Use smooth lines.

translate : tuple | (0., 0., 1.)

Translate the time-series over the (x, y, z) axes.

transform : VisPy.visuals.transforms | None

VisPy transformation to set to the parent node.

parent : VisPy.parent | None

Line object parent.

verbose : string

Verbosity level.

_z : float | -10.

In case of (n_sources, 2) use _z to specify the elevation.

kw : dict | {}

Optional arguments are used to control the colorbar (See ColorbarObj).

Examples

>>> import numpy as np
>>> from visbrain.objects import TimeSeries3DObj
>>> n_pts, n_ts = 100, 5
>>> time = np.arange(n_pts)
>>> phy = np.random.uniform(2, 30, (n_ts))
>>> data = np.sin(2 * np.pi * time.reshape(1, -1) * phy.reshape(-1, 1))
>>> xyz = np.random.uniform(-20, 20, (n_ts, 3))
>>> ts = TimeSeries3DObj('Ts', data, xyz, antialias=True, color='red',
...                      line_width=3.)
>>> ts.preview(axis=True)

7.7.1. Examples using visbrain.objects.TimeSeries3DObj

7.8. Pictures 3D object

_images/pic_picture_obj.png

3-D pictures object example

class visbrain.objects.Picture3DObj(name, data, xyz, select=None, pic_width=7.0, pic_height=7.0, alpha=1.0, cmap='viridis', clim=None, vmin=None, vmax=None, under='gray', over='red', translate=(0.0, 0.0, 1.0), transform=None, parent=None, verbose=None, _z=-10.0, **kw)[source]

Create a 3-D picture object.

Parameters:

name : string

The name of the picture object.

data : array_like

Array of data pictures of shape (n_sources, n_rows, n_columns).

xyz : array_like

The 3-d position of each picture of shape (n_sources, 3).

select : array_like | None

Select the pictures to display. Should be a vector of boolean values of shape (n_sources,).

pic_width : float | 7.

Width of each picture.

pic_height : float | 7.

Height of each picture.

alpha : float | 1.

Image transparency.

cmap : string | ‘viridis’

Colormap to use.

clim : tuple | None

Colorbar limits. If None, the (min, max) of data is used.

vmin : float | None

Lower threshold of the colormap.

under : string | None

Color to use for values under vmin.

vmax : float | None

Higher threshold of the colormap.

over : string | None

Color to use for values over vmax.

translate : tuple | (0., 0., 1.)

Translation over the (x, y, z) axis.

transform : VisPy.visuals.transforms | None

VisPy transformation to set to the parent node.

parent : VisPy.parent | None

Line object parent.

verbose : string

Verbosity level.

_z : float | -10.

In case of (n_sources, 2) use _z to specify the elevation.

kw : dict | {}

Optional arguments are used to control the colorbar (See ColorbarObj).

Examples

>>> import numpy as np
>>> from visbrain.objects import Picture3DObj
>>> n_rows, n_cols, n_pic = 10, 20, 5
>>> data = np.random.rand(n_pic, n_rows, n_cols)
>>> xyz = np.random.uniform(-10, 10, (n_pic, 3))
>>> pic = Picture3DObj('Pic', data, xyz, cmap='plasma')
>>> pic.preview(axis=True)

7.8.1. Examples using visbrain.objects.Picture3DObj

7.9. Region Of Interest object

_images/pic_roi_obj.png

Region Of Interest object example

class visbrain.objects.RoiObj(name, vol=None, labels=None, index=None, hdr=None, system='mni', transform=None, parent=None, verbose=None, preload=True, _scale=1.0, **kw)[source][source]

Create a Region Of Interest (ROI) object.

Parameters:

name : string

Name of the ROI object. If name is ‘brodmann’, ‘aal’ or ‘talairach’ a predefined ROI object is used and vol, index and label are ignored.

vol : array_like | None

ROI volume. Should be an array with three dimensions.

labels : array_like | None

Array of labels. A structured array can be used (e.g. labels=np.zeros(n_sources, dtype=[('brodmann', int), ('aal', object)])).

index : array_like | None

Array of indices that map the volume values to labels. The length of index must be the same as the length of labels.

hdr : array_like | None

Transformation array used to map source coordinates into the volume space. Must be a (4, 4) array.

system : {‘mni’, ‘tal’}

The system of the volume. Can either be MNI (‘mni’) or Talairach (‘tal’).

transform : VisPy.visuals.transforms | None

VisPy transformation to set to the parent node.

parent : VisPy.parent | None

ROI object parent.

verbose : string

Verbosity level.

kw : dict | {}

Optional arguments are used to control the colorbar (See ColorbarObj).

Examples

>>> import numpy as np
>>> from visbrain.objects import RoiObj
>>> r = RoiObj('brodmann')
>>> r.select_roi(select=[4, 6, 38], unique_color=True, smooth=7)
>>> r.preview(axis=True)
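A custom template can also be passed instead of a predefined one. Below is a minimal sketch of how the vol, index and labels inputs might be built, including a structured labels array as described above (the shapes, voxel values and label names are purely illustrative):

```python
import numpy as np

# Hypothetical 3-D volume where voxel values 1 and 2 mark two ROIs.
vol = np.zeros((10, 10, 10), dtype=int)
vol[2:5, 2:5, 2:5] = 1
vol[6:9, 6:9, 6:9] = 2

# index maps the volume values to rows of the labels array.
index = np.array([1, 2])

# Structured labels array: one entry per ROI, two label systems at once.
labels = np.zeros(2, dtype=[('brodmann', int), ('aal', object)])
labels['brodmann'] = [4, 6]
labels['aal'] = ['Precentral_L', 'Supp_Motor_Area_L']

# r = RoiObj('custom', vol=vol, labels=labels, index=index)
```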

Methods

get_labels([save_to_path]) Get the labels associated with the loaded ROI.
where_is(patterns[, df, union, columns, exact]) Find a list of string patterns in a DataFrame.
select_roi([select, unique_color, …]) Select several Region Of Interest (ROI).
localize_sources(xyz[, source_name, …]) Localize sources using this ROI object.
project_sources(s_obj[, project, radius, …]) Project source activity or repartition onto ROIs.
save([tmpfile]) Save the volume template.
remove() Remove the volume template.
get_labels(save_to_path=None)[source][source]

Get the labels associated with the loaded ROI.

Parameters:

save_to_path : str | None

Save labels to an excel file.

localize_sources(xyz, source_name=None, replace_bad=True, bad_patterns=[-1, 'undefined', 'None'], replace_with='Not found', distance=None)[source][source]

Localize sources using this ROI object.

Parameters:

xyz : array_like

Array of source coordinates of shape (n_sources, 3).

source_name : array_like/list | None

List of source names.

replace_bad : bool | True

Replace bad values (True) or not (False).

bad_patterns : list | [-1, ‘undefined’, ‘None’]

Bad patterns to replace if replace_bad is True.

replace_with : string | ‘Not found’

Replace bad patterns with this string.

remove()[source]

Remove the volume template.

save(tmpfile=False)[source]

Save the volume template.

select_roi(select=0.5, unique_color=False, roi_to_color=None, smooth=3)[source][source]

Select several Region Of Interest (ROI).

Parameters:

select : int, float, list | .5

Threshold for extracting vertices using the isosurface method.

unique_color : bool | False

Use a random unique color for each ROI.

roi_to_color : dict | None

Color of specific ROI using a dictionary i.e {1: ‘red’, 2: ‘orange’}.

smooth : int | 3

Smoothing level. Must be an odd integer (smooth % 2 == 1).

where_is(patterns, df=None, union=True, columns=None, exact=False)[source][source]

Find a list of string patterns in a DataFrame.

Parameters:

patterns : list

List of string patterns to search.

df : pd.DataFrame | None

The DataFrame to use. If None, the ROI’s DataFrame is used by default.

union : bool | True

Take either the union of matching patterns (True) or the intersection (False).

columns : list | None

List of specific column names to search in. If None, this method inspects every column in the DataFrame.

exact : bool | False

Specify whether the patterns have to match exactly.

Returns:

idx : list

List of indices that match the patterns.

7.10. Volume object

_images/pic_vol_obj.png

Volume object example

class visbrain.objects.VolumeObj(name, vol=None, hdr=None, method='mip', threshold=0.0, cmap='OpaqueGrays', select=None, transform=None, parent=None, preload=True, verbose=None, **kw)[source][source]

Create a 3-D volume object.

Parameters:

name : string

Name of the volume object. If name is ‘brodmann’, ‘aal’ or ‘talairach’ a predefined volume object is used and vol, index and label are ignored. The name input can also be the path to a .nii.gz file.

vol : array_like

The volume to use. Should be a 3-d array.

hdr : array_like | None

Matrix transformation to apply. hdr should be a (4, 4) array.

method : {‘mip’, ‘translucent’, ‘additive’, ‘iso’}

Volume rendering method. Default is ‘mip’.

threshold : float | 0.

Threshold value for iso rendering method.

cmap : {‘OpaqueGrays’, ‘TransFire’, ‘OpaqueFire’, ‘TransGrays’}

Colormap to use.

select : list | None

Select some structures in the volume.

transform : VisPy.visuals.transforms | None

VisPy transformation to set to the parent node.

parent : VisPy.parent | None

Volume object parent.

kw : dict | {}

Optional arguments are used to control the colorbar (See ColorbarObj).

Examples

>>> from visbrain.objects import VolumeObj
>>> select = [4, 6]  # select Brodmann area 4 and 6
>>> v = VolumeObj('brodmann', method='iso', select=select)
>>> v.preview(axis=True)

Methods

__call__(name[, vol, hdr, threshold, cmap, …]) Change the volume.
set_data(vol[, hdr, threshold, cmap, …]) Set data to the volume.
__call__(name, vol=None, hdr=None, threshold=None, cmap=None, method=None, select=None)[source][source]

Change the volume.

set_data(vol, hdr=None, threshold=None, cmap=None, method=None, select=None)[source][source]

Set data to the volume.

7.10.1. Examples using visbrain.objects.VolumeObj

7.11. Cross-section object

_images/pic_cs_obj.png

Cross-section object example

class visbrain.objects.CrossSecObj(name, vol=None, hdr=None, section=(0, 0, 0), interpolation='bilinear', text_size=15.0, text_color='white', text_bold=True, transform=None, parent=None, verbose=None, preload=True, **kw)[source][source]

Create a Cross-sections object.

Parameters:

name : string

Name of the ROI object. If name is ‘brodmann’, ‘aal’ or ‘talairach’ a predefined ROI object is used and vol, index and label are ignored.

vol : array_like | None

The volume to use for the cross-section. Should be an array with three dimensions.

section : tuple | (0, 0, 0)

The section to take (sagittal, coronal and axial slices).

interpolation : string | ‘bilinear’

Interpolation method for the image. See vispy.scene.visuals.Image for available interpolation methods.

text_size : float | 15.

Text size to use.

text_color : string/tuple | ‘white’

Text color.

text_bold : bool | True

Use bold text.

transform : VisPy.visuals.transforms | None

VisPy transformation to set to the parent node.

parent : VisPy.parent | None

ROI object parent.

verbose : string

Verbosity level.

kw : dict | {}

Optional arguments are used to control the colorbar (See ColorbarObj).

Examples

>>> import numpy as np
>>> from visbrain.objects import CrossSecObj
>>> r = CrossSecObj('brodmann', section=(10, -10, 20))
>>> r.preview(axis=True)

Methods

__call__(name[, vol, hdr]) Change the volume object.
set_data([section, clim, cmap, vmin, under, …]) Set data to the cross-section.
localize_source(xyz) Center the cross-sections around a source location.
__call__(name, vol=None, hdr=None)[source][source]

Change the volume object.

localize_source(xyz)[source][source]

Center the cross-sections around a source location.

Parameters:

xyz : array_like

The (x, y, z) location of the source. Could be a tuple, list or an array.

set_data(section=(0, 0, 0), clim=None, cmap=None, vmin=None, under=None, vmax=None, over=None, update=False)[source][source]

Set data to the cross-section.

Parameters:

section : tuple | (0, 0, 0)

The section to take (sagittal, coronal and axial slices).

7.11.1. Examples using visbrain.objects.CrossSecObj

7.12. Image object

_images/pic_image_obj.png

Image object example

class visbrain.objects.ImageObj(name, data=None, xaxis=None, yaxis=None, cmap='viridis', clim=None, vmin=None, under='gray', vmax=None, over='red', interpolation='nearest', max_pts=-1, parent=None, transform=None, verbose=None, **kw)[source][source]

Create a single image object.

Parameters:

data : array_like

Array of data. If data.ndim in [1, 2] the color is inferred from the data. Otherwise, if data.ndim is 3, data is interpreted as color if the last dimension is either 3 (RGB) or 4 (RGBA).

xaxis : array_like | None

Vector to use for the x-axis (number of columns in the image). If None, xaxis is inferred from the second dimension of data.

yaxis : array_like | None

Vector to use for the y-axis (number of rows in the image). If None, yaxis is inferred from the first dimension of data.

clim : tuple | None

Colorbar limits. If None, clim=(data.min(), data.max())

cmap : string | None

Colormap name.

vmin : float | None

Minimum threshold of the colorbar.

under : string/tuple/array_like | None

Color for values under vmin.

vmax : float | None

Maximum threshold of the colorbar.

over : string/tuple/array_like | None

Color for values over vmax.

interpolation : string | ‘nearest’

Interpolation method for the image. See vispy.scene.visuals.Image for available interpolation methods.

max_pts : int | -1

Maximum number of points of the image along the x or y axis. This parameter is essentially used to solve OpenGL issues with very large images.

transform : VisPy.visuals.transforms | None

VisPy transformation to set to the parent node.

parent : VisPy.parent | None

Markers object parent.

verbose : string

Verbosity level.

kw : dict | {}

Optional arguments are used to control the colorbar (See ColorbarObj).

Examples

>>> import numpy as np
>>> from visbrain.objects import ImageObj
>>> n = 100
>>> time = np.r_[np.arange(n - 1), np.arange(n)[::-1]]
>>> time = time.reshape(-1, 1) + time.reshape(1, -1)
>>> im = ImageObj('im', time, cmap='Spectral_r', interpolation='bicubic')
>>> im.preview(axis=True)

Methods

set_data(data[, xaxis, yaxis, clim, cmap, …]) Set data to the image.
set_data(data, xaxis=None, yaxis=None, clim=None, cmap=None, vmin=None, under=None, vmax=None, over=None)[source][source]

Set data to the image.

7.12.1. Examples using visbrain.objects.ImageObj

7.13. Time-frequency map object

_images/pic_tf_obj.png

Time-frequency map object example

class visbrain.objects.TimeFrequencyObj(name, data=None, sf=1.0, method='fourier', nperseg=256, f_min=1.0, f_max=160.0, f_step=1.0, baseline=None, norm=None, n_window=None, overlap=0.0, window=None, c_parameter=20, cmap='viridis', clim=None, vmin=None, under='gray', vmax=None, over='red', interpolation='nearest', max_pts=-1, parent=None, transform=None, verbose=None, **kw)[source][source]

Compute the time-frequency map (or spectrogram).

The time-frequency decomposition can be assessed using :

  • The fourier transform
  • Morlet’s wavelet
  • Multi-taper
Parameters:

name : string | None

Name of the time-frequency object.

data : array_like

Array of data of shape (N,)

sf : float | 1.

The sampling frequency.

method : {‘fourier’, ‘wavelet’, ‘multitaper’}

The method to use to compute the time-frequency decomposition.

nperseg : int | 256

Length of each segment. Argument passed to the scipy.signal.spectrogram function (for the ‘fourier’ and ‘multitaper’ methods).

overlap : float | 0.

Overlap between segments. Must be between 0. and 1.

f_min : float | 1.

Minimum frequency (for ‘wavelet’ method).

f_max : float | 160.

Maximum frequency (for ‘wavelet’ method).

f_step : float | 1.

Frequency step between two consecutive frequencies (for ‘wavelet’ method).

baseline : array_like | None

Baseline period (for ‘wavelet’ method).

norm : int | None

The normalization type (for ‘wavelet’ method). See the normalization function.

n_window : int | None

If this parameter is an integer, the time-frequency map is going to be averaged into smaller windows (for ‘wavelet’ method).

window : {‘flat’, ‘hanning’, ‘hamming’, ‘bartlett’, ‘blackman’}

Windowing method for averaging. By default, ‘flat’ is used for Wavelet and ‘hamming’ for Fourier.

c_parameter : int | 20

Parameter ‘c’ described in doi:10.1155/2011/980805 (for ‘multitaper’ method).

clim : tuple | None

Colorbar limits. If None, clim=(data.min(), data.max())

cmap : string | None

Colormap name.

vmin : float | None

Minimum threshold of the colorbar.

under : string/tuple/array_like | None

Color for values under vmin.

vmax : float | None

Maximum threshold of the colorbar.

over : string/tuple/array_like | None

Color for values over vmax.

interpolation : string | ‘nearest’

Interpolation method for the image. See vispy.scene.visuals.Image for available interpolation methods.

max_pts : int | -1

Maximum number of points of the image along the x or y axis. This parameter is essentially used to solve OpenGL issues with very large images.

transform : VisPy.visuals.transforms | None

VisPy transformation to set to the parent node.

parent : VisPy.parent | None

Markers object parent.

verbose : string

Verbosity level.

kw : dict | {}

Optional arguments are used to control the colorbar (See ColorbarObj).

Examples

>>> import numpy as np
>>> from visbrain.objects import TimeFrequencyObj
>>> n, sf = 512, 256  # number of time-points and sampling frequency
>>> time = np.arange(n) / sf  # time vector
>>> data = np.sin(2 * np.pi * 25. * time) + np.random.rand(n)
>>> tf = TimeFrequencyObj('tf', data, sf)
>>> tf.preview(axis=True)

Methods

set_data(data[, sf, method, nperseg, f_min, …]) Compute TF and set data to the ImageObj.
set_data(data, sf=1.0, method='fourier', nperseg=256, f_min=1.0, f_max=160.0, f_step=1.0, baseline=None, norm=None, n_window=None, overlap=0.0, window=None, c_parameter=20, clim=None, cmap='viridis', vmin=None, under=None, vmax=None, over=None)[source][source]

Compute TF and set data to the ImageObj.

7.13.1. Examples using visbrain.objects.TimeFrequencyObj

7.14. Hypnogram object

_images/pic_hypno_obj.png

Hypnogram object example

class visbrain.objects.HypnogramObj(name, data=None, time=None, art=-1, wake=0, n1=1, n2=2, n3=3, rem=4, art_visual=1, wake_visual=0, rem_visual=-1, n1_visual=-2, n2_visual=-3, n3_visual=-4, art_color='#8bbf56', wake_color='#56bf8b', rem_color='#bf5656', n1_color='#aabcce', n2_color='#405c79', n3_color='#0b1c2c', line_width=2.0, antialias=False, unicolor=False, transform=None, parent=None, verbose=None, **kw)[source][source]

Hypnogram object.

Parameters:

name : string

Name of the hypnogram object or path to a *.txt or *.csv file.

data : array_like

Array of data of shape (n_pts,).

time : array_like | None

Array of time points of shape (n_pts,)

art, wake, rem, n1, n2, n3 :

Stage identification inside the data array.

art_visual, wake_visual, rem_visual, n1_visual, n2_visual, n3_visual :

Stage order when plotting.

art_color, wake_color, rem_color, n1_color, n2_color, n3_color :

Stage color.

line_width : float | 2.

Line width of the hypnogram.

antialias : bool | False

Use anti-aliasing line.

unicolor : bool | False

Use a uniform black color for the hypnogram.

transform : VisPy.visuals.transforms | None

VisPy transformation to set to the parent node.

parent : VisPy.parent | None

Hypnogram object parent.

verbose : string

Verbosity level.

kw : dict | {}

Optional arguments are used to control the colorbar (See ColorbarObj).

Examples

>>> import numpy as np
>>> from visbrain.objects import HypnogramObj
>>> data = np.repeat(np.arange(6), 100) - 1.
>>> h_obj = HypnogramObj('hypno', data)
>>> h_obj.preview(axis=True)

Methods

set_stage(stage, idx_start, idx_end) Set stage.
set_stage(stage, idx_start, idx_end)[source][source]

Set stage.

Parameters:

stage : str, int

Stage to define. Should either be a string (e.g. ‘art’, ‘rem’…) or an integer.

idx_start : int

Index where the stage begins.

idx_end : int

Index where the stage finishes.