visbrain.objects.TimeFrequencyObj

class visbrain.objects.TimeFrequencyObj(name, data=None, sf=1.0, method='fourier', nperseg=256, f_min=1.0, f_max=160.0, f_step=1.0, baseline=None, norm=None, n_window=None, overlap=0.0, window=None, c_parameter=20, cmap='viridis', clim=None, vmin=None, under='gray', vmax=None, over='red', interpolation='nearest', max_pts=-1, parent=None, transform=None, verbose=None, **kw)[source]

Compute the time-frequency map (or spectrogram).

The time-frequency decomposition can be computed using:

  • The Fourier transform

  • Morlet’s wavelet

  • Multi-taper

Parameters
name : string | None

Name of the time-frequency object.

data : array_like

Array of data of shape (N,).

sf : float | 1.

The sampling frequency.

method : {‘fourier’, ‘wavelet’, ‘multitaper’}

The method to use to compute the time-frequency decomposition.

nperseg : int | 256

Length of each segment. Argument passed to the scipy.signal.spectrogram function (for the ‘fourier’ and ‘multitaper’ methods).

overlap : float | 0.

Overlap between segments. Must be between 0. and 1.

f_min : float | 1.

Minimum frequency (for ‘wavelet’ method).

f_max : float | 160.

Maximum frequency (for ‘wavelet’ method).

f_step : float | 1.

Frequency step between two consecutive frequencies (for ‘wavelet’ method).

baseline : array_like | None

Baseline period (for ‘wavelet’ method).

norm : int | None

The normalization type (for ‘wavelet’ method). See the normalization function.

n_window : int | None

If this parameter is an integer, the time-frequency map is averaged into smaller windows (for ‘wavelet’ method).

window : {‘flat’, ‘hanning’, ‘hamming’, ‘bartlett’, ‘blackman’}

Windowing method for averaging. By default, ‘flat’ is used for Wavelet and ‘hamming’ for Fourier.

c_parameter : int | 20

Parameter ‘c’ described in doi:10.1155/2011/980805 (for ‘multitaper’ method).

clim : tuple | None

Colorbar limits. If None, clim=(data.min(), data.max()).

cmap : string | None

Colormap name.

vmin : float | None

Minimum threshold of the colorbar.

under : string/tuple/array_like | None

Color for values under vmin.

vmax : float | None

Maximum threshold of the colorbar.

over : string/tuple/array_like | None

Color for values over vmax.

interpolation : string | ‘nearest’

Interpolation method for the image. See vispy.scene.visuals.Image for available interpolation methods.

max_pts : int | -1

Maximum number of points of the image along the x or y axis. This parameter is essentially used to solve OpenGL issues with very large images.

transform : VisPy.visuals.transforms | None

VisPy transformation to set to the parent node.

parent : VisPy.parent | None

Parent of the object.

verbose : string

Verbosity level.

kw : dict | {}

Optional arguments are used to control the colorbar (See ColorbarObj).
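
The wavelet-related inputs above (f_min, f_max, f_step, n_window, window) can be combined as in the following hedged sketch; the parameter values are illustrative only and assume a 1D array data sampled at sf Hz, as in the Examples section below:

>>> tf_wav = TimeFrequencyObj('tf_wav', data, sf, method='wavelet',
...                           f_min=2., f_max=40., f_step=0.5,
...                           n_window=10, window='flat')
>>> tf_wav.preview(axis=True)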

Notes

List of supported shortcuts:

  • s : save the figure

  • <delete> : reset camera

Examples

>>> import numpy as np
>>> from visbrain.objects import TimeFrequencyObj
>>> n, sf = 512, 256  # number of time-points and sampling frequency
>>> time = np.arange(n) / sf  # time vector
>>> data = np.sin(2 * np.pi * 25. * time) + np.random.rand(n)
>>> tf = TimeFrequencyObj('tf', data, sf)
>>> tf.preview(axis=True)
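
As an additional hedged sketch (not part of the original example), the same signal can be decomposed with the multi-taper method; the nperseg, overlap and c_parameter values are illustrative only:

>>> tf_mt = TimeFrequencyObj('tf_mt', data, sf, method='multitaper',
...                          nperseg=128, overlap=.5, c_parameter=20)
>>> tf_mt.preview(axis=True)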

Methods

__init__(name[, data, sf, method, nperseg, …])

Init.

animate([step, interval, iterations])

Animate the object.

copy()

Get a copy of the object.

describe_tree()

Tree description.

preview([bgcolor, axis, xyz, show, obj, …])

Previsualize the result.

record_animation(name[, n_pic, bgcolor])

Record an animated object and save as a *.gif file.

render()

Render the canvas.

screenshot(saveas[, print_size, dpi, unit, …])

Take a screenshot of the scene.

set_data(data[, sf, method, nperseg, f_min, …])

Compute TF and set data to the ImageObj.

set_shortcuts_to_canvas(canvas)

Set shortcuts to a VisbrainCanvas.

to_dict()

Return a dictionary of all colorbar args.

to_kwargs([addisminmax])

Return a dictionary for input arguments.

update()

Function to run when an update is needed.

update_from_dict(kwargs)

Update attributes from a dictionary.

animate(step=1.0, interval='auto', iterations=-1)[source]

Animate the object.

Note that this method can only be used with 3D objects.

Parameters
step : float | 1.

Rotation step.

interval : float | ‘auto’

Time between events in seconds. The default is ‘auto’, which attempts to find the interval that matches the refresh rate of the current monitor. Currently this is simply 1/60.

iterations : int | -1

Number of iterations. Can be -1 for infinite.

clim

Get the clim value.

cmap

Get the cmap value.

copy()[source]

Get a copy of the object.

data_folder

Get the data_folder value.

interpolation

Get the interpolation value.

name

Get the name value.

over

Get the over value.

parent

Get the parent value.

preview(bgcolor='black', axis=False, xyz=False, show=True, obj=None, size=(1200, 800), mpl=False, **kwargs)[source]

Previsualize the result.

Parameters
bgcolor : array_like/string/tuple | ‘black’

Background color for the preview.

axis : bool | False

Add x and y axis with ticks.

xyz : bool | False

Add an (x, y, z) axis to the scene.

obj : VisbrainObj | None

Pass a Visbrain object if you want to use the camera of another object.

size : tuple | (1200, 800)

Default size of the window.

mpl : bool | False

Use Matplotlib to display the object. This results in a non-interactive figure.

kwargs : dict | {}

Optional arguments are passed to the VisbrainCanvas class.
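
A minimal usage sketch, assuming an existing TimeFrequencyObj instance tf; the background color, size and axis values are illustrative only:

>>> tf.preview(bgcolor='white', axis=True, size=(800, 600))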

record_animation(name, n_pic=10, bgcolor=None)[source]

Record an animated object and save as a *.gif file.

Note that this method:

  • Can only be used with 3D objects.

  • Requires the Python package imageio.

Parameters
name : string

Name of the gif file (e.g. ‘myfile.gif’).

n_pic : int | 10

Number of pictures to use to render the gif.

bgcolor : string, tuple, list | None

Background color.

render()[source]

Render the canvas.

Returns
img : array_like

Array of shape (n_rows, n_columns, 4) where 4 describes the RGBA components.
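
A hedged sketch of saving the rendered array to disk, assuming an existing instance tf and that the imageio package is installed (the file name is illustrative):

>>> import imageio
>>> img = tf.render()  # array of shape (n_rows, n_columns, 4)
>>> imageio.imwrite('tf_render.png', img)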

screenshot(saveas, print_size=None, dpi=300.0, unit='centimeter', factor=None, region=None, autocrop=False, bgcolor=None, transparent=False, obj=None, line_width=1.0, **kwargs)[source]

Take a screenshot of the scene.

By default, the rendered canvas will have the size of your screen. The screenshot() method provides two ways to increase the exported image resolution:

  • Using the print_size, unit and dpi inputs: specify the size of the image at a specific dpi level. For example, you might want a (10cm, 15cm) image at 300 dpi.

  • Using the factor input : multiply the default image size by this factor. For example, if you have a (1920, 1080) monitor and if factor is 2, the exported image should have a shape of (3840, 2160) pixels.

Parameters
saveas : str

The name of the file to be saved. This file name must contain an extension such as .png, .tiff, .jpg…

print_size : tuple | None

The desired print size. This argument should be used in association with the dpi and unit inputs. print_size should be a tuple of two floats describing the (width, height) of the exported image at the given dpi level. The final image might not have exactly the desired size, but a compromise is found that respects the width/height proportion of the original image.

dpi : float | 300.

Dots per inch for printing the image.

unit : {‘centimeter’, ‘millimeter’, ‘pixel’, ‘inch’}

Unit of the printed size.

factor : float | None

If you don’t want to use the print_size input, factor simply multiplies the resolution of your screen.

region : tuple | None

Select a specific region. Must be a tuple of four integers describing (x_start, y_start, width, height).

autocrop : bool | False

Automatically crop the figure so that there is minimal space between the rendered scene and the border of the picture.

bgcolor : array_like/string | None

The background color of the image.

transparent : bool | False

Specify whether the exported figure should have a transparent background.

obj : VisbrainObj | None

Pass a Visbrain object if you want to use the camera of another object for the scene rendering.

kwargs : dict | {}

Optional arguments are passed to the VisbrainCanvas class.
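
Two hedged sketches of the resolution options described above, assuming an existing instance tf; the file names and size values are illustrative only:

>>> # Fixed print size: a 10cm x 15cm image at 300 dpi
>>> tf.screenshot('tf_print.png', print_size=(10., 15.), dpi=300., unit='centimeter')
>>> # Or simply double the on-screen resolution and crop the margins
>>> tf.screenshot('tf_factor.png', factor=2., autocrop=True)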

set_data(data, sf=1.0, method='fourier', nperseg=256, f_min=1.0, f_max=160.0, f_step=1.0, baseline=None, norm=None, n_window=None, overlap=0.0, window=None, c_parameter=20, clim=None, cmap='viridis', vmin=None, under=None, vmax=None, over=None)[source]

Compute TF and set data to the ImageObj.
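
A hedged sketch of updating an existing object in place with a new decomposition; the parameter values are illustrative only:

>>> tf.set_data(data, sf=sf, method='wavelet', f_min=1., f_max=60.,
...             f_step=1., cmap='plasma')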

transform

Get the transform value.

under

Get the under value.

visible_obj

Get the visible_obj value.

vmax

Get the vmax value.

vmin

Get the vmin value.
