V15

Model definition not used in any publication

class doc_octopy.models.definitions.raul_net.online.v15.BaseVariationalLayer_[source]

Initialize internal Module state, shared by both nn.Module and ScriptModule.

kl_div(mu_q, sigma_q, mu_p, sigma_p)[source]

Calculates the KL divergence between two Gaussian distributions (Q || P).

Parameters:
  • mu_q (torch.Tensor) – mu parameter of distribution Q

  • sigma_q (torch.Tensor) – sigma parameter of distribution Q

  • mu_p (float) – mu parameter of distribution P

  • sigma_p (float) – sigma parameter of distribution P

Returns:

The KL divergence as a 0-dimensional (scalar) torch.Tensor.
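
A minimal sketch of the closed-form expression this method evaluates, assuming the per-element KL terms are reduced to their mean (the exact reduction is an implementation detail of the layer):

    import torch

    def kl_div_closed_form(mu_q, sigma_q, mu_p, sigma_p):
        # KL(N(mu_q, sigma_q^2) || N(mu_p, sigma_p^2)) per element, then
        # reduced to a 0-dimensional tensor. The mean reduction is an
        # assumption; the layer's own reduction may differ.
        kl = (torch.log(sigma_p / sigma_q)
              + (sigma_q ** 2 + (mu_q - mu_p) ** 2) / (2 * sigma_p ** 2)
              - 0.5)
        return kl.mean()

    mu_q, sigma_q = torch.zeros(10), torch.full((10,), 0.5)
    kl_div_closed_form(mu_q, sigma_q, mu_p=0.0, sigma_p=1.0)  # tensor(0.3181...)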

class doc_octopy.models.definitions.raul_net.online.v15.Conv3dFlipout(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, prior_mean=0, prior_variance=1, posterior_mu_init=0, posterior_rho_init=-3.0)[source]

Implements a Conv3d layer with the Flipout reparameterization trick.

Inherits from bayesian_torch.layers.BaseVariationalLayer_

Parameters:
  • in_channels (int) – number of channels in the input image

  • out_channels (int) – number of channels produced by the convolution

  • kernel_size (int) – size of the convolving filter

  • stride (int) – stride of the convolution. Default: 1

  • padding (int) – zero-padding added to both sides of the input. Default: 0

  • dilation (int) – spacing between filter elements. Default: 1

  • groups (int) – number of blocked connections from input channels to output channels

  • prior_mean (float) – mean of the (arbitrary) prior distribution used for the complexity cost

  • prior_variance (float) – variance of the (arbitrary) prior distribution used for the complexity cost

  • posterior_mu_init (float) – initial value of the trainable mu parameter representing the mean of the approximate posterior

  • posterior_rho_init (float) – initial value of the trainable rho parameter; the sigma of the approximate posterior is obtained from rho through the softplus function

forward(x)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.

init_parameters()[source]

kl_loss()[source]
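
A minimal usage sketch with hypothetical channel counts and input shape; whether the forward call additionally returns the KL term (as some Flipout implementations do) is not documented here, so the sketch retrieves it via kl_loss() only:

    import torch
    from doc_octopy.models.definitions.raul_net.online.v15 import Conv3dFlipout

    # Hypothetical configuration and input shape, for illustration only.
    layer = Conv3dFlipout(in_channels=1, out_channels=8, kernel_size=3, padding=1)

    x = torch.randn(4, 1, 5, 64, 32)  # (batch, channels, depth, height, width)
    out = layer(x)                    # stochastic forward pass via Flipout
    kl = layer.kl_loss()              # KL term to add to the training loss
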
class doc_octopy.models.definitions.raul_net.online.v15.RaulNetV15(learning_rate, nr_of_input_channels, input_length__samples, nr_of_outputs, cnn_encoder_channels, mlp_encoder_channels, event_search_kernel_length, event_search_kernel_stride, nr_of_electrode_grids=5, nr_of_electrodes_per_grid=64, inference_only=False)[source]

Model definition not used in any publication

Parameters:
  • learning_rate (float)

  • nr_of_input_channels (int)

  • input_length__samples (int)

  • nr_of_outputs (int)

  • cnn_encoder_channels (Tuple[int, int, int])

  • mlp_encoder_channels (Tuple[int, int])

  • event_search_kernel_length (int)

  • event_search_kernel_stride (int)

  • nr_of_electrode_grids (int)

  • nr_of_electrodes_per_grid (int)

  • inference_only (bool)
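
A minimal construction sketch; all hyperparameter values below are placeholders chosen for illustration, not recommended settings from any publication or configuration:

    from doc_octopy.models.definitions.raul_net.online.v15 import RaulNetV15

    # Placeholder hyperparameters, for illustration only.
    model = RaulNetV15(
        learning_rate=1e-3,
        nr_of_input_channels=1,
        input_length__samples=192,
        nr_of_outputs=5,
        cnn_encoder_channels=(32, 64, 64),
        mlp_encoder_channels=(128, 64),
        event_search_kernel_length=31,
        event_search_kernel_stride=8,
        nr_of_electrode_grids=5,
        nr_of_electrodes_per_grid=64,
        inference_only=False,
    )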

learning_rate

The learning rate.

Type:

float

nr_of_input_channels

The number of input channels.

Type:

int

nr_of_outputs

The number of outputs.

Type:

int

cnn_encoder_channels

Tuple of three integers defining the CNN encoder channels.

Type:

Tuple[int, int, int]

mlp_encoder_channels

Tuple of two integers defining the MLP encoder channels.

Type:

Tuple[int, int]

event_search_kernel_length

Integer that sets the length of the kernels searching for action potentials.

Type:

int

event_search_kernel_stride

Integer that sets the stride of the kernels searching for action potentials.

Type:

int

forward(inputs)[source]

Same as torch.nn.Module.forward.

Parameters:
  • *args – Whatever you decide to pass into the forward method.

  • **kwargs – Keyword arguments are also possible.

Returns:

Your model’s output

Return type:

tuple[Tensor, Tensor] | Tensor
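
A minimal forward-pass sketch continuing the construction example above. The input layout (batch, input channels, electrode grids, electrodes per grid, samples) is an assumption inferred from the constructor arguments, not documented behaviour:

    import torch

    # Assumed layout: (batch, nr_of_input_channels, nr_of_electrode_grids,
    # nr_of_electrodes_per_grid, input_length__samples) -- an assumption,
    # not documented behaviour.
    inputs = torch.randn(2, 1, 5, 64, 192)
    outputs = model(inputs)  # tuple[Tensor, Tensor] or a single Tensor, per the return type above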