Model Components

Activation Functions

class myoverse.models.components.activation_functions.PSerf(gamma=1.0, sigma=1.25, stabilisation_term=1e-12)[source]

PSerf activation function from Biswas et al.

Parameters:
  • gamma (float, optional) – The gamma parameter, by default 1.0.

  • sigma (float, optional) – The sigma parameter, by default 1.25.

  • stabilisation_term (float, optional) – The stabilisation term, by default 1e-12.

References

Biswas, K., Kumar, S., Banerjee, S., Pandey, A.K., 2021. ErfAct and PSerf: Non-monotonic smooth trainable Activation Functions. arXiv:2109.04386 [cs].

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

Return type:

Tensor
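As a quick illustration, the Pserf formula from the referenced paper, f(x) = x · erf(γ · ln(1 + e^(σx))), can be sketched in plain Python. The scalar `pserf` helper below is illustrative only, and the placement of the stabilisation term is an assumption, not the library's implementation:

```python
import math

def pserf(x, gamma=1.0, sigma=1.25, eps=1e-12):
    """Scalar sketch of Pserf: x * erf(gamma * ln(1 + exp(sigma * x))).

    `eps` mirrors the module's stabilisation_term; here it is assumed to
    guard the erf argument, and the softplus is linearised for large
    inputs to avoid overflow in exp().
    """
    z = sigma * x
    softplus = math.log1p(math.exp(z)) if z < 30.0 else z
    return x * math.erf(gamma * softplus + eps)
```

For large positive inputs the erf term saturates at 1, so the function behaves like the identity; for large negative inputs it decays smoothly towards zero.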

class myoverse.models.components.activation_functions.SAU(alpha=0.15, n=20000)[source]

SAU activation function from Biswas et al.

Parameters:
  • alpha (float, optional) – The alpha parameter, by default 0.15.

  • n (int, optional) – The n parameter, by default 20000.

References

Biswas, K., Kumar, S., Banerjee, S., Pandey, A.K., 2021. SAU: Smooth activation function using convolution with approximate identities. arXiv:2109.13210 [cs].

forward(x)[source]

Return type:

Tensor
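The paper's title describes the construction: a non-smooth activation (a Leaky ReLU with slope `alpha`) is convolved with an approximate identity, i.e. a kernel that narrows as `n` grows. The plain-Python quadrature sketch below illustrates that idea; the Gaussian mollifier of standard deviation 1/n and the helper names are our assumptions, not the library's closed-form implementation:

```python
import math

def leaky_relu(x, alpha=0.15):
    """The non-smooth base activation being smoothed."""
    return x if x >= 0.0 else alpha * x

def sau(x, alpha=0.15, n=20000, span=6.0, steps=2001):
    """Numerically convolve leaky_relu with a Gaussian approximate
    identity of standard deviation 1/n.

    Large n means a very narrow kernel, so sau tracks leaky_relu away
    from the origin while smoothing out the kink at zero.
    """
    sigma = 1.0 / n
    dt = 2.0 * span * sigma / (steps - 1)
    acc = 0.0
    for i in range(steps):
        t = -span * sigma + i * dt
        weight = math.exp(-t * t / (2.0 * sigma * sigma)) / (sigma * math.sqrt(2.0 * math.pi))
        acc += leaky_relu(x - t, alpha) * weight * dt
    return acc
```

With the default `n=20000` the kernel width is 5e-5, so the smoothed function is numerically indistinguishable from Leaky ReLU except in a tiny neighbourhood of zero.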

class myoverse.models.components.activation_functions.SMU(alpha=0.01, mu=2.5)[source]

SMU activation function from Biswas et al.

Parameters:
  • alpha (float, optional) – The alpha parameter, by default 0.01.

  • mu (float, optional) – The mu parameter, by default 2.5.

References

Biswas, K., Kumar, S., Banerjee, S., Pandey, A.K., 2022. SMU: smooth activation function for deep networks using smoothing maximum technique. arXiv:2111.04682 [cs].

Notes

This version also makes alpha trainable.

forward(x)[source]

Return type:

Tensor
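The smoothing-maximum idea named in the paper's title is max(a, b) = (a + b + |a − b|)/2 with the absolute value replaced by a smooth approximation, |y| ≈ y · erf(μy). A scalar sketch under that reading (the `smu` helper is illustrative, not the library's API, which additionally makes alpha trainable):

```python
import math

def smu(x, alpha=0.01, mu=2.5):
    """Smooth maximum of x and alpha * x:
    max(a, b) = (a + b + |a - b|) / 2, with |y| ~ y * erf(mu * y).
    """
    y = (1.0 - alpha) * x
    return ((1.0 + alpha) * x + y * math.erf(mu * y)) / 2.0
```

For large |x| the erf term saturates, recovering max(x, alpha * x) exactly; near zero the transition is smooth, with smoothness controlled by `mu`.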

class myoverse.models.components.activation_functions.SMU_old(alpha=0.01, mu=2.5)[source]

SMU activation function from Biswas et al.

Warning

This is an older version of the SMU activation function and should not be used.

Parameters:
  • alpha (float, optional) – The alpha parameter, by default 0.01.

  • mu (float, optional) – The mu parameter, by default 2.5.

References

Biswas, K., Kumar, S., Banerjee, S., Pandey, A.K., 2022. SMU: smooth activation function for deep networks using smoothing maximum technique. arXiv:2111.04682 [cs].

forward(x)[source]

Return type:

Tensor

Utilities

class myoverse.models.components.utils.CircularPad[source]

Circular padding layer used in the paper [1].

Parameters:
  • x (torch.Tensor) – Input tensor.

Returns:

Padded tensor.

Return type:

torch.Tensor

References

[1] Sîmpetru, R.C., Osswald, M., Braun, D.I., Oliveira, D.S., Cakici, A.L., Del Vecchio, A., 2022. Accurate Continuous Prediction of 14 Degrees of Freedom of the Hand from Myoelectrical Signals through Convolutive Deep Learning, in: 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 702–706. https://doi.org/10.1109/EMBC48229.2022.9870937

forward(x)[source]

Return type:

Tensor
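Circular padding treats the padded axis as a ring: instead of zero-filling, values wrap around from the opposite end, which suits electrode grids that physically close on themselves. A minimal sketch on a 1-D sequence (the helper name and pad width are illustrative; the actual layer operates on multi-dimensional tensors):

```python
def circular_pad(row, pad=1):
    """Pad a sequence circularly: the last `pad` elements wrap to the
    front and the first `pad` elements wrap to the back."""
    row = list(row)
    return row[-pad:] + row + row[:pad]

# The ends now continue into each other, as on a closed ring.
padded = circular_pad([1, 2, 3, 4], pad=1)  # [4, 1, 2, 3, 4, 1]
```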

class myoverse.models.components.utils.WeightedSum(alpha=0.5)[source]

forward(x, y)[source]

Return type:

Tensor
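WeightedSum has no descriptive docstring; from its signature one plausible reading is a convex combination of its two inputs. The sketch below is a guess from the class name and the `alpha=0.5` default, not behaviour confirmed by the source:

```python
def weighted_sum(x, y, alpha=0.5):
    """Assumed behaviour: blend the two inputs as alpha * x + (1 - alpha) * y."""
    return alpha * x + (1.0 - alpha) * y
```

With the default `alpha=0.5` this would reduce to a plain average of the two inputs.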