torchcvnn.nn

Convolution layers

Implementation of torch.nn.ConvTranspose2d for complex numbers.
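Since the class names were not preserved in this extract, here is a hedged, self-contained sketch of how a complex transposed convolution can be realized with two real-valued torch.nn.ConvTranspose2d modules, using (x + iy)(a + ib) = (xa - yb) + i(xb + ya). The class name ComplexConvTranspose2d is illustrative, and this is not necessarily torchcvnn's implementation.

```python
import torch
import torch.nn as nn

class ComplexConvTranspose2d(nn.Module):
    """Transposed convolution on complex tensors, implemented with two
    real-valued ConvTranspose2d modules via
    (x + iy) * (a + ib) = (x*a - y*b) + i(x*b + y*a)."""

    def __init__(self, in_channels, out_channels, kernel_size, **kwargs):
        super().__init__()
        self.tconv_r = nn.ConvTranspose2d(in_channels, out_channels, kernel_size, **kwargs)
        self.tconv_i = nn.ConvTranspose2d(in_channels, out_channels, kernel_size, **kwargs)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        x, y = z.real, z.imag
        real = self.tconv_r(x) - self.tconv_i(y)
        imag = self.tconv_i(x) + self.tconv_r(y)
        return torch.complex(real, imag)

z = torch.randn(1, 3, 8, 8, dtype=torch.complex64)
out = ComplexConvTranspose2d(3, 16, kernel_size=2, stride=2)(z)  # (1, 16, 16, 16)
```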
Pooling layers
UpSampling layers

Applies the same upsampling independently to the real and imaginary parts.
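A minimal sketch of this wrapper pattern, under the assumption of a hypothetical class name ComplexUpsample (not necessarily the library's class):

```python
import torch
import torch.nn as nn

class ComplexUpsample(nn.Module):
    """Upsamples a complex tensor by applying the same real-valued
    nn.Upsample to the real and imaginary parts independently."""

    def __init__(self, **kwargs):
        super().__init__()
        self.up = nn.Upsample(**kwargs)  # e.g. scale_factor=2, mode="nearest"

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return torch.complex(self.up(z.real), self.up(z.imag))

z = torch.randn(1, 3, 8, 8, dtype=torch.complex64)
out = ComplexUpsample(scale_factor=2)(z)  # (1, 3, 16, 16)
```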
Activations

Type A

These activation functions apply the same function independently to both the real and imaginary components; a minimal sketch of this pattern follows the list.

Applies a CELU independently on both the real and imaginary parts.
Applies an ELU independently on both the real and imaginary parts.
Applies a GELU independently on both the real and imaginary parts.
Applies a ReLU independently on both the real and imaginary parts.
Applies a PReLU independently on both the real and imaginary parts.
Applies a Sigmoid independently on both the real and imaginary parts.
Applies a Tanh independently on both the real and imaginary parts.
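All Type A activations share one wrapper pattern: apply a real-valued activation separately to the real and imaginary parts. The sketch below shows that pattern; TypeAActivation is an illustrative name, not torchcvnn's internal class.

```python
import torch
import torch.nn as nn

class TypeAActivation(nn.Module):
    """Type A pattern: apply a real-valued activation independently
    to the real and imaginary parts of a complex tensor."""

    def __init__(self, activation: nn.Module):
        super().__init__()
        self.activation = activation

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return torch.complex(self.activation(z.real), self.activation(z.imag))

z = torch.randn(4, 8, dtype=torch.complex64)
crelu = TypeAActivation(nn.ReLU())   # behaves like a complex ReLU
ctanh = TypeAActivation(nn.Tanh())   # behaves like a complex Tanh
out = crelu(z)
```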
Type B

The Type B activation functions take into account both the magnitude and phase of the input; sketches of some of these functions follow the list.

The cardioid activation function as proposed by Virtue et al. (2019) is given by cardioid(z) = (1/2)(1 + cos(arg z)) z.
Extracts the magnitude of the complex input.
Applies a ReLU with parametric offset on the amplitude, keeping the phase unchanged.
Applies a zAbsReLU.
Applies a zLeakyReLU.
Applies a zReLU.
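To make the Type B definitions concrete, here is a self-contained sketch of three of them as they are commonly formulated in the literature: the cardioid above, a modReLU-style parametric amplitude offset, and zReLU, which keeps z only where both its real and imaginary parts are positive. These are standard formulations with illustrative names, not necessarily torchcvnn's exact implementations.

```python
import torch
import torch.nn as nn

def cardioid(z: torch.Tensor) -> torch.Tensor:
    # cardioid(z) = 1/2 * (1 + cos(arg z)) * z
    return 0.5 * (1 + torch.cos(z.angle())) * z

class ModReLU(nn.Module):
    """ReLU with a learnable offset b on the amplitude, phase unchanged:
    f(z) = relu(|z| + b) * z / |z|."""

    def __init__(self):
        super().__init__()
        self.b = nn.Parameter(torch.zeros(1))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        mod = z.abs()
        return torch.relu(mod + self.b) * z / (mod + 1e-12)  # eps avoids 0/0

def zrelu(z: torch.Tensor) -> torch.Tensor:
    # Keep z only where both real and imaginary parts are positive.
    mask = (z.real > 0) & (z.imag > 0)
    return torch.where(mask, z, torch.zeros_like(z))

z = torch.randn(5, dtype=torch.complex64)
print(cardioid(z), ModReLU()(z), zrelu(z))
```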
Attention

This class is adapted from torch.nn.MultiheadAttention to support complex-valued tensors.
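A common way to adapt scaled dot-product attention to complex tensors is to score with the real part of the Hermitian inner product, Re(q k^H), so the softmax stays real-valued while the values remain complex. The sketch below shows that core computation; it illustrates the general technique and is not claimed to be this class's exact adaptation.

```python
import math
import torch

def complex_scaled_dot_product_attention(q, k, v):
    """Attention over complex tensors of shape (..., seq, d).
    Scores use Re(q @ k^H) so softmax operates on real numbers;
    the resulting real weights then mix the complex values."""
    d = q.shape[-1]
    scores = torch.matmul(q, k.conj().transpose(-2, -1)).real / math.sqrt(d)
    weights = torch.softmax(scores, dim=-1)          # real (..., seq, seq)
    return torch.matmul(weights.to(v.dtype), v)      # complex output

q = torch.randn(2, 5, 16, dtype=torch.complex64)
k = torch.randn(2, 5, 16, dtype=torch.complex64)
v = torch.randn(2, 5, 16, dtype=torch.complex64)
out = complex_scaled_dot_product_attention(q, k, v)  # (2, 5, 16)
```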
Normalization layers

BatchNorm for complex-valued neural networks.
BatchNorm for complex-valued neural networks.
Implementation of torch.nn.LayerNorm for complex numbers.
Implementation of torch.nn.RMSNorm for complex numbers.
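As a rough illustration of normalizing complex activations, the sketch below centers by the complex mean and scales by sqrt(E[|z - mu|^2]) per feature. Full complex batch normalization (e.g. Trabelsi et al., 2018) instead whitens with the 2x2 covariance of the real and imaginary parts; this simplified, hypothetically named version only conveys the idea and is not torchcvnn's implementation.

```python
import torch
import torch.nn as nn

class NaiveComplexBatchNorm(nn.Module):
    """Simplified complex batch norm over (batch, features): center by the
    complex mean, then divide by sqrt(E[|z - mu|^2]) per feature. A full
    complex BN would whiten the 2x2 real/imag covariance instead."""

    def __init__(self, eps: float = 1e-5):
        super().__init__()
        self.eps = eps

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        mu = z.mean(dim=0, keepdim=True)                        # complex mean
        centered = z - mu
        var = centered.abs().pow(2).mean(dim=0, keepdim=True)   # real, E|z - mu|^2
        return centered / torch.sqrt(var + self.eps)

z = torch.randn(32, 8, dtype=torch.complex64)
out = NaiveComplexBatchNorm()(z)   # zero-mean, unit-power per feature
```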
Transformer layers

A transformer model.
TransformerEncoderLayer is made up of self-attention and a feedforward network.
TransformerDecoderLayer is made up of self-attention, multi-head cross-attention, and a feedforward network.
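To tie the pieces together, here is a toy complex-valued encoder layer in the spirit of torch.nn.TransformerEncoderLayer, composed from the hedged building blocks above: complex linear maps, attention scores via Re(q k^H), and a Type A ReLU. All names and the exact structure (plain residuals, no normalization) are illustrative assumptions, not the library's classes.

```python
import math
import torch
import torch.nn as nn

class ComplexLinear(nn.Module):
    """Complex linear map via the (x + iy)(A + iB) decomposition."""
    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        self.fr = nn.Linear(d_in, d_out)
        self.fi = nn.Linear(d_in, d_out)
    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return torch.complex(self.fr(z.real) - self.fi(z.imag),
                             self.fi(z.real) + self.fr(z.imag))

class ComplexEncoderLayer(nn.Module):
    """Toy encoder layer: complex self-attention (real softmax over
    Re(q k^H)) followed by a Type A ReLU feedforward, with residuals."""
    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.wq = ComplexLinear(d_model, d_model)
        self.wk = ComplexLinear(d_model, d_model)
        self.wv = ComplexLinear(d_model, d_model)
        self.ff1 = ComplexLinear(d_model, d_ff)
        self.ff2 = ComplexLinear(d_ff, d_model)
    def attention(self, z: torch.Tensor) -> torch.Tensor:
        q, k, v = self.wq(z), self.wk(z), self.wv(z)
        scores = torch.matmul(q, k.conj().transpose(-2, -1)).real / math.sqrt(q.shape[-1])
        return torch.matmul(torch.softmax(scores, dim=-1).to(v.dtype), v)
    def forward(self, z: torch.Tensor) -> torch.Tensor:
        z = z + self.attention(z)                                  # residual around attention
        h = self.ff1(z)
        h = torch.complex(torch.relu(h.real), torch.relu(h.imag))  # Type A ReLU
        return z + self.ff2(h)                                     # residual around feedforward

z = torch.randn(2, 10, 32, dtype=torch.complex64)
out = ComplexEncoderLayer(32, 64)(z)   # (2, 10, 32)
```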