torchcvnn.nn

Convolution layers

ConvTranspose2d(in_channels, out_channels, ...)

Implementation of torch.nn.ConvTranspose2d for complex numbers.
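
A minimal sketch, assuming the package is importable as torchcvnn.nn and that the constructor mirrors torch.nn.ConvTranspose2d; the shapes and kernel settings are illustrative:

>>> import torch
>>> import torchcvnn.nn as c_nn
>>> x = torch.randn(2, 16, 8, 8, dtype=torch.complex64)
>>> up = c_nn.ConvTranspose2d(16, 8, kernel_size=2, stride=2)
>>> up(x).shape  # (8 - 1) * 2 + 2 = 16: spatial size is doubled
torch.Size([2, 8, 16, 16])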

Pooling layers

MaxPool2d(kernel_size[, stride, padding, ...])

Applies a 2D max pooling based on the modulus of the input signal.

AvgPool2d(kernel_size[, stride, padding, ...])

Implementation of torch.nn.AvgPool2d for complex numbers.
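
A minimal sketch of both pooling layers, assuming they follow the torch.nn calling convention; shapes are illustrative:

>>> import torch
>>> import torchcvnn.nn as c_nn
>>> x = torch.randn(1, 4, 8, 8, dtype=torch.complex64)
>>> c_nn.MaxPool2d(kernel_size=2)(x).shape  # selects values by maximal modulus
torch.Size([1, 4, 4, 4])
>>> c_nn.AvgPool2d(kernel_size=2)(x).shape  # averages the complex values
torch.Size([1, 4, 4, 4])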

Upsampling layers

Upsample([size, scale_factor, mode, ...])

Applies the same upsampling independently to the real and imaginary parts.
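
A minimal sketch, assuming the torch.nn.Upsample keyword arguments; nearest-neighbour upsampling here doubles each spatial dimension:

>>> import torch
>>> import torchcvnn.nn as c_nn
>>> x = torch.randn(1, 4, 8, 8, dtype=torch.complex64)
>>> c_nn.Upsample(scale_factor=2, mode="nearest")(x).shape
torch.Size([1, 4, 16, 16])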

Activations

Type A

These activation functions apply the same real-valued function independently to the real and imaginary components of the input; a short usage sketch follows the list below.

CCELU()

Applies a CELU independently on both the real and imaginary parts.

CELU()

Applies an ELU independently on both the real and imaginary parts.

CGELU()

Applies a GELU independently on both the real and imaginary parts.

CReLU()

Applies a ReLU independently on both the real and imaginary parts.

CPReLU()

Applies a PReLU independently on both the real and imaginary parts.

CSigmoid()

Applies a Sigmoid independently on both the real and imaginary parts.

CTanh()

Applies a Tanh independently on both the real and imaginary parts.
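
A minimal sketch of the Type A behaviour using CReLU; the input values are illustrative:

>>> import torch
>>> import torchcvnn.nn as c_nn
>>> z = torch.tensor([1.0 - 2.0j, -1.0 + 2.0j])
>>> c_nn.CReLU()(z)  # ReLU applied separately to the real and imaginary parts
tensor([1.+0.j, 0.+2.j])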

Type B

The Type B activation functions take both the magnitude and the phase of the input into account; a short usage sketch follows the list below.

Cardioid()

The cardioid activation function, as proposed by Virtue et al. (2019), is given by cardioid(z) = (1/2) (1 + cos(arg(z))) z: the input is scaled by a phase-dependent factor, so values on the positive real axis pass through unchanged while values on the negative real axis are attenuated to zero.

Mod()

Extracts the magnitude of the complex input.

modReLU()

Applies a ReLU with parametric offset on the amplitude, keeping the phase unchanged.

zAbsReLU()

Applies a zAbsReLU, which gates the input on its magnitude, zeroing values whose modulus falls below a threshold.

zLeakyReLU()

Applies a zLeakyReLU, a leaky variant of zReLU that scales the input by a small slope instead of zeroing it outside the first quadrant.

zReLU()

Applies a zReLU, which keeps the input when both its real and imaginary parts are positive and returns zero otherwise.
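
A minimal sketch of the Type B behaviour using Mod and zReLU; the input values are illustrative, and the zReLU output assumes the first-quadrant definition given above:

>>> import torch
>>> import torchcvnn.nn as c_nn
>>> c_nn.Mod()(torch.tensor([3.0 + 4.0j]))  # magnitude only, real-valued output
tensor([5.])
>>> c_nn.zReLU()(torch.tensor([1.0 + 1.0j, 1.0 - 1.0j]))
tensor([1.+1.j, 0.+0.j])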

Attention

MultiheadAttention(embed_dim, num_heads[, ...])

This class is adapted from torch.nn.MultiheadAttention to support complex-valued tensors.
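
A minimal sketch, assuming the layer keeps the torch.nn defaults: a (seq, batch, embed) layout and an (output, attention weights) return pair:

>>> import torch
>>> import torchcvnn.nn as c_nn
>>> mha = c_nn.MultiheadAttention(embed_dim=16, num_heads=2)
>>> x = torch.randn(5, 1, 16, dtype=torch.complex64)
>>> out, _ = mha(x, x, x)  # complex-valued self-attention
>>> out.shape
torch.Size([5, 1, 16])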

Normalization layers

BatchNorm2d(num_features[, eps, momentum, ...])

BatchNorm for complex-valued neural networks.

BatchNorm1d(num_features[, eps, momentum, ...])

BatchNorm for complex-valued neural networks.

LayerNorm(normalized_shape[, eps, ...])

Implementation of torch.nn.LayerNorm for complex numbers.

RMSNorm(normalized_shape[, eps, ...])

Implementation of torch.nn.RMSNorm for complex numbers.
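
A minimal sketch of the normalization layers, assuming the torch.nn constructor conventions; shapes are illustrative:

>>> import torch
>>> import torchcvnn.nn as c_nn
>>> x = torch.randn(4, 8, 16, 16, dtype=torch.complex64)
>>> c_nn.BatchNorm2d(8)(x).shape  # per-channel statistics over the batch
torch.Size([4, 8, 16, 16])
>>> tokens = torch.randn(4, 10, 32, dtype=torch.complex64)
>>> c_nn.LayerNorm(32)(tokens).shape  # normalizes over the last dimension
torch.Size([4, 10, 32])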

Transformer layers

Transformer(d_model, nhead, ...[, device])

A transformer model.

TransformerEncoderLayer(d_model, nhead, ...)

TransformerEncoderLayer is made up of a self-attention block and a feedforward network.

TransformerDecoderLayer(d_model, nhead, ...)

TransformerDecoderLayer is made up of self-attention, multi-head cross-attention, and a feedforward network.
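
A minimal sketch using the encoder layer, assuming the torch.nn default (seq, batch, d_model) layout:

>>> import torch
>>> import torchcvnn.nn as c_nn
>>> layer = c_nn.TransformerEncoderLayer(d_model=16, nhead=2)
>>> src = torch.randn(10, 1, 16, dtype=torch.complex64)
>>> layer(src).shape  # self-attention plus feedforward, shape preserved
torch.Size([10, 1, 16])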

Vision Transformer

ViTLayer(num_heads, hidden_dim, mlp_dim, ...)

A single Vision Transformer encoder layer operating on complex-valued tokens.

ViT(patch_embedder, num_layers, num_heads, ...)

A Vision Transformer built from a patch embedder followed by a stack of ViTLayer blocks.

Dropout layers

Dropout([p])

Applies dropout, zeroing out individual values of the input.

Dropout2d([p])

Applies 2D dropout, zeroing out entire channels of the input.
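
A minimal sketch; freshly built modules are in training mode, so dropout is active, and a dropped entry is assumed to have both its real and imaginary parts zeroed together:

>>> import torch
>>> import torchcvnn.nn as c_nn
>>> z = torch.randn(1, 4, 8, 8, dtype=torch.complex64)
>>> c_nn.Dropout(p=0.5)(z).shape  # zeroes individual values
torch.Size([1, 4, 8, 8])
>>> c_nn.Dropout2d(p=0.5)(z).shape  # zeroes whole channels
torch.Size([1, 4, 8, 8])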