torchcvnn.nn.ViTLayer
- class torchcvnn.nn.ViTLayer(num_heads: int, hidden_dim: int, mlp_dim: int, dropout: float = 0.0, attention_dropout: float = 0.0, norm_layer: Callable[..., torch.nn.Module] = torchcvnn.nn.modules.normalization.LayerNorm, device: torch.device | None = None, dtype: torch.dtype = torch.complex64)[source]
- __init__(num_heads: int, hidden_dim: int, mlp_dim: int, dropout: float = 0.0, attention_dropout: float = 0.0, norm_layer: Callable[..., torch.nn.Module] = torchcvnn.nn.modules.normalization.LayerNorm, device: torch.device | None = None, dtype: torch.dtype = torch.complex64) → None[source]
The ViT layer cascades a multi-head attention block with a feed-forward network.
- Parameters:
num_heads – Number of heads in the multi-head attention block.
hidden_dim – Hidden dimension of the transformer.
mlp_dim – Hidden dimension of the feed-forward network.
dropout – Dropout rate (default: 0.0).
attention_dropout – Dropout rate in the attention block (default: 0.0).
norm_layer – Normalization layer (default: LayerNorm).
\[\begin{split}x & = x + \text{attn}(\text{norm1}(x))\\ x & = x + \text{ffn}(\text{norm2}(x))\end{split}\]
The FFN block is a two-layer MLP with a modReLU activation function.
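A minimal usage sketch follows. It assumes the layer consumes a complex-valued token sequence of shape (batch, num_tokens, hidden_dim) and returns a tensor of the same shape; the input layout and the concrete sizes below are illustrative assumptions, not part of the documented signature.

```python
import torch
import torchcvnn.nn as c_nn

# Complex-valued ViT layer; hidden_dim must be divisible by num_heads.
layer = c_nn.ViTLayer(
    num_heads=4,
    hidden_dim=64,
    mlp_dim=128,
    dropout=0.1,
    attention_dropout=0.1,
)

# Assumed input layout: (batch, num_tokens, hidden_dim), complex64 to match the default dtype.
x = torch.randn(8, 16, 64, dtype=torch.complex64)
y = layer(x)
print(y.shape)  # expected: torch.Size([8, 16, 64]), same shape as the input
```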
Methods
__init__(num_heads, hidden_dim, mlp_dim[, ...]) – The ViT layer cascades a multi-head attention block with a feed-forward network.
add_module(name, module) – Add a child module to the current module.
apply(fn) – Apply fn recursively to every submodule (as returned by .children()) as well as self.
bfloat16() – Casts all floating point parameters and buffers to bfloat16 datatype.
buffers([recurse]) – Return an iterator over module buffers.
children() – Return an iterator over immediate children modules.
compile(*args, **kwargs) – Compile this Module's forward using torch.compile().
cpu() – Move all model parameters and buffers to the CPU.
cuda([device]) – Move all model parameters and buffers to the GPU.
double() – Casts all floating point parameters and buffers to double datatype.
eval() – Set the module in evaluation mode.
extra_repr() – Return the extra representation of the module.
float() – Casts all floating point parameters and buffers to float datatype.
forward(x) – Performs the forward pass through the layer using pre-normalization (a sketch follows this table).
get_buffer(target) – Return the buffer given by target if it exists, otherwise throw an error.
get_extra_state() – Return any extra state to include in the module's state_dict.
get_parameter(target) – Return the parameter given by target if it exists, otherwise throw an error.
get_submodule(target) – Return the submodule given by target if it exists, otherwise throw an error.
half() – Casts all floating point parameters and buffers to half datatype.
ipu([device]) – Move all model parameters and buffers to the IPU.
load_state_dict(state_dict[, strict, assign]) – Copy parameters and buffers from state_dict into this module and its descendants.
modules() – Return an iterator over all modules in the network.
mtia([device]) – Move all model parameters and buffers to the MTIA.
named_buffers([prefix, recurse, ...]) – Return an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.
named_children() – Return an iterator over immediate children modules, yielding both the name of the module as well as the module itself.
named_modules([memo, prefix, remove_duplicate]) – Return an iterator over all modules in the network, yielding both the name of the module as well as the module itself.
named_parameters([prefix, recurse, ...]) – Return an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.
parameters([recurse]) – Return an iterator over module parameters.
register_backward_hook(hook) – Register a backward hook on the module.
register_buffer(name, tensor[, persistent]) – Add a buffer to the module.
register_forward_hook(hook, *[, prepend, ...]) – Register a forward hook on the module.
register_forward_pre_hook(hook, *[, ...]) – Register a forward pre-hook on the module.
register_full_backward_hook(hook[, prepend]) – Register a backward hook on the module.
register_full_backward_pre_hook(hook[, prepend]) – Register a backward pre-hook on the module.
register_load_state_dict_post_hook(hook) – Register a post-hook to be run after module's load_state_dict() is called.
register_load_state_dict_pre_hook(hook) – Register a pre-hook to be run before module's load_state_dict() is called.
register_module(name, module) – Alias for add_module().
register_parameter(name, param) – Add a parameter to the module.
register_state_dict_post_hook(hook) – Register a post-hook for the state_dict() method.
register_state_dict_pre_hook(hook) – Register a pre-hook for the state_dict() method.
requires_grad_([requires_grad]) – Change if autograd should record operations on parameters in this module.
set_extra_state(state) – Set extra state contained in the loaded state_dict.
set_submodule(target, module[, strict]) – Set the submodule given by target if it exists, otherwise throw an error.
share_memory()
state_dict(*args[, destination, prefix, ...]) – Return a dictionary containing references to the whole state of the module.
to(*args, **kwargs) – Move and/or cast the parameters and buffers.
to_empty(*, device[, recurse]) – Move the parameters and buffers to the specified device without copying storage.
train([mode]) – Set the module in training mode.
type(dst_type) – Casts all parameters and buffers to dst_type.
xpu([device]) – Move all model parameters and buffers to the XPU.
zero_grad([set_to_none]) – Reset gradients of all model parameters.
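The forward(x) method above applies the pre-normalization residual scheme given by the equations at the top of this page. The block below is a minimal sketch of that control flow only; the class name PreNormBlock and the submodule names (norm1, attn, norm2, ffn) are illustrative placeholders, not torchcvnn's actual implementation or attribute names.

```python
import torch
import torch.nn as nn

class PreNormBlock(nn.Module):
    """Illustrative pre-norm residual block mirroring the ViTLayer equations;
    not torchcvnn's implementation."""

    def __init__(self, norm1: nn.Module, attn: nn.Module, norm2: nn.Module, ffn: nn.Module):
        super().__init__()
        self.norm1, self.attn = norm1, attn
        self.norm2, self.ffn = norm2, ffn

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x = x + attn(norm1(x)): residual branch around the multi-head attention
        x = x + self.attn(self.norm1(x))
        # x = x + ffn(norm2(x)): residual branch around the two-layer MLP (modReLU activation)
        x = x + self.ffn(self.norm2(x))
        return x
```

In ViTLayer, attn stands for the complex-valued multi-head attention block and ffn for the two-layer MLP with modReLU described above.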
Attributes
T_destination
call_super_init
dump_patches
training