lfxai.models.images module
- class AutoEncoderMnist(encoder: EncoderMnist, decoder: DecoderMnist, latent_dim: int, input_pert: callable, name: str = 'model', loss_f: callable = MSELoss())
Bases: Module
- fit(device: device, train_loader: DataLoader, test_loader: DataLoader, save_dir: Path, n_epoch: int = 30, patience: int = 10, checkpoint_interval: int = -1) None
- forward(x)
Forward pass of model.
Parameters:
- x: torch.Tensor
Batch of data. Shape (batch_size, n_chan, height, width)
- load_metadata(directory: Path) dict
Load the metadata of a training directory.
Parameters:
- directory: pathlib.Path
Path to folder where model is saved. For example ‘./experiments/mnist’.
- save(directory: Path) None
Save a model and corresponding metadata.
Parameters:
- directory: pathlib.Path
Path to the directory where the data should be saved.
- save_metadata(directory: Path, **kwargs) None
Save the metadata of a training directory.
Parameters:
- directory: pathlib.Path
Path to the folder where the model is saved. For example ‘./experiments/mnist’.
- kwargs:
Additional arguments to json.dump
- test_epoch(device: device, dataloader: DataLoader)
- train_epoch(device: device, dataloader: DataLoader, optimizer: Optimizer) ndarray
- training: bool
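A minimal usage sketch assembled from the signatures above. The MNIST data pipeline and the identity input_pert are illustrative assumptions, not part of the documented API:

```python
from pathlib import Path

import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

from lfxai.models.images import AutoEncoderMnist, DecoderMnist, EncoderMnist

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
latent_dim = 10

# input_pert perturbs inputs before encoding; the identity is used here
# purely for illustration.
encoder = EncoderMnist(encoded_space_dim=latent_dim)
decoder = DecoderMnist(encoded_space_dim=latent_dim)
autoencoder = AutoEncoderMnist(
    encoder, decoder, latent_dim, input_pert=lambda x: x, name="ae_mnist"
)

transform = transforms.ToTensor()
train_loader = DataLoader(
    datasets.MNIST("data", train=True, download=True, transform=transform),
    batch_size=128,
    shuffle=True,
)
test_loader = DataLoader(
    datasets.MNIST("data", train=False, download=True, transform=transform),
    batch_size=128,
)

save_dir = Path("./experiments/mnist")
save_dir.mkdir(parents=True, exist_ok=True)
autoencoder.fit(device, train_loader, test_loader, save_dir, n_epoch=30)
autoencoder.save(save_dir)
```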
- class BetaTcVaeMnist(latent_dims: int = 10, beta: int = 1)
Bases: Module
- fit(device: device, train_loader: DataLoader, test_loader: DataLoader, n_epoch: int = 30) None
- forward(x)
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- latent_sample(mu, logvar)
- loss(recon_x: Tensor, x: Tensor, mu: Tensor, logvar: Tensor, z: Tensor, dataset_size: int) Tensor
Evaluates the β-TC-VAE objective (see the sketch after this class entry).
- test_epoch(device: device, dataloader: DataLoader)
- train_epoch(device: device, dataloader: DataLoader, optimizer: Optimizer) ndarray
- training: bool
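As background on loss: the name points to the β-TC-VAE objective (Chen et al., 2018), which decomposes the KL term so that beta weights only the total-correlation part, with dataset_size entering through minibatch-weighted sampling of the aggregate posterior. The standard decomposition is shown below as context, not as a line-by-line description of the implementation:

```latex
\mathcal{L} = \mathbb{E}_{q(z \mid x)}\big[\log p(x \mid z)\big]
  - \alpha \, I_q(x; z)
  - \beta \, \mathrm{KL}\Big(q(z) \,\Big\|\, \prod\nolimits_j q(z_j)\Big)
  - \gamma \sum\nolimits_j \mathrm{KL}\big(q(z_j) \,\|\, p(z_j)\big)
```

Here alpha = gamma = 1 in the common setting, and beta is the constructor argument above.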
- class BetaVaeMnist(latent_dims: int = 10, beta: int = 1)
Bases: Module
- fit(device: device, train_loader: DataLoader, test_loader: DataLoader, n_epoch: int = 30) None
- forward(x)
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- latent_sample(mu, logvar)
- loss(recon_x: Tensor, x: Tensor, mu: Tensor, logvar: Tensor, dataset_size: int) Tensor
Evaluates the β-VAE objective (see the sketch after this class entry).
- test_epoch(device: device, dataloader: DataLoader) ndarray
- train_epoch(device: device, dataloader: DataLoader, optimizer: Optimizer) ndarray
- training: bool
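As background on loss: a β-VAE objective of this form is reconstruction error plus a beta-weighted KL divergence to the standard normal prior. The sketch below is the standard formulation, assuming a Bernoulli (binary cross-entropy) reconstruction term and ignoring dataset_size; the class's actual terms may differ:

```python
import torch
import torch.nn.functional as F

def beta_vae_loss_sketch(recon_x, x, mu, logvar, beta=1.0):
    # Reconstruction term (Bernoulli likelihood -> binary cross-entropy).
    recon = F.binary_cross_entropy(recon_x, x, reduction="sum")
    # Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl
```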
- class ClassifierLatent(latent_dims: int)
Bases: Module
- forward(x)
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class ClassifierMnist(encoder: EncoderMnist, latent_dim: int, name: str)
Bases: Module
- fit(device: device, train_loader: DataLoader, test_loader: DataLoader, save_dir: Path, n_epoch: int = 30, patience: int = 10) None
- forward(x)
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- load_metadata(directory: Path) dict
Load the metadata of a training directory.
Parameters:
- directory: pathlib.Path
Path to folder where model is saved. For example ‘./experiments/mnist’.
- save(directory: Path) None
Save a model and corresponding metadata.
Parameters:
- directory: pathlib.Path
Path to the directory where the data should be saved.
- save_metadata(directory: Path, **kwargs) None
Save the metadata of a training directory.
Parameters:
- directory: pathlib.Path
Path to the folder where the model is saved. For example ‘./experiments/mnist’.
- kwargs:
Additional arguments to json.dump
- test_epoch(device: device, dataloader: DataLoader)
- train_epoch(device: device, dataloader: DataLoader, optimizer: Optimizer) ndarray
- training: bool
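A minimal usage sketch mirroring the autoencoder example above; pairing the classifier with a fresh EncoderMnist is an assumption based on the constructor signature:

```python
from pathlib import Path

import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

from lfxai.models.images import ClassifierMnist, EncoderMnist

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
encoder = EncoderMnist(encoded_space_dim=10)
classifier = ClassifierMnist(encoder, latent_dim=10, name="classifier_mnist")

transform = transforms.ToTensor()
train_loader = DataLoader(
    datasets.MNIST("data", train=True, download=True, transform=transform),
    batch_size=128,
    shuffle=True,
)
test_loader = DataLoader(
    datasets.MNIST("data", train=False, download=True, transform=transform),
    batch_size=128,
)

save_dir = Path("./experiments/mnist_classifier")
save_dir.mkdir(parents=True, exist_ok=True)
classifier.fit(device, train_loader, test_loader, save_dir, n_epoch=30, patience=10)
```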
- class DecoderBurgess(img_size, latent_dim=10)
Bases: Module
- forward(z)
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class DecoderMnist(encoded_space_dim)
Bases: Module
- forward(x)
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class EncoderBurgess(img_size, latent_dim=10)
Bases: Module
- forward(x)
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- mu(x)
- training: bool
- class EncoderMnist(encoded_space_dim)
Bases: Module
- forward(x)
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class SimCLR(base_encoder, projection_dim=128)
Bases: Module
- fit(args: DictConfig, device: device) None
- forward(x)
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- static get_color_distortion(s=0.5)
- static get_lr(step, total_steps, lr_max, lr_min)
Compute the learning rate according to a cosine annealing schedule (see the sketch after this class entry).
- static nt_xent(x, t=0.5)
Normalized-temperature cross-entropy (NT-Xent) contrastive loss with temperature t (see the sketch after this class entry).
- training: bool
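Two background sketches for the static helpers above; both are standard formulations stated as assumptions about the implementation, not extracted from it. The NT-Xent sketch assumes rows i and i + N of x hold the two augmented views of sample i:

```python
import math

import torch
import torch.nn.functional as F

def cosine_lr_sketch(step, total_steps, lr_max, lr_min):
    # Cosine annealing: starts at lr_max and decays smoothly to lr_min.
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * step / total_steps))

def nt_xent_sketch(x: torch.Tensor, t: float = 0.5) -> torch.Tensor:
    # x: (2N, d) batch of projections; rows i and i+N are two views of sample i.
    x = F.normalize(x, dim=1)
    sim = x @ x.t() / t                # temperature-scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))  # a sample is never its own positive
    n = x.shape[0] // 2
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    # Cross-entropy over the similarity rows is exactly the NT-Xent loss.
    return F.cross_entropy(sim, targets)
```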
- class VAE(img_size: tuple, encoder: EncoderBurgess, decoder: DecoderBurgess, latent_dim: int, loss_f: BaseVAELoss, name: str = 'model')
Bases: Module
- fit(device: device, train_loader: DataLoader, test_loader: DataLoader, save_dir: Path, n_epoch: int = 30, patience: int = 10) None
- forward(x)
Forward pass of model.
Parameters:
- x: torch.Tensor
Batch of data. Shape (batch_size, n_chan, height, width)
- load_metadata(directory: Path) dict
Load the metadata of a training directory.
Parameters:
- directory: pathlib.Path
Path to folder where model is saved. For example ‘./experiments/mnist’.
- reparameterize(mean, logvar)
Samples from a normal distribution using the reparameterization trick (see the sketch after this class entry).
Parameters:
- mean: torch.Tensor
Mean of the normal distribution. Shape (batch_size, latent_dim)
- logvar: torch.Tensor
Diagonal log variance of the normal distribution. Shape (batch_size, latent_dim)
- sample_latent(x)
Returns a sample from the latent distribution.
Parameters:
- x: torch.Tensor
Batch of data. Shape (batch_size, n_chan, height, width)
- save(directory: Path) None
Save a model and corresponding metadata.
Parameters:
- directory: pathlib.Path
Path to the directory where the data should be saved.
- save_metadata(directory: Path, **kwargs) None
Save the metadata of a training directory.
Parameters:
- directory: pathlib.Path
Path to the folder where the model is saved. For example ‘./experiments/mnist’.
- kwargs:
Additional arguments to json.dump
- test_epoch(device: device, dataloader: DataLoader)
- train_epoch(device: device, dataloader: DataLoader, optimizer: Optimizer) ndarray
- training: bool
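A minimal sketch of the reparameterization trick that reparameterize implements, in its standard form (the class's exact code may differ in detail):

```python
import torch

def reparameterize_sketch(mean: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    std = torch.exp(0.5 * logvar)  # sigma = exp(logvar / 2)
    eps = torch.randn_like(std)    # eps ~ N(0, I)
    return mean + eps * std        # z ~ N(mean, diag(sigma^2)), differentiable in mean/logvar

# Shapes follow the docstring: (batch_size, latent_dim).
z = reparameterize_sketch(torch.zeros(4, 10), torch.zeros(4, 10))
```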
- class VarDecoderMnist(c: int = 64, latent_dims: int = 10)
Bases: Module
- forward(x)
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool
- class VarEncoderMnist(c: int = 64, latent_dims: int = 10)
Bases: Module
- forward(x)
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- mu(x)
- training: bool
- init_vae(img_size, latent_dim, loss_f, name)
Return an instance of a VAE with a matching encoder and decoder.
- log_density_gaussian(x: Tensor, mu: Tensor, logvar: Tensor)
Computes the log pdf of the Gaussian with parameters mu and logvar at x.
Parameters:
- x: Tensor
Point at which the Gaussian PDF is to be evaluated
- mu: Tensor
Mean of the Gaussian distribution
- logvar: Tensor
Log variance of the Gaussian distribution
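A reference sketch of the formula this helper evaluates element-wise; this is the standard diagonal-Gaussian log density, stated as background rather than copied from the implementation:

```python
import math

import torch

def log_density_gaussian_sketch(x, mu, logvar):
    # log N(x | mu, sigma^2) with sigma^2 = exp(logvar):
    # -0.5 * log(2*pi) - 0.5 * logvar - (x - mu)^2 / (2 * sigma^2)
    return -0.5 * (math.log(2 * math.pi) + logvar) - (x - mu) ** 2 / (2 * torch.exp(logvar))
```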