Models
torch_points3d.applications.sparseconv3d.SparseConv3d(architecture: str = None, input_nc: int = None, num_layers: int = None, config: omegaconf.DictConfig = None, backend: str = 'minkowski', *args, **kwargs)

Create a Sparse Conv backbone model based on the architecture proposed in https://arxiv.org/abs/1904.08755

Two backends are available at the moment: torchsparse and minkowski.

Parameters
architecture (str, optional) – Architecture of the model, choose from unet, encoder and decoder
input_nc (int, optional) – Number of channels for the input
output_nc (int, optional) – If specified, then we add a fully connected head at the end of the network to provide the requested dimension
num_layers (int, optional) – Depth of the network
config (DictConfig, optional) – Custom config, overrides the num_layers and architecture parameters
block (optional) – Type of resnet block, ResBlock by default but can be any of the blocks in modules/SparseConv3d/modules.py
backend (str, optional) – Backend to use, either torchsparse or minkowski
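A minimal call sketch for the parameters above. The argument values are illustrative assumptions, not defaults; only the parameter names come from the signature. Running the commented-out call requires torch_points3d plus the selected sparse backend (e.g. MinkowskiEngine) to be installed:

```python
# Illustrative argument choices for SparseConv3d (values are assumptions):
kwargs = dict(
    architecture="unet",    # one of "unet", "encoder", "decoder"
    input_nc=3,             # e.g. RGB features per input point
    output_nc=13,           # optional fully connected head, e.g. 13 classes
    num_layers=4,           # depth of the network
    backend="minkowski",    # or "torchsparse"
)

# Requires torch_points3d and the chosen backend:
# from torch_points3d.applications.sparseconv3d import SparseConv3d
# model = SparseConv3d(**kwargs)
```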
torch_points3d.applications.kpconv.KPConv(architecture: str = None, input_nc: int = None, num_layers: int = None, config: omegaconf.DictConfig = None, *args, **kwargs)

Create a KPConv backbone model based on the architecture proposed in https://arxiv.org/abs/1904.08889
- Parameters
architecture (str, optional) – Architecture of the model, choose from unet, encoder and decoder
input_nc (int, optional) – Number of channels for the input
output_nc (int, optional) – If specified, then we add a fully connected head at the end of the network to provide the requested dimension
num_layers (int, optional) – Depth of the network
in_grid_size (float, optional) – Size of the grid at the entry of the network. It is divided by two at each layer
in_feat (int, optional) – Number of channels after the first convolution. Doubles at each layer
config (DictConfig, optional) – Custom config, overrides the num_layers and architecture parameters
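The in_grid_size and in_feat descriptions above imply a simple per-layer progression: the grid size is divided by two and the feature width doubled at each layer. A small sketch of that rule (the helper name is hypothetical, not part of torch_points3d):

```python
def kpconv_layer_params(in_grid_size, in_feat, num_layers):
    """Per-layer (grid_size, n_features) pairs, following the parameter
    descriptions above: grid size halves, feature count doubles."""
    params = []
    grid, feat = in_grid_size, in_feat
    for _ in range(num_layers):
        params.append((grid, feat))
        grid /= 2.0
        feat *= 2
    return params

# kpconv_layer_params(0.04, 64, 4)
# -> [(0.04, 64), (0.02, 128), (0.01, 256), (0.005, 512)]
```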
torch_points3d.applications.pointnet2.PointNet2(architecture: str = None, input_nc: int = None, num_layers: int = None, config: omegaconf.DictConfig = None, multiscale=False, *args, **kwargs)

Create a PointNet2 backbone model based on the architecture proposed in https://arxiv.org/abs/1706.02413

Parameters
architecture (str, optional) – Architecture of the model, choose from unet, encoder and decoder
input_nc (int, optional) – Number of channels for the input
output_nc (int, optional) – If specified, then we add a fully connected head at the end of the network to provide the requested dimension
num_layers (int, optional) – Depth of the network
config (DictConfig, optional) – Custom config, overrides the num_layers and architecture parameters
torch_points3d.applications.rsconv.RSConv(architecture: str = None, input_nc: int = None, num_layers: int = None, config: omegaconf.DictConfig = None, *args, **kwargs)

Create an RSConv backbone model based on the architecture proposed in https://arxiv.org/abs/1904.07601
- Parameters
architecture (str, optional) – Architecture of the model, choose from unet, encoder and decoder
input_nc (int, optional) – Number of channels for the input
output_nc (int, optional) – If specified, then we add a fully connected head at the end of the network to provide the requested dimension
num_layers (int, optional) – Depth of the network
config (DictConfig, optional) – Custom config, overrides the num_layers and architecture parameters