
Torch Preprocessors

Torch transformers

TorchCenterCrop(size)

Bases: TorchBuiltInTransformer

Class that implements the CenterCrop Transformer from torchvision. It accepts the same parameters you would pass when instantiating CenterCrop directly from torchvision.

TorchVision documentation: here

Source code in clayrs/content_analyzer/information_processor/visual_preprocessors/torch_builtin_transformer.py
def __init__(self, size: int):
    super().__init__(transforms.CenterCrop(size))
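
For instance, a minimal usage sketch. The import path clayrs.content_analyzer (aliased ca) is an assumption about where ClayRS exposes these classes; the class name and parameter come from the signature above.

import clayrs.content_analyzer as ca  # assumed import path

# Crop every input image to a 224x224 square taken from its center
center_crop = ca.TorchCenterCrop(224)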

TorchColorJitter(brightness=0, contrast=0, saturation=0, hue=0)

Bases: TorchBuiltInTransformer

Class that implements the ColorJitter Transformer from torchvision. It accepts the same parameters you would pass when instantiating ColorJitter directly from torchvision.

TorchVision documentation: here

Source code in clayrs/content_analyzer/information_processor/visual_preprocessors/torch_builtin_transformer.py
def __init__(self, brightness: Any = 0, contrast: Any = 0, saturation: Any = 0, hue: Any = 0):
    super().__init__(transforms.ColorJitter(brightness, contrast, saturation, hue))

TorchCompose(transforms_list)

Bases: TorchBuiltInTransformer

Class that implements the Compose Transformer from torchvision. It accepts the same parameters you would pass when instantiating Compose directly from torchvision.

TorchVision documentation: here

The only difference w.r.t. the TorchVision implementation is that, while the original expects a list of Transformer objects as parameter, this implementation expects a list of ImageProcessor objects (i.e., other image pre-processors).

Source code in clayrs/content_analyzer/information_processor/visual_preprocessors/torch_builtin_transformer.py
def __init__(self, transforms_list: List[ImageProcessor]):
    super().__init__(transforms.Compose(transforms_list))
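
Since TorchCompose takes other image pre-processors, a chain can be built entirely from the wrappers on this page. A sketch, again assuming the clayrs.content_analyzer import path:

import clayrs.content_analyzer as ca  # assumed import path

# Resize the shorter side to 256, then center-crop to 224x224
pipeline = ca.TorchCompose(
    transforms_list=[
        ca.TorchResize(256),
        ca.TorchCenterCrop(224),
    ]
)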

TorchConvertImageDtype(dtype)

Bases: TorchBuiltInTransformer

Class that implements the ConvertImageDtype Transformer from torchvision. It accepts the same parameters you would pass when instantiating ConvertImageDtype directly from torchvision.

TorchVision documentation: here

Source code in clayrs/content_analyzer/information_processor/visual_preprocessors/torch_builtin_transformer.py
def __init__(self, dtype: torch.dtype):
    super().__init__(transforms.ConvertImageDtype(dtype))

TorchGaussianBlur(kernel_size, sigma=(0.1, 2.0))

Bases: TorchBuiltInTransformer

Class that implements the GaussianBlur Transformer from torchvision. It accepts the same parameters you would pass when instantiating GaussianBlur directly from torchvision.

TorchVision documentation: here

Source code in clayrs/content_analyzer/information_processor/visual_preprocessors/torch_builtin_transformer.py
def __init__(self, kernel_size: int, sigma: Any = (0.1, 2.0)):
    super().__init__(transforms.GaussianBlur(kernel_size, sigma))

TorchGrayscale(num_output_channels=1)

Bases: TorchBuiltInTransformer

Class that implements the Grayscale Transformer from torchvision. It accepts the same parameters you would pass when instantiating Grayscale directly from torchvision.

TorchVision documentation: here

Source code in clayrs/content_analyzer/information_processor/visual_preprocessors/torch_builtin_transformer.py
def __init__(self, num_output_channels: int = 1):
    super().__init__(transforms.Grayscale(num_output_channels))

TorchLambda(lambd)

Bases: TorchBuiltInTransformer

Class that implements the Lambda Transformer from torchvision. It accepts the same parameters you would pass when instantiating Lambda directly from torchvision.

TorchVision documentation: here

Source code in clayrs/content_analyzer/information_processor/visual_preprocessors/torch_builtin_transformer.py
def __init__(self, lambd: callable):
    super().__init__(transforms.Lambda(lambd))
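
TorchLambda wraps an arbitrary callable operating on the image, which is useful for one-off operations not covered by the built-in transformers. A sketch (import path assumed as before; torch.clamp is standard PyTorch):

import torch
import clayrs.content_analyzer as ca  # assumed import path

# Clamp pixel values of a float image tensor into [0, 1]
clamp = ca.TorchLambda(lambda img: torch.clamp(img, 0.0, 1.0))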

TorchLinearTransformation(transformation_matrix, mean_vector)

Bases: TorchBuiltInTransformer

Class that implements the LinearTransformation Transformer from torchvision. It accepts the same parameters you would pass when instantiating LinearTransformation directly from torchvision.

TorchVision documentation: here

Source code in clayrs/content_analyzer/information_processor/visual_preprocessors/torch_builtin_transformer.py
def __init__(self, transformation_matrix: torch.Tensor, mean_vector: torch.Tensor):
    super().__init__(transforms.LinearTransformation(transformation_matrix, mean_vector))

TorchNormalize(mean, std)

Bases: TorchBuiltInTransformer

Class that implements the Normalize Transformer from torchvision. It accepts the same parameters you would pass when instantiating Normalize directly from torchvision.

TorchVision documentation: here

Source code in clayrs/content_analyzer/information_processor/visual_preprocessors/torch_builtin_transformer.py
def __init__(self, mean: Any, std: Any):
    super().__init__(transforms.Normalize(mean, std, inplace=False))
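
A sketch with the commonly used ImageNet statistics (import path assumed as before; note that Normalize operates on float tensors, so a dtype conversion such as TorchConvertImageDtype may be needed beforehand):

import clayrs.content_analyzer as ca  # assumed import path

# Channel-wise normalization with the usual ImageNet mean/std
normalize = ca.TorchNormalize(mean=[0.485, 0.456, 0.406],
                              std=[0.229, 0.224, 0.225])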

TorchPad(padding, fill=0, padding_mode='constant')

Bases: TorchBuiltInTransformer

Class that implements the Pad Transformer from torchvision. It accepts the same parameters you would pass when instantiating Pad directly from torchvision.

TorchVision documentation: here

Source code in clayrs/content_analyzer/information_processor/visual_preprocessors/torch_builtin_transformer.py
def __init__(self, padding: int, fill: int = 0, padding_mode: str = "constant"):
    super().__init__(transforms.Pad(padding, fill, padding_mode))

TorchRandomAdjustSharpness(sharpness_factor, p=0.5)

Bases: TorchBuiltInTransformer

Class that implements the RandomAdjustSharpness Transformer from torchvision. It accepts the same parameters you would pass when instantiating RandomAdjustSharpness directly from torchvision.

TorchVision documentation: here

Source code in clayrs/content_analyzer/information_processor/visual_preprocessors/torch_builtin_transformer.py
def __init__(self, sharpness_factor: float, p: float = 0.5):
    super().__init__(transforms.RandomAdjustSharpness(sharpness_factor, p))

TorchRandomAffine(degrees, translate=None, scale=None, shear=None, interpolation=InterpolationMode.NEAREST, fill=0, center=None)

Bases: TorchBuiltInTransformer

Class that implements the RandomAffine Transformer from torchvision. It accepts the same parameters you would pass when instantiating RandomAffine directly from torchvision.

TorchVision documentation: here

Source code in clayrs/content_analyzer/information_processor/visual_preprocessors/torch_builtin_transformer.py
def __init__(self, degrees: Any, translate: Any = None, scale: Any = None,
             shear: Any = None, interpolation: InterpolationMode = InterpolationMode.NEAREST,
             fill: Any = 0, center: Any = None):

    super().__init__(transforms.RandomAffine(degrees, translate, scale, shear, interpolation, fill, center))

TorchRandomApply(transforms_list, p=0.5)

Bases: TorchBuiltInTransformer

Class that implements the RandomApply Transformer from torchvision. It accepts the same parameters you would pass when instantiating RandomApply directly from torchvision.

TorchVision documentation: here

The only difference w.r.t. the TorchVision implementation is that, while the original expects a list of Transformer objects as parameter, this implementation expects a list of ImageProcessor objects (i.e., other image pre-processors).

Source code in clayrs/content_analyzer/information_processor/visual_preprocessors/torch_builtin_transformer.py
def __init__(self, transforms_list: List[ImageProcessor], p: float = 0.5):
    super().__init__(transforms.RandomApply(transforms_list, p))
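
As with TorchCompose, the listed pre-processors are applied as one block; here the whole block fires with probability p. A sketch under the same import-path assumption:

import clayrs.content_analyzer as ca  # assumed import path

# Apply blur + jitter together with probability 0.3,
# otherwise leave the image untouched
random_block = ca.TorchRandomApply(
    transforms_list=[
        ca.TorchGaussianBlur(kernel_size=3),
        ca.TorchColorJitter(brightness=0.2),
    ],
    p=0.3,
)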

TorchRandomAutocontrast(p=0.5)

Bases: TorchBuiltInTransformer

Class that implements the RandomAutocontrast Transformer from torchvision. It accepts the same parameters you would pass when instantiating RandomAutocontrast directly from torchvision.

TorchVision documentation: here

Source code in clayrs/content_analyzer/information_processor/visual_preprocessors/torch_builtin_transformer.py
def __init__(self, p: float = 0.5):
    super().__init__(transforms.RandomAutocontrast(p))

TorchRandomChoice(transforms_list, p=None)

Bases: TorchBuiltInTransformer

Class that implements the RandomChoice Transformer from torchvision. It accepts the same parameters you would pass when instantiating RandomChoice directly from torchvision.

TorchVision documentation: here

The only difference w.r.t. the TorchVision implementation is that, while the original expects a list of Transformer objects as parameter, this implementation expects a list of ImageProcessor objects (i.e., other image pre-processors).

Source code in clayrs/content_analyzer/information_processor/visual_preprocessors/torch_builtin_transformer.py
def __init__(self, transforms_list: List[ImageProcessor], p: Any = None):
    super().__init__(transforms.RandomChoice(transforms_list, p))

TorchRandomCrop(size, padding=None, pad_if_needed=False, fill=0, padding_mode='constant')

Bases: TorchBuiltInTransformer

Class that implements the RandomCrop Transformer from torchvision. It accepts the same parameters you would pass when instantiating RandomCrop directly from torchvision.

TorchVision documentation: here

Source code in clayrs/content_analyzer/information_processor/visual_preprocessors/torch_builtin_transformer.py
def __init__(self, size: int, padding: Any = None, pad_if_needed: bool = False,
             fill: tuple = 0, padding_mode: str = "constant"):
    super().__init__(transforms.RandomCrop(size, padding, pad_if_needed, fill, padding_mode))

TorchRandomEqualize(p=0.5)

Bases: TorchBuiltInTransformer

Class that implements the RandomEqualize Transformer from torchvision. It accepts the same parameters you would pass when instantiating RandomEqualize directly from torchvision.

TorchVision documentation: here

Source code in clayrs/content_analyzer/information_processor/visual_preprocessors/torch_builtin_transformer.py
def __init__(self, p: float = 0.5):
    super().__init__(transforms.RandomEqualize(p))

TorchRandomErasing(p=0.5, scale=(0.02, 0.33), ratio=(0.3, 3.3), value=0, inplace=False)

Bases: TorchBuiltInTransformer

Class that implements the RandomErasing Transformer from torchvision. It accepts the same parameters you would pass when instantiating RandomErasing directly from torchvision.

TorchVision documentation: here

Source code in clayrs/content_analyzer/information_processor/visual_preprocessors/torch_builtin_transformer.py
def __init__(self, p: float = 0.5, scale: Tuple[float, float] = (0.02, 0.33),
             ratio: Tuple[float, float] = (0.3, 3.3), value: int = 0, inplace: bool = False):
    super().__init__(transforms.RandomErasing(p, scale, ratio, value, inplace))

TorchRandomGrayscale(p=0.1)

Bases: TorchBuiltInTransformer

Class that implements the RandomGrayscale Transformer from torchvision. It accepts the same parameters you would pass when instantiating RandomGrayscale directly from torchvision.

TorchVision documentation: here

Source code in clayrs/content_analyzer/information_processor/visual_preprocessors/torch_builtin_transformer.py
def __init__(self, p: float = 0.1):
    super().__init__(transforms.RandomGrayscale(p))

TorchRandomHorizontalFlip(p=0.5)

Bases: TorchBuiltInTransformer

Class that implements the RandomHorizontalFlip Transformer from torchvision. It accepts the same parameters you would pass when instantiating RandomHorizontalFlip directly from torchvision.

TorchVision documentation: here

Source code in clayrs/content_analyzer/information_processor/visual_preprocessors/torch_builtin_transformer.py
def __init__(self, p: float = 0.5):
    super().__init__(transforms.RandomHorizontalFlip(p))

TorchRandomInvert(p=0.5)

Bases: TorchBuiltInTransformer

Class that implements the RandomInvert Transformer from torchvision. It accepts the same parameters you would pass when instantiating RandomInvert directly from torchvision.

TorchVision documentation: here

Source code in clayrs/content_analyzer/information_processor/visual_preprocessors/torch_builtin_transformer.py
def __init__(self, p: float = 0.5):
    super().__init__(transforms.RandomInvert(p))

TorchRandomOrder(transforms_list)

Bases: TorchBuiltInTransformer

Class that implements the RandomOrder Transformer from torchvision. It accepts the same parameters you would pass when instantiating RandomOrder directly from torchvision.

TorchVision documentation: here

The only difference w.r.t. the TorchVision implementation is that, while the original expects a list of Transformer objects as parameter, this implementation expects a list of ImageProcessor objects (i.e., other image pre-processors).

Source code in clayrs/content_analyzer/information_processor/visual_preprocessors/torch_builtin_transformer.py
def __init__(self, transforms_list: List[ImageProcessor]):
    super().__init__(transforms.RandomOrder(transforms_list))

TorchRandomPerspective(distortion_scale=0.5, p=0.5, interpolation=InterpolationMode.BILINEAR, fill=0)

Bases: TorchBuiltInTransformer

Class that implements the RandomPerspective Transformer from torchvision. It accepts the same parameters you would pass when instantiating RandomPerspective directly from torchvision.

TorchVision documentation: here

Source code in clayrs/content_analyzer/information_processor/visual_preprocessors/torch_builtin_transformer.py
def __init__(self, distortion_scale: float = 0.5, p: float = 0.5,
             interpolation: InterpolationMode = InterpolationMode.BILINEAR, fill: Any = 0):
    super().__init__(transforms.RandomPerspective(distortion_scale, p, interpolation, fill))

TorchRandomPosterize(bits, p=0.5)

Bases: TorchBuiltInTransformer

Class that implements the RandomPosterize Transformer from torchvision. It accepts the same parameters you would pass when instantiating RandomPosterize directly from torchvision.

TorchVision documentation: here

Source code in clayrs/content_analyzer/information_processor/visual_preprocessors/torch_builtin_transformer.py
def __init__(self, bits: int, p: float = 0.5):
    super().__init__(transforms.RandomPosterize(bits, p))

TorchRandomResizedCrop(size, scale=(0.08, 1.0), ratio=(3.0 / 4.0, 4.0 / 3.0), interpolation=InterpolationMode.BILINEAR, antialias=None)

Bases: TorchBuiltInTransformer

Class that implements the RandomResizedCrop Transformer from torchvision. It accepts the same parameters you would pass when instantiating RandomResizedCrop directly from torchvision.

TorchVision documentation: here

Source code in clayrs/content_analyzer/information_processor/visual_preprocessors/torch_builtin_transformer.py
def __init__(self, size: int, scale: Tuple[float] = (0.08, 1.0), ratio: Tuple[float] = (3.0 / 4.0, 4.0 / 3.0),
             interpolation: InterpolationMode = InterpolationMode.BILINEAR, antialias: Optional[bool] = None):
    super().__init__(transforms.RandomResizedCrop(size, scale, ratio, interpolation, antialias))

TorchRandomRotation(degrees, interpolation=InterpolationMode.NEAREST, expand=False, center=None, fill=0)

Bases: TorchBuiltInTransformer

Class that implements the RandomRotation Transformer from torchvision. It accepts the same parameters you would pass when instantiating RandomRotation directly from torchvision.

TorchVision documentation: here

Source code in clayrs/content_analyzer/information_processor/visual_preprocessors/torch_builtin_transformer.py
def __init__(self, degrees: Any, interpolation: InterpolationMode = InterpolationMode.NEAREST, expand: Any = False,
             center: Any = None, fill: Any = 0):
    super().__init__(transforms.RandomRotation(degrees, interpolation, expand, center, fill))

TorchRandomSolarize(threshold, p=0.5)

Bases: TorchBuiltInTransformer

Class that implements the RandomSolarize Transformer from torchvision. It accepts the same parameters you would pass when instantiating RandomSolarize directly from torchvision.

TorchVision documentation: here

Source code in clayrs/content_analyzer/information_processor/visual_preprocessors/torch_builtin_transformer.py
def __init__(self, threshold: float, p: float = 0.5):
    super().__init__(transforms.RandomSolarize(threshold, p))

TorchRandomVerticalFlip(p=0.5)

Bases: TorchBuiltInTransformer

Class that implements the RandomVerticalFlip Transformer from torchvision. It accepts the same parameters you would pass when instantiating RandomVerticalFlip directly from torchvision.

TorchVision documentation: here

Source code in clayrs/content_analyzer/information_processor/visual_preprocessors/torch_builtin_transformer.py
def __init__(self, p: float = 0.5):
    super().__init__(transforms.RandomVerticalFlip(p))

TorchResize(size, interpolation=InterpolationMode.BILINEAR, max_size=None, antialias=None)

Bases: TorchBuiltInTransformer

Class that implements the Resize Transformer from torchvision. It accepts the same parameters you would pass when instantiating Resize directly from torchvision.

TorchVision documentation: here

Source code in clayrs/content_analyzer/information_processor/visual_preprocessors/torch_builtin_transformer.py
def __init__(self, size: int, interpolation=InterpolationMode.BILINEAR, max_size: Any = None,
             antialias: Any = None):
    super().__init__(transforms.Resize(size, interpolation, max_size, antialias))
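
Putting the transformers together, a typical inference-style chain might look as follows. This is a sketch under the same import-path assumption; the mean/std values are the usual ImageNet statistics, not ClayRS defaults.

import torch
import clayrs.content_analyzer as ca  # assumed import path

# Resize, crop, convert to float, then normalize
preprocessing = ca.TorchCompose(
    transforms_list=[
        ca.TorchResize(256),
        ca.TorchCenterCrop(224),
        ca.TorchConvertImageDtype(torch.float32),
        ca.TorchNormalize(mean=[0.485, 0.456, 0.406],
                          std=[0.229, 0.224, 0.225]),
    ]
)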

Torch augmenters

TorchAutoAugment(policy=AutoAugmentPolicy.IMAGENET, interpolation=InterpolationMode.NEAREST, fill=None)

Bases: TorchBuiltInTransformer

Class that implements the AutoAugment Transformer from torchvision. It accepts the same parameters you would pass when instantiating AutoAugment directly from torchvision.

TorchVision documentation: here

NOTE: the augmented result REPLACES the original input

Source code in clayrs/content_analyzer/information_processor/visual_preprocessors/torch_builtin_augmenter.py
def __init__(self, policy: AutoAugmentPolicy = AutoAugmentPolicy.IMAGENET,
             interpolation: InterpolationMode = InterpolationMode.NEAREST,
             fill: Optional[List[float]] = None):

    super().__init__(transforms.AutoAugment(policy, interpolation, fill))
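
A sketch (import path assumed as before; AutoAugmentPolicy is imported from torchvision itself):

import clayrs.content_analyzer as ca  # assumed import path
from torchvision.transforms import AutoAugmentPolicy

# The augmented image replaces the original input, so any transformer
# applied afterwards operates on the augmented image
auto_aug = ca.TorchAutoAugment(policy=AutoAugmentPolicy.IMAGENET)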

TorchRandAugment(num_ops=2, magnitude=9, num_magnitude_bins=31, interpolation=InterpolationMode.NEAREST, fill=None)

Bases: TorchBuiltInTransformer

Class that implements the RandAugment Transformer from torchvision. It accepts the same parameters you would pass when instantiating RandAugment directly from torchvision.

TorchVision documentation: here

NOTE: the augmented result REPLACES the original input

Source code in clayrs/content_analyzer/information_processor/visual_preprocessors/torch_builtin_augmenter.py
def __init__(
    self,
    num_ops: int = 2,
    magnitude: int = 9,
    num_magnitude_bins: int = 31,
    interpolation: InterpolationMode = InterpolationMode.NEAREST,
    fill: Optional[List[float]] = None,
) -> None:
    super().__init__(transforms.RandAugment(num_ops, magnitude, num_magnitude_bins, interpolation, fill))

TorchTrivialAugmentWide(num_magnitude_bins=31, interpolation=InterpolationMode.NEAREST, fill=None)

Bases: TorchBuiltInTransformer

Class that implements the TrivialAugmentWide Transformer from torchvision. It accepts the same parameters you would pass when instantiating TrivialAugmentWide directly from torchvision.

TorchVision documentation: here

NOTE: the augmented result REPLACES the original input

Source code in clayrs/content_analyzer/information_processor/visual_preprocessors/torch_builtin_augmenter.py
def __init__(
    self,
    num_magnitude_bins: int = 31,
    interpolation: InterpolationMode = InterpolationMode.NEAREST,
    fill: Optional[List[float]] = None,
) -> None:
    super().__init__(transforms.TrivialAugmentWide(num_magnitude_bins, interpolation, fill))
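
Since the augmenters share the TorchBuiltInTransformer base with the transformers above, they can in principle be mixed in a single chain. A sketch under the same import-path assumption:

import clayrs.content_analyzer as ca  # assumed import path

# Augment first; because the augmented image replaces the original,
# the resize and crop below operate on the augmented image
augment_pipeline = ca.TorchCompose(
    transforms_list=[
        ca.TorchTrivialAugmentWide(),
        ca.TorchResize(256),
        ca.TorchCenterCrop(224),
    ]
)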