Passes pre-commit hooks (#1514)
* Passes pre-commit hooks

* lints and tests pass

* cleanup in pyproject

* Cleanup

* Disabled tests for serialization v2
ternaus authored Feb 15, 2024
1 parent a6c2c34 commit e517d56
Showing 40 changed files with 2,398 additions and 1,790 deletions.
.github/workflows/ci.yml (4 changes: 3 additions & 1 deletion)
@@ -36,7 +36,9 @@ jobs:
       if: matrix.operating-system == 'macos-latest'
       run: pip install torch==2.0.1 torchvision==0.15.2
     - name: Install dependencies
-      run: pip install .[tests]
+      run: |
+        pip install .[tests]
+        pip install imgaug
     - name: Cleanup the build directory
       uses: JesseTG/[email protected]
       with:
.gitignore (3 changes: 3 additions & 0 deletions)
@@ -158,3 +158,6 @@ fabric.properties
 .idea

 conda_build/
+
+.vscode/
+conda.recipe/
.pre-commit-config.yaml (79 changes: 59 additions & 20 deletions)
@@ -1,37 +1,76 @@
+ci:
+  autofix_commit_msg: |
+    [pre-commit.ci] auto fixes from pre-commit.com hooks
+
+    for more information, see https://pre-commit.ci
+  autofix_prs: true
+  autoupdate_branch: ''
+  autoupdate_commit_msg: '[pre-commit.ci] pre-commit autoupdate'
+  autoupdate_schedule: weekly
+  skip: [ ]
+  submodules: false
+
 repos:
   - repo: https://github.com/pre-commit/pre-commit-hooks
     rev: v4.5.0
     hooks:
-      - id: trailing-whitespace
-      - id: end-of-file-fixer
-      - id: requirements-txt-fixer
-      - id: check-json
-      - id: check-yaml
+      - id: check-added-large-files
+      - id: check-ast
+      - id: check-builtin-literals
+      - id: check-case-conflict
+      - id: check-docstring-first
+      - id: check-executables-have-shebangs
+      - id: check-shebang-scripts-are-executable
+      - id: check-symlinks
+      - id: check-toml
+      - id: check-xml
+      - id: detect-private-key
+      - id: forbid-new-submodules
+      - id: forbid-submodules
+      - id: mixed-line-ending
+      - id: destroyed-symlinks
+      - id: fix-byte-order-marker
+      - id: pretty-format-json
+      - id: check-json
+      - id: check-yaml
+        args: [ --unsafe ]
+      - id: debug-statements
+      - id: end-of-file-fixer
+      - id: trailing-whitespace
+      - id: requirements-txt-fixer
+  - repo: https://github.com/asottile/pyupgrade
+    rev: v3.15.0
+    hooks:
+      - id: pyupgrade
+        args: ["--py38-plus"]
   - repo: https://github.com/pycqa/isort
-    rev: 5.11.5
+    rev: 5.13.2
     hooks:
       - id: isort
         args: [ "--profile", "black" ]
   - repo: https://github.com/psf/black
-    rev: 22.6.0
+    rev: 24.2.0
     hooks:
       - id: black
-        args: [ --config=black.toml ]
+        args: [ --config=pyproject.toml ]
-  - repo: https://github.com/pre-commit/mirrors-mypy
-    rev: v0.991
-    hooks:
-      - id: mypy
-        additional_dependencies: [ types-PyYAML, types-pkg-resources, types-setuptools ]
-        args:
-          [
-            --ignore-missing-imports,
-            --warn-no-return,
-            --warn-redundant-casts,
-          ]
+  - repo: https://github.com/pre-commit/pygrep-hooks
+    rev: v1.10.0
+    hooks:
+      - id: python-check-mock-methods
+      - id: python-use-type-annotations
+      - id: python-check-blanket-noqa
+      - id: python-use-type-annotations
+      - id: text-unicode-replacement-char
   - repo: https://github.com/PyCQA/flake8
-    rev: 21d3c70d676007470908d39b73f0521d39b3b997
+    rev: 7.0.0
     hooks:
       - id: flake8
-        additional_dependencies: [ flake8-docstrings==1.6.0 ]
-        exclude: ^setup.py
+  - repo: https://github.com/pre-commit/mirrors-mypy
+    rev: v1.8.0
+    hooks:
+      - id: mypy
+        files: ^albumentations/
+        additional_dependencies: [ types-PyYAML]
+        args:
+          [ --config-file=pyproject.toml ]
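A note on the new pyupgrade hook: it mechanically rewrites older Python idioms to the --py38-plus baseline, and one of its rewrites is visible in the albumentations/augmentations/blur/transforms.py diff below, where a str.format call becomes an f-string. A minimal sketch of that rewrite (check_ksize is a hypothetical helper, not code from this commit):

    def check_ksize(ksize: int) -> None:
        # Before pyupgrade: raise ValueError("ksize must be > 2. Got: {}".format(ksize))
        # After pyupgrade --py38-plus, the same message as an f-string:
        if ksize <= 2:
            raise ValueError(f"ksize must be > 2. Got: {ksize}")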
README.md (44 changes: 26 additions & 18 deletions)
@@ -17,24 +17,32 @@ Here is an example of how you can apply some [pixel-level](#pixel-level-transfor
 - The library is [**widely used**](#who-is-using-albumentations) in industry, deep learning research, machine learning competitions, and open source projects.

 ## Table of contents
-- [Authors](#authors)
-- [Installation](#installation)
-- [Documentation](#documentation)
-- [A simple example](#a-simple-example)
-- [Getting started](#getting-started)
-  - [I am new to image augmentation](#i-am-new-to-image-augmentation)
-  - [I want to use Albumentations for the specific task such as classification or segmentation](#i-want-to-use-albumentations-for-the-specific-task-such-as-classification-or-segmentation)
-  - [I want to know how to use Albumentations with deep learning frameworks](#i-want-to-know-how-to-use-albumentations-with-deep-learning-frameworks)
-  - [I want to explore augmentations and see Albumentations in action](#i-want-to-explore-augmentations-and-see-albumentations-in-action)
-- [Who is using Albumentations](#who-is-using-albumentations)
-- [List of augmentations](#list-of-augmentations)
-  - [Pixel-level transforms](#pixel-level-transforms)
-  - [Spatial-level transforms](#spatial-level-transforms)
-- [A few more examples of augmentations](#a-few-more-examples-of-augmentations)
-- [Benchmarking results](#benchmarking-results)
-- [Contributing](#contributing)
-- [Comments](#comments)
-- [Citing](#citing)
+- [Albumentations](#albumentations)
+  - [Why Albumentations](#why-albumentations)
+  - [Table of contents](#table-of-contents)
+  - [Authors](#authors)
+  - [Installation](#installation)
+  - [Documentation](#documentation)
+  - [A simple example](#a-simple-example)
+  - [Getting started](#getting-started)
+    - [I am new to image augmentation](#i-am-new-to-image-augmentation)
+    - [I want to use Albumentations for the specific task such as classification or segmentation](#i-want-to-use-albumentations-for-the-specific-task-such-as-classification-or-segmentation)
+    - [I want to know how to use Albumentations with deep learning frameworks](#i-want-to-know-how-to-use-albumentations-with-deep-learning-frameworks)
+    - [I want to explore augmentations and see Albumentations in action](#i-want-to-explore-augmentations-and-see-albumentations-in-action)
+  - [Who is using Albumentations](#who-is-using-albumentations)
+    - [See also:](#see-also)
+  - [List of augmentations](#list-of-augmentations)
+    - [Pixel-level transforms](#pixel-level-transforms)
+    - [Spatial-level transforms](#spatial-level-transforms)
+  - [A few more examples of augmentations](#a-few-more-examples-of-augmentations)
+    - [Semantic segmentation on the Inria dataset](#semantic-segmentation-on-the-inria-dataset)
+    - [Medical imaging](#medical-imaging)
+    - [Object detection and semantic segmentation on the Mapillary Vistas dataset](#object-detection-and-semantic-segmentation-on-the-mapillary-vistas-dataset)
+    - [Keypoints augmentation](#keypoints-augmentation)
+  - [Benchmarking results](#benchmarking-results)
+  - [Contributing](#contributing)
+  - [Comments](#comments)
+  - [Citing](#citing)

 ## Authors
 [**Alexander Buslaev** — Computer Vision Engineer at Mapbox](https://www.linkedin.com/in/al-buslaev/) | [Kaggle Master](https://www.kaggle.com/albuslaev)
albumentations/__init__.py (2 changes: 0 additions & 2 deletions)
@@ -1,5 +1,3 @@
-from __future__ import absolute_import
-
 __version__ = "1.3.1"

 from .augmentations import *
albumentations/augmentations/blur/transforms.py (50 changes: 23 additions & 27 deletions)
@@ -1,19 +1,15 @@
 import random
 import warnings
-from typing import Any, Dict, List, Sequence, Tuple
+from typing import Any, Dict, List, Optional, Sequence, Tuple, cast

 import cv2
 import numpy as np

 from albumentations import random_utils
 from albumentations.augmentations import functional as FMain
 from albumentations.augmentations.blur import functional as F
-from albumentations.core.transforms_interface import (
-    ImageOnlyTransform,
-    ScaleFloatType,
-    ScaleIntType,
-    to_tuple,
-)
+from albumentations.core.transforms_interface import ImageOnlyTransform, to_tuple
+from albumentations.core.types import ScaleFloatType, ScaleIntType

 __all__ = ["Blur", "MotionBlur", "GaussianBlur", "GlassBlur", "AdvancedBlur", "MedianBlur", "Defocus", "ZoomBlur"]
@@ -22,9 +18,9 @@ class Blur(ImageOnlyTransform):
     """Blur the input image using a random-sized kernel.

     Args:
-        blur_limit (int, (int, int)): maximum kernel size for blurring the input image.
+        blur_limit: maximum kernel size for blurring the input image.
             Should be in range [3, inf). Default: (3, 7).
-        p (float): probability of applying the transform. Default: 0.5.
+        p: probability of applying the transform. Default: 0.5.

     Targets:
         image
@@ -35,10 +31,10 @@ class Blur(ImageOnlyTransform):

     def __init__(self, blur_limit: ScaleIntType = 7, always_apply: bool = False, p: float = 0.5):
         super().__init__(always_apply, p)
-        self.blur_limit = to_tuple(blur_limit, 3)
+        self.blur_limit = cast(Tuple[int, int], to_tuple(blur_limit, 3))

-    def apply(self, img: np.ndarray, ksize: int = 3, **params) -> np.ndarray:
-        return F.blur(img, ksize)
+    def apply(self, img: np.ndarray, kernel: int = 3, **params: Any) -> np.ndarray:
+        return F.blur(img, kernel)

     def get_params(self) -> Dict[str, Any]:
         return {"ksize": int(random.choice(list(range(self.blur_limit[0], self.blur_limit[1] + 1, 2))))}
@@ -80,21 +76,21 @@ def __init__(
     def get_transform_init_args_names(self) -> Tuple[str, ...]:
         return super().get_transform_init_args_names() + ("allow_shifted",)

-    def apply(self, img: np.ndarray, kernel: np.ndarray = None, **params) -> np.ndarray:  # type: ignore
+    def apply(self, img: np.ndarray, kernel: Optional[np.ndarray] = None, **params: Any) -> np.ndarray:
         return FMain.convolve(img, kernel=kernel)

     def get_params(self) -> Dict[str, Any]:
         ksize = random.choice(list(range(self.blur_limit[0], self.blur_limit[1] + 1, 2)))
         if ksize <= 2:
-            raise ValueError("ksize must be > 2. Got: {}".format(ksize))
+            raise ValueError(f"ksize must be > 2. Got: {ksize}")
         kernel = np.zeros((ksize, ksize), dtype=np.uint8)
         x1, x2 = random.randint(0, ksize - 1), random.randint(0, ksize - 1)
         if x1 == x2:
             y1, y2 = random.sample(range(ksize), 2)
         else:
             y1, y2 = random.randint(0, ksize - 1), random.randint(0, ksize - 1)

-        def make_odd_val(v1, v2):
+        def make_odd_val(v1: int, v2: int) -> Tuple[int, int]:
             len_v = abs(v1 - v2) + 1
             if len_v % 2 != 1:
                 if v2 > v1:
@@ -113,8 +109,8 @@ def make_odd_val(v1, v2):
         center = ksize / 2 - 0.5
         dx = xc - center
         dy = yc - center
-        x1, x2 = [int(i - dx) for i in [x1, x2]]
-        y1, y2 = [int(i - dy) for i in [y1, y2]]
+        x1, x2 = (int(i - dx) for i in [x1, x2])
+        y1, y2 = (int(i - dy) for i in [y1, y2])

         cv2.line(kernel, (x1, y1), (x2, y2), 1, thickness=1)
@@ -143,8 +139,8 @@ def __init__(self, blur_limit: ScaleIntType = 7, always_apply: bool = False, p:
         if self.blur_limit[0] % 2 != 1 or self.blur_limit[1] % 2 != 1:
             raise ValueError("MedianBlur supports only odd blur limits.")

-    def apply(self, img: np.ndarray, ksize: int = 3, **params) -> np.ndarray:
-        return F.median_blur(img, ksize)
+    def apply(self, img: np.ndarray, kernel: int = 3, **params: Any) -> np.ndarray:
+        return F.median_blur(img, kernel)


 class GaussianBlur(ImageOnlyTransform):
@@ -176,7 +172,7 @@ def __init__(
         p: float = 0.5,
     ):
         super().__init__(always_apply, p)
-        self.blur_limit = to_tuple(blur_limit, 0)
+        self.blur_limit = cast(Tuple[int, int], to_tuple(blur_limit, 0))
         self.sigma_limit = to_tuple(sigma_limit if sigma_limit is not None else 0, 0)

         if self.blur_limit[0] == 0 and self.sigma_limit[0] == 0:
@@ -191,7 +187,7 @@ def __init__(
         ):
             raise ValueError("GaussianBlur supports only odd blur limits.")

-    def apply(self, img: np.ndarray, ksize: int = 3, sigma: float = 0, **params) -> np.ndarray:
+    def apply(self, img: np.ndarray, ksize: int = 3, sigma: float = 0, **params: Any) -> np.ndarray:
         return F.gaussian_blur(img, ksize, sigma=sigma)

     def get_params(self) -> Dict[str, float]:
@@ -258,7 +254,7 @@ def get_params_dependent_on_targets(self, params: Dict[str, Any]) -> Dict[str, n
         # generate array containing all necessary values for transformations
         width_pixels = img.shape[0] - self.max_delta * 2
         height_pixels = img.shape[1] - self.max_delta * 2
-        total_pixels = width_pixels * height_pixels
+        total_pixels = int(width_pixels * height_pixels)
         dxy = random_utils.randint(-self.max_delta, self.max_delta, size=(total_pixels, self.iterations, 2))

         return {"dxy": dxy}
@@ -315,7 +311,7 @@ def __init__(
         p: float = 0.5,
     ):
         super().__init__(always_apply, p)
-        self.blur_limit = to_tuple(blur_limit, 3)
+        self.blur_limit = cast(Tuple[int, int], to_tuple(blur_limit, 3))
         self.sigmaX_limit = self.__check_values(to_tuple(sigmaX_limit, 0.0), name="sigmaX_limit")
         self.sigmaY_limit = self.__check_values(to_tuple(sigmaY_limit, 0.0), name="sigmaY_limit")
         self.rotate_limit = to_tuple(rotate_limit)
@@ -341,7 +337,7 @@ def __check_values(
             raise ValueError(f"{name} values should be between {bounds}")
         return value

-    def apply(self, img: np.ndarray, kernel: np.ndarray = np.array(None), **params) -> np.ndarray:
+    def apply(self, img: np.ndarray, kernel: np.ndarray = np.array(None), **params: Any) -> np.ndarray:
         return FMain.convolve(img, kernel=kernel)

     def get_params(self) -> Dict[str, np.ndarray]:
@@ -372,7 +368,7 @@ def get_params(self) -> Dict[str, np.ndarray]:
         # Described in "Parameter Estimation For Multivariate Generalized Gaussian Distributions"
         kernel = np.exp(-0.5 * np.power(np.sum(np.dot(grid, inverse_sigma) * grid, 2), beta))
         # Add noise
-        kernel = kernel * noise_matrix
+        kernel *= noise_matrix

         # Normalize kernel
         kernel = kernel.astype(np.float32) / np.sum(kernel)
@@ -424,7 +420,7 @@ def __init__(
         if self.alias_blur[0] < 0:
             raise ValueError("Parameter alias_blur must be non-negative")

-    def apply(self, img: np.ndarray, radius: int = 3, alias_blur: float = 0.5, **params) -> np.ndarray:
+    def apply(self, img: np.ndarray, radius: int = 3, alias_blur: float = 0.5, **params: Any) -> np.ndarray:
         return F.defocus(img, radius, alias_blur)

     def get_params(self) -> Dict[str, Any]:
@@ -473,7 +469,7 @@ def __init__(
         if self.step_factor[0] <= 0:
             raise ValueError("Step factor must be positive")

-    def apply(self, img: np.ndarray, zoom_factors: np.ndarray = np.array(None), **params) -> np.ndarray:
+    def apply(self, img: np.ndarray, zoom_factors: np.ndarray = np.array(None), **params: Any) -> np.ndarray:
         assert zoom_factors is not None
         return F.zoom_blur(img, zoom_factors)

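The cast(Tuple[int, int], to_tuple(...)) pattern that recurs in this file exists to satisfy the stricter mypy hook added above: typing.cast changes nothing at runtime and only narrows the type that mypy sees. A minimal sketch, with to_tuple simplified to a stand-in rather than the library's actual implementation:

    from typing import Tuple, Union, cast

    ScaleIntType = Union[int, Tuple[int, int]]

    def to_tuple(param: ScaleIntType, low: int) -> Tuple[Union[int, float], Union[int, float]]:
        # Simplified stand-in: a scalar upper bound becomes a (low, high) pair.
        return (low, param) if isinstance(param, int) else param

    # cast() is a no-op at runtime; it tells mypy the concrete tuple type,
    # so indexing like blur_limit[0] type-checks as int downstream.
    blur_limit = cast(Tuple[int, int], to_tuple(7, 3))
    print(blur_limit)  # (3, 7)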
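The constructor and call signatures of these transforms are unchanged by the typing cleanup. A minimal usage sketch of the Blur transform, assuming albumentations and numpy are installed:

    import numpy as np
    import albumentations as A

    # Blur samples a random odd kernel size from blur_limit, (3, 7) by default.
    image = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)
    transform = A.Compose([A.Blur(blur_limit=7, p=1.0)])
    blurred = transform(image=image)["image"]
    print(blurred.shape)  # (128, 128, 3)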
[Diff for the remaining 34 changed files not shown.]