
Conversation

@antimora
Collaborator

@antimora antimora commented Dec 29, 2025

  • Adds support for asymmetric (explicit) padding in Conv1d, Conv2d, Conv3d, MaxPool1d, MaxPool2d, AvgPool1d, and AvgPool2d operations
  • Extends PaddingConfig1d and PaddingConfig2d with Explicit variants that specify per-side padding values
  • Updates ONNX import to correctly parse and convert asymmetric padding from ONNX models

Checklist

  • Confirmed that cargo run-checks command has been executed.
  • Made sure the book is up to date with changes in this PR.

Related Issues/PRs

Fixes #4253
Fixes #1370
Related #2676

Changes

burn-nn

  • Extended PaddingConfig1d with Explicit(left, right) variant
  • Extended PaddingConfig2d with Explicit(top, left, bottom, right) variant
  • Added forward pass implementations for asymmetric padding in all conv and pool modules
  • Added comprehensive unit tests for asymmetric padding

onnx-ir

  • Updated padding parsing to preserve full asymmetric padding values from ONNX pads attribute
  • Changed from symmetric-only [height, width] to full [top, left, bottom, right] representation
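As an illustration of the parsing change (a minimal sketch with assumed helper names, not the actual onnx-ir code): the ONNX `pads` attribute lists all begin values followed by all end values, so for 2D spatial dims the full representation maps directly to (top, left, bottom, right).

```rust
// Sketch: ONNX `pads` is [x1_begin, x2_begin, ..., x1_end, x2_end, ...].
// For a 2D op that is [top, left, bottom, right].

/// Padding is asymmetric when any begin value differs from its end value.
fn is_asymmetric(pads: &[i64]) -> bool {
    let n = pads.len() / 2;
    (0..n).any(|i| pads[i] != pads[i + n])
}

/// For 2D, the begin/end layout already reads as (top, left, bottom, right).
fn pads_to_tlbr(pads: [i64; 4]) -> (usize, usize, usize, usize) {
    (pads[0] as usize, pads[1] as usize, pads[2] as usize, pads[3] as usize)
}
```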

burn-import

  • Updated codegen to emit PaddingConfig::Explicit when ONNX models use asymmetric padding
  • Added ONNX integration tests with explicitly constructed ONNX models (no separate Pad nodes)

Testing

  • Unit tests for asymmetric padding in burn-nn (Conv1d, Conv2d, MaxPool1d, MaxPool2d, AvgPool1d, AvgPool2d)
  • ONNX integration tests with ReferenceEvaluator/InferenceSession verification
  • cargo test --package onnx-tests -- asymmetric passes

Breaking / Migration

The Explicit variants of the PaddingConfig types were expanded to take per-side values. For a quick migration of existing code, duplicate each padding value:

  • Explicit(n) -> Explicit(n, n)
  • Explicit(h, w) -> Explicit(h, w, h, w)
  • Explicit(d, h, w) -> Explicit(d, h, w, d, h, w)
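The 2D case of this migration can be sketched with self-contained stand-in types (local definitions for illustration, not Burn's actual enums):

```rust
// Stand-in for the old 2D variant: Explicit(h, w) padded both sides of each dim.
#[derive(Debug, PartialEq)]
enum OldPadding2d {
    Explicit(usize, usize),
}

// Stand-in for the new 2D variant: per-side values (top, left, bottom, right).
#[derive(Debug, PartialEq)]
enum NewPadding2d {
    Explicit(usize, usize, usize, usize),
}

/// Mechanical migration: duplicate each value, since the height value pads
/// top and bottom and the width value pads left and right.
fn migrate(old: OldPadding2d) -> NewPadding2d {
    match old {
        OldPadding2d::Explicit(h, w) => NewPadding2d::Explicit(h, w, h, w),
    }
}
```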

Follow-up / left-overs

This commit updates the padding configuration for 1D and 2D convolution and pooling modules to support asymmetric padding. The code now detects asymmetric padding and applies explicit padding using the pad operation before the main convolution or pooling operation. Tests and code generation are updated to handle and verify asymmetric padding cases. Asymmetric 3D padding is not supported and will panic at runtime.
Introduces new ONNX models and Rust and Python scripts to test asymmetric padding for AvgPool1d/2d, MaxPool1d/2d, and Conv1d/2d. Updates build.rs to include these models, adds a dependency on onnxscript, and extends the test modules to verify output shapes and sums for correctness.
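The decomposition described above (pad explicitly first, then run the op with no padding) can be checked at the shape level. A minimal 1D sketch, with function names invented for illustration:

```rust
/// Output length of a "valid" (unpadded) 1D convolution.
fn conv1d_out_len(input: usize, kernel: usize, stride: usize) -> usize {
    (input - kernel) / stride + 1
}

/// Asymmetric padding decomposes into an explicit pad of (left, right)
/// followed by a "valid" convolution on the enlarged input.
fn conv1d_out_len_asym(
    input: usize,
    kernel: usize,
    stride: usize,
    left: usize,
    right: usize,
) -> usize {
    conv1d_out_len(input + left + right, kernel, stride)
}
```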
Contributor

Copilot AI left a comment


Pull request overview

This PR adds comprehensive support for asymmetric (per-side) padding in convolution and pooling operations, addressing limitations where only symmetric padding was previously supported. The implementation correctly handles asymmetric padding by applying an explicit pad operation before the convolution/pooling operation.

Key Changes:

  • Extended PaddingConfig1d, PaddingConfig2d, and PaddingConfig3d to support per-side padding values (breaking API change)
  • Implemented asymmetric padding handling in Conv1d, Conv2d, MaxPool1d, MaxPool2d, AvgPool1d, and AvgPool2d (Conv3d panics with clear error as 3D padding is not yet supported)
  • Updated ONNX import pipeline to correctly parse and convert asymmetric padding from ONNX models
  • Added comprehensive test coverage including unit tests and ONNX integration tests

Reviewed changes

Copilot reviewed 35 out of 41 changed files in this pull request and generated 1 comment.

Per-file summary:
  • crates/onnx-ir/src/node/padding.rs: Extended PaddingConfig enums to support per-side padding, removed asymmetric padding restrictions, added helper methods
  • crates/onnx-ir/src/node/{conv,pool}*.rs: Updated tests to match new PaddingConfig API signatures
  • crates/burn-nn/src/padding.rs: Extended PaddingConfig enums, added is_asymmetric() and as_tuple() methods, updated documentation
  • crates/burn-nn/src/modules/conv/*.rs: Implemented asymmetric padding via explicit pad operations for Conv1d/Conv2d, added panic for Conv3d
  • crates/burn-nn/src/modules/pool/*.rs: Implemented asymmetric padding via explicit pad operations with appropriate pad values for each pooling type
  • crates/burn-import/src/burn/codegen.rs: Updated ToTokens implementations to generate code for new PaddingConfig signatures
  • crates/burn-import/src/burn/node/*.rs: Updated codegen for conv and pool nodes to handle asymmetric padding
  • crates/burn-import/onnx-tests/tests/*: Added comprehensive integration tests with PyTorch/ONNX models for asymmetric padding
  • crates/burn-import/onnx-tests/build.rs: Registered new test model files
  • crates/burn-import/onnx-tests/pyproject.toml: Added onnxscript dependency


Update to use 4-argument format (top, left, bottom, right) for explicit padding.
Member

@laggui laggui left a comment


Just a couple of changes needed, on top of the compilation error due to the breaking Explicit(...) enum variant change.

@khengari77

Hi @antimora @laggui,

I've also been looking into this to support models like Facial-Landmark-Detection, which use asymmetric padding.

Since native support in burn-nn involves breaking changes to the Explicit enum variants, I experimented with an alternative decomposition strategy strictly within burn-import.

Instead of changing the runtime PaddingConfig, my approach handles it during ONNX import code generation:

  1. onnx-ir: Parses asymmetric padding into a new internal Asymmetric variant (non-breaking for burn-core).
  2. burn-import: When generating code for Conv2d/Pool2d:
    • Configures the layer with PaddingConfig2d::Valid.
    • Injects a tensor.pad(...) call immediately before the layer in the generated forward method.

This allows existing Burn versions to run these models without waiting for a breaking release or modifying the core tensor API. I have a branch with the initial code as a PoC. I might need to add the tests, but it's pretty much ready already.

Here's my commit on my fork. I think this path is the one of least resistance with the fewest breaking changes: 258d47b

@antimora
Collaborator Author

antimora commented Jan 4, 2026

@khengari77 thanks for looking into this as well. I thought about going this route, but it only benefits ONNX users, and if users then want to shift to the Burn API they need to do the padding manually.

I believe for this release we are okay to introduce a breaking change.

@khengari77

@khengari77 thanks for looking into this as well. I thought about going this route, but it only benefits ONNX users, and if users then want to shift to the Burn API they need to do the padding manually.

I believe for this release we are okay to introduce a breaking change.

Actually I think you can create an abstraction over padding that internally decomposes in the same way. Again this will not introduce any breaking changes. Anyways if you see that the other choice is better in the long term then that's something else. Thanks for your response.

@laggui
Member

laggui commented Jan 5, 2026

Actually I think you can create an abstraction over padding that internally decomposes in the same way. Again this will not introduce any breaking changes. Anyways if you see that the other choice is better in the long term then that's something else. Thanks for your response.

You mean having an Asymmetric variant? e.g.

#[derive(Config, Debug, PartialEq)]
pub enum PaddingConfig1d {
    Same,
    Valid,
    Explicit(usize), // i.e., symmetric
    Asymmetric(usize, usize),
}

#[derive(Config, Debug, PartialEq)]
pub enum PaddingConfig2d {
    Same,
    Valid,
    Explicit(usize, usize), // i.e., symmetric
    Asymmetric(usize, usize, usize, usize),
}

that's not a bad alternative, but if we go with a separation between explicit symmetric and asymmetric padding I would still rename Explicit -> Symmetric which is also a breaking change.

@khengari77

but if we go with a separation between explicit symmetric and asymmetric padding I would still rename Explicit -> Symmetric which is also a breaking change

@laggui That's quite what I was trying to say. I think you can add Symmetric side by side with Explicit, mark Explicit as deprecated, and then remove it a few versions later. Just an opinion.

@laggui
Member

laggui commented Jan 5, 2026

One problem though with Symmetric and Asymmetric is that symmetry is a property that emerges from the values provided, not a fundamentally different kind of padding operation. Both represent explicit padding values. If you have PaddingConfig1d::Asymmetric(2, 2), that's just symmetric anyway, which is a bit redundant.

So I think having a single Explicit variant is still preferable, and we can provide constructor methods e.g. PaddingConfig1d::symmetric(pad) as a shorthand.
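The suggested design can be sketched as follows (a self-contained illustration of the idea, with hypothetical method bodies, not Burn's actual definitions):

```rust
/// Sketch of a single Explicit variant plus a symmetric shorthand, so that
/// symmetry is a property of the values rather than a separate variant.
#[allow(dead_code)]
#[derive(Debug, PartialEq)]
pub enum PaddingConfig1d {
    Same,
    Valid,
    Explicit(usize, usize), // (left, right)
}

impl PaddingConfig1d {
    /// Shorthand constructor for symmetric explicit padding.
    pub fn symmetric(pad: usize) -> Self {
        Self::Explicit(pad, pad)
    }

    /// Asymmetry emerges from the values provided.
    pub fn is_asymmetric(&self) -> bool {
        matches!(self, Self::Explicit(l, r) if l != r)
    }
}
```

With this shape, `PaddingConfig1d::symmetric(2)` and `PaddingConfig1d::Explicit(2, 2)` are the same value, avoiding the redundant `Asymmetric(2, 2)` case.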

Address review feedback from @laggui:
- Extended ConvOptions with optional `padding_out` field for asymmetric padding
- Moved asymmetric padding logic from module forward() to functional conv1d/conv2d
- Functional layer now applies explicit pad operation when padding is asymmetric
- Module layer uses ConvOptions::with_padding_out() instead of manual padding

This ensures users calling functional conv1d/conv2d directly also benefit from
asymmetric padding support, not just module users.
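The `padding_out` field described above can be sketched with a simplified stand-in struct (the field name comes from the PR; the surrounding type and method are illustrative only):

```rust
/// Simplified stand-in for the extended conv options.
struct ConvOptions<const N: usize> {
    /// Padding at the start of each dimension.
    padding: [usize; N],
    /// Padding at the end of each dimension; `None` means symmetric
    /// (end equals start).
    padding_out: Option<[usize; N]>,
}

impl<const N: usize> ConvOptions<N> {
    /// Resolve per-dimension (begin, end) padding pairs.
    fn padding_pairs(&self) -> [(usize, usize); N] {
        let end = self.padding_out.unwrap_or(self.padding);
        core::array::from_fn(|i| (self.padding[i], end[i]))
    }
}
```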
Added TODO comments in avg_pool1d, avg_pool2d, max_pool1d, and max_pool2d modules to indicate that handling of asymmetric padding should be moved to the functional level via PoolOptions, referencing issue tracel-ai#4362. Also performed a minor import formatting cleanup in conv1d.rs.
Add calculate_padding_*_pairs() methods that compute asymmetric padding
for Same mode when the total padding is odd (e.g., even kernel sizes).
This follows ONNX convention where extra padding goes to the end.

Changes:
- Add calculate_same_padding() helper in padding.rs
- Add calculate_padding_1d_pair(), calculate_padding_2d_pairs(),
  calculate_padding_3d_pairs() methods
- Update conv1d, conv2d, and all pool modules to use pairs methods
- Remove check_same_padding_support restriction from these modules
- Same padding with even kernels now works correctly
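The pair computation for Same mode can be sketched as follows (a 1D illustration under the ONNX convention named in the commit message; the function name is invented, not the PR's actual helper):

```rust
/// Compute the (begin, end) padding pair that makes the output length equal
/// ceil(input / stride) ("Same" mode). When the total required padding is
/// odd (e.g. even kernel sizes), the extra unit goes at the end, following
/// the ONNX convention.
fn same_padding_pair(input: usize, kernel: usize, stride: usize, dilation: usize) -> (usize, usize) {
    let effective_kernel = dilation * (kernel - 1) + 1;
    let output = (input + stride - 1) / stride; // ceil(input / stride)
    let total = ((output - 1) * stride + effective_kernel).saturating_sub(input);
    let begin = total / 2;
    let end = total - begin; // odd totals put the extra padding at the end
    (begin, end)
}
```

For example, an even kernel of 2 over an input of length 5 (stride 1, no dilation) needs one unit of padding in total, and it lands at the end: (0, 1).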
@antimora antimora changed the title Add asymmetric padding support for conv and pool operations [BREAKING] Add asymmetric padding support for conv and pool operations Jan 21, 2026
@antimora antimora requested review from Copilot and laggui January 21, 2026 17:47
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 43 out of 49 changed files in this pull request and generated 9 comments.



Update to new 4-argument format: Explicit(top, left, bottom, right)
- Clarify dead_code comment in padding.rs
- Update pad dimension comments in module.rs for 1D and 2D
- Clarify pad comment in max_pool1d.rs
- Update test comments in maxpool and avg_pool tests with ONNX pads format
Resolve file location conflicts for asymmetric padding test files
after burn-import/onnx-tests was renamed to burn-onnx/onnx-tests
in PR tracel-ai#4361.
@antimora
Collaborator Author

@laggui, ready for your review; I have updated/merged with burn-onnx rename.

@codecov

codecov bot commented Jan 21, 2026

Codecov Report

❌ Patch coverage is 96.85349% with 32 lines in your changes missing coverage. Please review.
✅ Project coverage is 68.79%. Comparing base (5843c6e) to head (622d51e).

Files with missing lines Patch % Lines
crates/onnx-ir/src/node/padding.rs 86.32% 16 Missing ⚠️
crates/burn-nn/src/padding.rs 97.93% 5 Missing ⚠️
crates/burn-ir/src/operation.rs 0.00% 3 Missing ⚠️
crates/burn-nn/src/modules/conv/conv3d.rs 40.00% 3 Missing ⚠️
crates/burn-onnx/onnx-tests/tests/avg_pool/mod.rs 92.85% 2 Missing ⚠️
crates/burn-onnx/onnx-tests/tests/maxpool/mod.rs 92.85% 2 Missing ⚠️
crates/burn-tensor/src/tensor/module.rs 97.43% 1 Missing ⚠️

❌ Your project check has failed because the head coverage (68.79%) is below the target coverage (80.00%). You can increase the head coverage or adjust the target coverage.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #4263      +/-   ##
==========================================
+ Coverage   68.65%   68.79%   +0.14%     
==========================================
  Files        1411     1411              
  Lines      168676   169509     +833     
==========================================
+ Hits       115804   116620     +816     
- Misses      52872    52889      +17     

☔ View full report in Codecov by Sentry.

Member

@laggui laggui left a comment


@laggui, ready for your review; I have updated/merged with burn-onnx rename.

Thanks!

Comment on lines +104 to +107
/// Padding at the end of each dimension (bottom/right for 2D).
/// If `None`, uses the same values as `padding` (symmetric).
/// If `Some`, enables asymmetric padding where start != end.
pub padding_out: Option<[usize; N]>,
Member

@laggui laggui Jan 23, 2026


Hmmm, not sure this is required to allow asymmetric padding, but I'll have to review this more carefully on Monday because too many changes have been made since my last review comment.

@laggui laggui self-requested a review January 23, 2026 20:17

Labels

enhancement Enhance existing features onnx


Development

Successfully merging this pull request may close these issues.

  • Asymmetric padding is not supported
  • Asymmetric padding is not supported

3 participants